
Recommended Posts

Posted

FOR EVERYTHING.

Ok, jokes aside: at work we have a 4-server cluster for compute nodes (each box is a Dell R940 with quad Xeons and 1 TB of memory per box), and we also run a 5-server cluster for Ceph storage. This handles a lot of production VMs, and we have an isolated sandbox for other things. We also take advantage of Proxmox Backup Server (co-located) for everything we do in our DC. As far as home goes, I use Proxmox for my remote gaming computer and home DB server, and another Proxmox server of mine is co-located at a friend's house, serving as a production and dev box.

  • Like 1
Posted

I'm running an Ubuntu VM which hosts all of my Docker containers, plus HexOS to do NAS things, and I will migrate my Windows Server over to Proxmox as well; the Windows VM will only run when I need to do Windows thingy things (mainly game servers).

  • Like 1
Posted

I use one NUC 11 with Proxmox to run Windows 11 VDIs. Additionally, I have a Mini PC with Proxmox and an Intel N100 (low power), which I use to run my Docker containers. This machine is always on. My third machine is an Aoostar WTR Pro, on which I run HexOS with SATA passthrough and a Proxmox Backup server.

These three machines are the "production" setup in my homelab. Additionally, I have one more PC with Proxmox for testing purposes.

  • Like 1
Posted

We are actually building a Proxmox setup to run multiple HexOS VMs for additional developers we are hiring (for access to dev/test).  We're actually newbies at Proxmox, but it looks pretty straightforward!

Posted
21 minutes ago, jonp said:

We are actually building a Proxmox setup to run multiple HexOS VMs for additional developers we are hiring (for access to dev/test).  We're actually newbies at Proxmox, but it looks pretty straightforward!

Once you've figured out the best/optimal settings for your use case(s), could you possibly release a recommended/curated settings guide? I would be very interested in this. It probably won't be the optimal configuration for every single use case, but it should be a good baseline for most, especially since most YouTube videos about setting up TrueNAS recommend outdated and suboptimal settings.

Posted
40 minutes ago, PsychoWards said:

Once you've figured out the best/optimal settings for your use case(s), could you possibly release a recommended/curated settings guide? I would be very interested in this. It probably won't be the optimal configuration for every single use case, but it should be a good baseline for most, especially since most YouTube videos about setting up TrueNAS recommend outdated and suboptimal settings.

Yeah, it's simple, but once you get into clusters and VM management inside of that cluster it gets fun

Posted
49 minutes ago, PsychoWards said:

Once you've figured out the best/optimal settings for your use case(s), could you possibly release a recommended/curated settings guide? I would be very interested in this. It probably won't be the optimal configuration for every single use case, but it should be a good baseline for most, especially since most YouTube videos about setting up TrueNAS recommend outdated and suboptimal settings.

We can do that, but our use case for internal dev/testing of UI/UX/Deck is a little unique.  We will use fully virtual storage devices for all of this, which means we won't be testing against hardware issues with storage devices in this environment.  We test storage device failures/replacements/expansions mainly on dedicated physical hardware.  That said, it would be possible for us to pass through the entire storage controller to VMs for a more native experience.  Might need to ping Tom Lawrence for some advice on this setup 😉

Posted

Running HexOS in a Proxmox VM is straightforward, and it would be easy to create a simple guide for that.
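For anyone who wants a starting point before such a guide exists, here's a minimal sketch of creating that kind of VM from the Proxmox shell. The VM ID, storage names, and ISO filename are placeholders; adjust them for your host:

```shell
# All IDs and names below are examples; substitute your own.
qm create 100 \
    --name hexos \
    --memory 8192 --cores 4 \
    --ostype l26 \
    --scsihw virtio-scsi-pci \
    --scsi0 local-lvm:32 \
    --net0 virtio,bridge=vmbr0 \
    --ide2 local:iso/hexos-installer.iso,media=cdrom \
    --boot 'order=ide2;scsi0'
qm start 100
```

After the install finishes, detach the ISO and change the boot order so the VM boots from the virtual disk.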

Hardware passthrough, however, can quickly become more complex. When it works immediately, it seems simple; but when you encounter errors, it often leads to lengthy troubleshooting and frustration, especially if you're not experienced with Linux or virtualization. Consumer-grade hardware sometimes lacks full BIOS support for virtualization, particularly older hardware. For older server hardware, it's a different story.

SATA passthrough is relatively simple. Under Datacenter → Resource Mappings, you can easily map the SATA controller(s); when creating the VM, you can then add the mapping as a PCI device.
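The same mapping can be done from the host shell; a sketch, assuming the controller sits at the (hypothetical) PCI address 0000:00:17.0 and the VM is ID 100:

```shell
# Locate the SATA controller's PCI address on the host:
lspci -nn | grep -i sata
# e.g. 00:17.0 SATA controller [0106]: Intel Corporation ...

# Attach the whole controller to VM 100 (address and VM ID are examples):
qm set 100 --hostpci0 0000:00:17.0
```

Note that the host loses access to every disk on that controller once the VM starts, so don't pass through the controller your Proxmox boot disk hangs off.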

Posted
7 hours ago, Sonic said:

Hardware passthrough, however, can quickly become more complex. When it works immediately, it seems simple; but when you encounter errors, it often leads to lengthy troubleshooting and frustration, especially if you're not experienced with Linux or virtualization. Consumer-grade hardware sometimes lacks full BIOS support for virtualization, particularly older hardware. For older server hardware, it's a different story.

QFT!  VFIO and specifically GPU pass through has been a major focus of mine for almost a decade of my career.  VFIO is awesome, but a lot of hardware is just not built with it in mind.  For example, when you shut down a VM, you're not really shutting down any hardware.  But a lot of GPU vendors didn't bother to put a reset mechanism into the GPU because the reset would be a full shutdown or reboot of the physical hardware.  So when you go to start a VM back up with a GPU like that, it just hangs or doesn't initialize the display.  Not sure how prevalent that is today, but between 2014-2022, I remember it being all too common.  Thankfully, projects like vendor-reset have added a plethora of quirks that help overcome this for troublesome equipment, but it's not a silver bullet.

Other complications can arise just by turning on the IOMMU setting in the BIOS.  I remember there was a time where if you had a Marvell storage controller and you just turned IOMMU on, the whole host would crash.  Not sure if that's still the case in 2025 with more modern stuff, but it wouldn't surprise me if there aren't still some examples of that out there.
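For anyone debugging this, a quick way to see how the firmware laid out the IOMMU groups on a host (empty output usually means IOMMU is disabled in the BIOS or on the kernel command line):

```shell
#!/bin/bash
# Print each IOMMU group and the PCI devices it contains.
shopt -s nullglob
for group in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${group##*/}:"
    for dev in "$group"/devices/*; do
        echo "  ${dev##*/}"
    done
done
```

Devices that share a group with the one you want to pass through generally have to move to the same VM together, which is why a poor group layout can sink a passthrough plan before you start.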

Bottom line:  I don't think most hardware manufacturers fully test virtualization (especially IOMMU) on consumer-grade electronics.  So "your mileage may vary" is an appropriate statement here.

I swear I could write a novel about VFIO, GPU pass through, and IOMMU groups...

Posted

Currently running a HexOS VM & a TrueNAS VM while waiting until pool import is native before reinstalling everything onto HexOS bare metal (I had an existing pool prior to the launch).

Other than some install issues (documented workarounds elsewhere), no issues to date.

Posted
18 hours ago, jonp said:

We are actually building a Proxmox setup to run multiple HexOS VMs for additional developers we are hiring (for access to dev/test).  We're actually newbies at Proxmox, but it looks pretty straightforward!

I feel like someone needs to do a test of Hex-Ception! How many layers down of installing HexOS VMs can you go before it breaks, haha!

  • Baremetal Hex Install
    • Hex VM #1
      • HexVM #2
        • Hex VM #3...
  • Haha 1
Posted
3 hours ago, jonp said:

QFT!  VFIO and specifically GPU pass through has been a major focus of mine for almost a decade of my career.  VFIO is awesome, but a lot of hardware is just not built with it in mind.  For example, when you shut down a VM, you're not really shutting down any hardware.  But a lot of GPU vendors didn't bother to put a reset mechanism into the GPU because the reset would be a full shutdown or reboot of the physical hardware.  So when you go to start a VM back up with a GPU like that, it just hangs or doesn't initialize the display.  Not sure how prevalent that is today, but between 2014-2022, I remember it being all too common.  Thankfully, projects like vendor-reset have added a plethora of quirks that help overcome this for troublesome equipment, but it's not a silver bullet.

Other complications can arise just by turning on the IOMMU setting in the BIOS.  I remember there was a time where if you had a Marvell storage controller and you just turned IOMMU on, the whole host would crash.  Not sure if that's still the case in 2025 with more modern stuff, but it wouldn't surprise me if there aren't still some examples of that out there.

Bottom line:  I don't think most hardware manufacturers fully test virtualization (especially IOMMU) on consumer-grade electronics.  So "your mileage may vary" is an appropriate statement here.

I swear I could write a novel about VFIO, GPU pass through, and IOMMU groups...

So you know the pain and frustration 🙂. But when it works, it's great.

Posted
1 hour ago, Sonic said:

So you know the pain and frustration 🙂. But when it works, it's great.

Exactly!  It's one of the reasons I want to build a comprehensive hardware database for HexOS (opt-in, obviously).  There are a lot of hardware details that aren't exposed at purchase time that we could start to map for folks who took the plunge without those insights.  Boot up, run a little scan of how the IOMMU groups lay out, what happens when you try to pass through a GPU, etc., and then create a filterable database for users to search and get more details.  So many ways to improve UX here beyond just the software.

  • Like 5
Posted
On 1/12/2025 at 6:18 AM, Sonic said:

I use one NUC 11 with Proxmox to run Windows 11 VDIs. Additionally, I have a Mini PC with Proxmox and an Intel N100 (low power), which I use to run my Docker containers. This machine is always on. My third machine is an Aoostar WTR Pro, on which I run HexOS with SATA passthrough and a Proxmox Backup server.

These three machines are the "production" setup in my homelab. Additionally, I have one more PC with Proxmox for testing purposes.

How are you liking the Aoostar? I just picked mine up to run HexOS on, and so far I am very pleased with it!

Posted
22 hours ago, jonp said:

Exactly!  It's one of the reasons I want to build a comprehensive hardware database for HexOS (opt-in, obviously).  There are a lot of hardware details that aren't exposed at purchase time that we could start to map for folks who took the plunge without those insights.  Boot up, run a little scan of how the IOMMU groups lay out, what happens when you try to pass through a GPU, etc., and then create a filterable database for users to search and get more details.  So many ways to improve UX here beyond just the software.

I really like this approach. It may not catch everything, but it would help a large portion of users, and coverage would only grow over time.

  • Like 1
Posted

I mainly use Proxmox for LXC containers. I have 10 LXCs and 0 VMs. 😅

One LXC runs my Docker stack (I know it's not best practice); it's just so convenient to not give Docker full access to your system while also not running a VM for Docker alone.

I think they are the best mix between VMs and Docker containers. I hope that HexOS will support them too in the future, now that they will be included in the next TrueNAS update!
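For reference, Docker inside an LXC needs the nesting feature (and often keyctl) enabled on the container. A sketch of creating one from the Proxmox shell; the container ID, template filename, and resource sizes are placeholders:

```shell
# Container ID, template, and sizes are examples; adjust to your setup.
pct create 200 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
    --hostname dockerstack \
    --memory 2048 --cores 2 \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp \
    --features nesting=1,keyctl=1 \
    --unprivileged 1
pct start 200
```

Running the container unprivileged plus nesting is the usual middle ground the post describes: Docker doesn't get full host access, but you also avoid the overhead of a dedicated VM.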

Posted
15 hours ago, Dylan said:

How are you liking the Aoostar? I just picked mine up to run HexOS on, and so far I am very pleased with it!

Hi Dylan,

I'm really happy with it. It feels very solid and stable. Of course, there are a few things that could be improved, such as making the HDDs hot-swappable and adding 10 GbE networking. But at this price point, the WTR Pro is more than okay. It offers a great balance of performance and energy efficiency. This is an ideal machine for starting with a home server.

I have some experience with Chinese mini PC brands, and often their customer service, BIOS/software quality, communication, and documentation are not up to the standard we’re used to in the EU/US. That said, Aoostar is doing a great job trying to improve in these areas. I don’t think they’re a very large company yet. By the way, the BIOS on the WTR Pro is very extensive.

In summary: very satisfied!

By the way, it seems there’s a 6-bay WTR with 10 GbE coming in Q1 2025.

  • Thanks 1
Posted
On 1/14/2025 at 9:30 PM, jonp said:

Exactly!  It's one of the reasons I want to build a comprehensive hardware database for HexOS (opt-in, obviously).  There are a lot of hardware details that aren't exposed at purchase time that we could start to map for folks who took the plunge without those insights.  Boot up, run a little scan of how the IOMMU groups lay out, what happens when you try to pass through a GPU, etc., and then create a filterable database for users to search and get more details.  So many ways to improve UX here beyond just the software.

I like the idea of a comprehensive hardware database for HexOS. Count me in for support!

What do you think about collaborating with hardware vendors? You could aim for HexOS-certified systems. HexOS is simple to use, and pairing it with hardware that’s affordable and just works out of the box would be a big win. Especially if the price point is lower than Synology or QNAP, it would make HexOS very appealing to the average user.

iXsystems already provides TrueNAS hardware, but their target audience and support level are very different.

Personally, I enjoy researching and tweaking hardware, but most of my friends and family just want tech that works seamlessly. Combining HexOS with pre-certified, plug-and-play hardware could make it the go-to option for those users!

Posted
On 1/14/2025 at 3:30 PM, jonp said:

Exactly!  It's one of the reasons I want to build a comprehensive hardware database for HexOS (opt-in, obviously).  There are a lot of hardware details that aren't exposed at purchase time that we could start to map for folks who took the plunge without those insights.  Boot up, run a little scan of how the IOMMU groups lay out, what happens when you try to pass through a GPU, etc., and then create a filterable database for users to search and get more details.  So many ways to improve UX here beyond just the software.

That is an awesome idea, but it would be very time-intensive with all the different systems out there. It would be a major undertaking, but in the long run well worth it. I design and program automation control equipment; at the moment we are fighting to make older-design and older-firmware equipment work with the new designs, and it's such a pain getting old and new to work together. I don't envy you doing that at all.
