Everything posted by Mawson
-
Trying to understand what we are getting into, some (hopefully) quick questions...
Mawson replied to PX-HexOS's question in OS & Features
1. Yes, HexOS is intended to be run "bare metal".
2 & 3. HexOS has/will have a couple of ways that you can run additional software: Docker and virtual machines. Basically, Docker containers are pre-packaged apps designed for easy virtualization, and a VM is what you would use if you wanted to run a full Linux, Windows, or OSX install. In addition to eventually supporting Docker images in general, HexOS will have a catalog of Docker apps that are 'curated' so that the install and setup are basically just a single click. Currently Plex and Immich (photo backup) have been curated (see the sketch at the bottom of this post for what a manual Docker deployment looks like).
4. Currently features are pretty limited, but development will be ramping up in the coming months. Current features include basic file server (NAS) duties via SMB, and the two curated apps mentioned above. There is a ton of stuff you can do in the TrueNAS interface, and lots of users here have been posting about their experiences with that, so there are some guides to follow. Eventually, though, you shouldn't have to use the TrueNAS interface for much of anything.
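To make the Docker part a little more concrete, here's a rough sketch of what deploying a container looks like using the Docker SDK for Python. This is purely illustrative; the image name, ports, and host paths are placeholders I picked for the example, and HexOS' curated installs do all of this for you behind a single click.

```python
# Rough sketch of a manual container deployment with the Docker SDK for
# Python (pip install docker). Paths, ports, and timezone are placeholders.
import docker

client = docker.from_env()  # connects to the local Docker daemon

container = client.containers.run(
    "plexinc/pms-docker",                   # Plex image on Docker Hub
    detach=True,                            # run in the background
    name="plex",
    ports={"32400/tcp": 32400},             # Plex web/app port
    environment={"TZ": "America/Los_Angeles"},
    volumes={
        "/srv/plex/config": {"bind": "/config", "mode": "rw"},  # app config
        "/srv/media":       {"bind": "/data",   "mode": "ro"},  # your library
    },
)
print(container.name, container.status)
```
-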
@SignedAdam Firstly, I want to congratulate you on your success in getting HexOS running on that ReadyNAS machine. That is an impressive bit of tinkering, even if it is not an ideal piece of hardware for HexOS! With that said, I think you may have some misconceptions about what HexOS is, its intended role, and how it works.

This is simply incorrect. HexOS is a fully functional TrueNAS install, with the HexOS UI connector added on. It literally is TrueNAS in every sense. The structure of HexOS' software stack is this: Linux, with software packages such as ZFS on top, then the TrueNAS API. Above the API, HexOS and the TrueNAS GUI co-exist. They operate at the same level (there's a quick example of what talking to that API looks like at the bottom of this post).

If you need a NAS OS that is light on RAM usage, then HexOS may not be the best product for that use case. HexOS' mission is to make high-performance NAS and home hosting accessible to the masses by adding ease of use to the existing power, security, flexibility, etc. of TrueNAS (a core part of which is the ZFS filesystem). The only reason the HexOS project is able to do that without taking 10+ years of development is because TrueNAS is a mature and stable product. The team's mission is to bring TrueNAS to more users, not to expand TrueNAS' hardware compatibility.

I understand that you want to be able to have your cake and eat it too, but in the case of using HexOS on something like a ReadyNAS box it may not be possible to get useful performance. We can't win every battle, and in this case the hardware specs may simply be insufficient for a good experience. I fully support experimentation and trying things, and I'm very pleased to see that it is possible to get TrueNAS installed on those units, but I want to encourage you to be realistic about what is and isn't possible.

So by all means, please continue to experiment! I want to see what you can accomplish! I just want you to go into it understanding that you're going off the beaten path and official support should not be expected! 😅
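To make the "both sit above the same API" point a bit more concrete, here's a tiny sketch of what talking to that API layer looks like from the outside. This assumes a TrueNAS SCALE box and an API key generated in the web UI; the host address is a placeholder, and this is just an illustration of the layering, not anything HexOS-specific.

```python
import requests

TRUENAS_HOST = "https://192.168.1.50"   # placeholder address for your NAS
API_KEY = "paste-your-api-key-here"     # generated in the TrueNAS web UI

# The TrueNAS GUI (and, per the description above, HexOS) drive the system
# through this same API layer. This just reads basic system info.
resp = requests.get(
    f"{TRUENAS_HOST}/api/v2.0/system/info",
    headers={"Authorization": f"Bearer {API_KEY}"},
    verify=False,  # only acceptable for a self-signed cert on a LAN box
)
resp.raise_for_status()
info = resp.json()
print(info.get("hostname"), info.get("version"))
```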
-
Eh... Portland is like 3 hours away, so not super close. There's just a ton of techies and tech companies around.
-
And proud of it! 🤣
-
That's a fair point. I'm in the Seattle area so I may be a bit spoiled in that regard.
-
Unless I become fabulously wealthy soon, the idea phase is about as far as it is likely to go 😆 Oh, for sure! I'm sure it's possible to modify the OEM expansion shell to allow more space. Even just a bit of extra thickness would likely allow for drives to be mounted on both sides of the PCB.
-
Also, for general shopping tips, don't neglect looking locally using FB marketplace, Craigslist, etc!
-
@ubergeek, at about 10:30 you mention putting 3.5" drives in 5.25" ODD bays, and I want to share some info I have on that. There are actually mount adapters that allow for very efficient use of ODD bays for drives. There are all sorts, including ones for crazy numbers of 2.5" drives, but I'll stick with linking a few examples that work with 3.5" drives. I haven't used all of these personally, but the concept is the main thing I want to share.

2x 5.25" ODD to 3x 3.5" HDD
Fixed mounting (CHEAP option!): https://a.co/d/0cKp1eu
Hot swap: https://a.co/d/ingfbWT

3x 5.25" ODD to 5x 3.5" HDD (yes, FIVE!)
Fixed mounting (CHEAP option!): https://a.co/d/aTJ63CU
Hot swap: https://a.co/d/2wbgm6G (I have this unit and it works great)

Here are a couple of links to manufacturers' product categories for this sort of thing. Poke around on these pages and you can find some really weird stuff!
IcyDock: https://global.icydock.com/ (docks like this are basically ALL they do!)
Silverstone: https://www.silverstonetek.com/en/product/storage/?filter=M2_Devices,25_Devices,35_Devices,Slim_optical_drives
StarTech: https://www.startech.com/en-eu/hdd/mobile-rack
-
So I was thinking, it could be a fun project to build a storage module for the Framework 16 that goes in the same spot as the dedicated GPU. This is the Expansion Shell module. If I'm doing my math right, there should be enough space between the fans to house at least two, possibly four, 2280 m.2 drives.

Hypothetically, one could boot off the 2230 slot on the main board and leave the onboard 2280 available to use for storage. That would get us to the minimum of 3 drives for ZFS. Using 8TB drives we would end up with a 16TB pool. If we can fit 4 drives in the expansion shell, then we're looking at 32TB!

The expansion bay has an x8 interface, so that would give each of the drives out there an x2 connection. Though I don't imagine it supports bifurcation, so there would likely need to be a PCIe switch chip.

------------------------------------------------------------------

Aaand after writing all of this I've just discovered that FW has a dual m.2 reference design on their GitHub! 🤣 So it's totally feasible, at least for 2 additional drives. https://github.com/FrameworkComputer/ExpansionBay/tree/main/Dual SSD Reference Design

Getting 4 in there might not be possible without changing the thickness or other dimensions of the module... But anyway, I hope some of you enjoy the solution looking for a problem that this ADHD tangent of mine has created. LOL
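For anyone who wants to sanity-check the napkin math above, here it is spelled out. The single-parity RAIDZ1 layout and the even lane split across the bay's drives are my assumptions for this tangent, not confirmed Framework specs.

```python
# Napkin math for the expansion-bay storage idea (assumptions, not specs):
# - the bay's x8 link gets split evenly across the drives housed in it
# - pool layout is RAIDZ1 (single parity), so usable space is (n - 1) drives
DRIVE_TB = 8

def lanes_per_drive(total_lanes: int, drives_in_bay: int) -> float:
    return total_lanes / drives_in_bay

def raidz1_usable_tb(total_drives: int, drive_tb: float = DRIVE_TB) -> float:
    return (total_drives - 1) * drive_tb

print(lanes_per_drive(8, 4))   # 2.0 lanes per drive with 4 drives in the bay
print(raidz1_usable_tb(3))     # 16 TB: onboard 2280 + 2 drives in the bay
print(raidz1_usable_tb(5))     # 32 TB: onboard 2280 + 4 drives in the bay
```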
-
Small word of caution: by starting with only 2 drives you won't have the ability to add more to the pool later via RAIDZ expansion. That's totally fine, I just want to make sure that if you are imagining expanding your pool later, you know that you will have to completely rebuild it to add the 3rd drive, since it has to change from a mirror to a RAIDZ vdev. IMO, the one thing I would have changed about the initial LTT video would have been to have them use 3 drives to start.
-
Is there no way to get notifications for replies?
Mawson replied to frustrated's topic in Forum Issues
Oh! and you can follow users from their profiles. -
Aww... it's adorable! Pretty amazing how much compute and functionality can be put in such a small package these days.
-
So you moved all drives to the new build(s), including the boot drives? @Creative0100, it sounds like HexOS doesn't care as long as the drives (including the boot drive, of course) are present.
-
Sick. I love the look of that hardware!
-
Interesting... @jonp, this seems like something that could use further digging into...
-
That link directs back to this thread.
-
Very curious about this also. What shows up if you click "replace"?
-
As far as physical placement, I don't see any reason that you would need to separate the drives... better to keep them congregated so that if/when you add more drives you don't have to move the existing ones. (Though even if you needed to, it should be fine, since the drives are identified by their unique IDs, which means it shouldn't matter if they change which port they are on; there's a quick way to see those IDs at the bottom of this post.) Separating the drives would likely have some impact on performance and longevity, but not enough to worry about in my opinion. Hard drives are pretty much a solved science at this point, and they are designed to operate in proximity with each other. Enterprise drives may be built to handle this better than consumer units, but in a small-chassis use case like this even that probably won't make a significant difference.

Physical placement won't affect the pool creation directly; that would be more related to which order you plug your SATA cables into the backplane. Even then, it doesn't really matter, it just may change which drive exactly is 'sda', 'sdb', etc.

Are you planning to use an HBA card, or native SATA ports off of your motherboard? For 10+ drives I imagine you would need an HBA eventually, but if you are starting with only 4 plus a boot device you could probably do that on most motherboards. If you are using an HBA you may be more easily able to plug the cables into the backplane in numerical order, which in theory would set things up so that your leftmost drive would start as 'sda' and then proceed through the alphabet, with drive no. 10 being 'sdj' and the boot device potentially being 'sdk'. (And no, I totally didn't have to count out the letters of the alphabet to get that answer... why would you ask such a silly question?? 🤣)
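If you're curious what those "unique IDs" actually look like, here's a quick way to peek at them on any Linux box (TrueNAS included). This is just standard udev behavior, nothing HexOS-specific.

```python
import os

# /dev/disk/by-id is populated by udev with stable, hardware-derived names
# (model + serial), each symlinked to whatever sdX letter the drive happened
# to get this boot. ZFS identifies pool members by labels written on the
# disks themselves, so swapping SATA ports only changes the sdX letter, not
# pool membership.
by_id = "/dev/disk/by-id"
for name in sorted(os.listdir(by_id)):
    target = os.path.realpath(os.path.join(by_id, name))
    print(f"{name} -> {target}")
```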
-
That's kinda awesome 😆
-
So you don't want a 42U rack in your living room?? 😆
-
Aka, never. Temporary solutions are my favorite kind of permanent! 😂
-
You should throw caution to the wind and start by getting a case with plenty of drive bays... Perhaps something like the Sliger CX3702. (10) trayless 3.5" bays, (4) internal 2.5" mounts, 3U rack mountable x 18" deep, Micro ATX, SFX/SFX-L PSU, color choices for the front panel... it's a slick little chassis. https://sliger.com/products/rackmount/storage/cx3702/
-
That should be totally fine!