Everything posted by Sonic

  1. Lincstation N1

    When I install HexOS in a Proxmox VM, I run into this issue. I have two Proxmox machines: one using SATA passthrough and the other using NVMe passthrough. When I attach the disks to the VMs and start the installation, the HexOS installation freezes during pool creation. However, if I install HexOS without the disks already attached, the installation completes successfully. After restarting HexOS, I can add the disks and create the pool without any issues. Perhaps this is also related to your error message.
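    For anyone who prefers to script this workaround, here is a minimal sketch using the proxmoxer Python library (my assumption; the web UI works just as well). The node name "pve", VM ID 100 and the mapping names NVME1–NVME4 are placeholders for your own setup.

```python
# Minimal sketch of the workaround, assuming the proxmoxer library and a
# Proxmox 8.x host. Node "pve", VM ID 100 and the resource mappings
# NVME1..NVME4 are placeholders.
from proxmoxer import ProxmoxAPI

proxmox = ProxmoxAPI("pve.example.lan", user="root@pam",
                     password="secret", verify_ssl=False)
vm = proxmox.nodes("pve").qemu(100)

# Step 1: install HexOS with NO passthrough disks attached, then shut down.
# Step 2: attach the disks as PCI devices only after the install has finished.
vm.config.put(hostpci0="mapping=NVME1", hostpci1="mapping=NVME2",
              hostpci2="mapping=NVME3", hostpci3="mapping=NVME4")

# Step 3: boot again and create the pool inside HexOS.
vm.status.start.post()
```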
  2. Lincstation N1

    My health dashboards are OK. Did you get this message during the setup?
  3. Hi @Dylan, this is always an interesting dilemma. It mainly comes down to how much regret you'll feel if you lose your data, in other words, the cost of losing it. RAID-Z is not a backup; it ensures continuity. @PsychoWards makes some great suggestions. I agree that having two NAS devices in RAIDZ1 with buddy backup/replication gets you a long way, especially when combined with an offline backup. I also find it challenging to determine the right balance between extra resilience and extra costs for my own setup. I've solved this by categorizing my data. My most important data, such as photos and critical documents, are stored on my Synology NAS (RAID 6), with a daily "offline" backup to the cloud. For the rest, I'm fine with RAIDZ1 and use Proxmox Backup Server for copies. This approach is more focused on getting back up and running quickly if a device fails. The tricky part is that you only truly know if you’ve set things up properly when disaster strikes.
  4. Good luck with your search! Using the AMD device as a backup to the N100 and for all other NAS tasks sounds like a good approach: the N100 stays your Plex media server and the AMD machine becomes your backup and general NAS powerhouse.
  5. In this version of HexOS it's not possible to rename the HDD pools. A lot of people have already requested pool renaming as a feature.
  6. Hi @Dylan, I would go for the Ryzen 7. Much more CPU power, more memory and two M.2 SSD slots. I also have a Shuttle DL30N with an N100. I can run a well-performing Windows 11 VM on my Aoostar; on my N100 server it works, but it performs very slowly. The only con I can think of is transcoding in Plex or Jellyfin; I think the N100 will perform better there. See 11:26 of this review: https://www.youtube.com/watch?v=Ct4yewC7mKA
  7. It will work! If you follow @Mobius and go directly to 10 GbE, you can buy a 5-port 10 GbE switch for approx. 200 euros and an 8-port 10 GbE switch for approx. 350 euros. One thing to keep in mind: not all 10 GbE desktop switches are fanless, and some are really noisy. I have one remark about your network design: you are daisy-chaining your network, which in theory limits your bandwidth. In your case it's purely theoretical, though. I don't expect you will have more than 50 concurrent users at home 🙂. In the past, with 10 Mbit or 100 Mbit switches, it could be a problem, but with 2.5 GbE or 10 GbE it won't be.
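    To put some (assumed) numbers behind that "purely theoretical" remark, here is a quick back-of-the-envelope calculation:

```python
# Back-of-the-envelope: bandwidth per client if everyone behind the
# daisy-chained switch streams through the single uplink at the same time.
# The numbers are illustrative assumptions, not measurements.
uplink_mbps = 10_000      # 10 GbE uplink between the two switches
concurrent_clients = 50   # the worst case mentioned above

per_client = uplink_mbps / concurrent_clients
print(f"{per_client:.0f} Mbit/s per client")  # -> 200 Mbit/s

# Even a 4K remux stream rarely needs more than ~100 Mbit/s, so the shared
# uplink is not the bottleneck in a typical home network.
```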
  8. Here is an update about the Minisforum N5 and N5 Pro, expected in Q2 2025. The N5 Pro also has ECC memory. https://nascompares.com/2025/03/28/minisforum-n5-and-n5-pro-nas-new-update-new-version-new-os/
  9. The price is more than reasonable. Drivers will be the challenge for Marvell NICs; they are focusing mostly on Windows drivers. It's good to check some forums like Reddit upfront. If you search for TrueNAS and AQC113 you will probably find a lot of comments. If the right driver is available, it will be a good upgrade!
  10. @Dylan, not off topic at all, it's very relevant. 🙂 Robbie is mentioning the price difference between 10 GbE and 5 GbE adapters, and that difference is huge. Especially when you have only a few HDDs in your device, it's difficult to saturate a 10 GbE connection. In that case it's useless to invest in 10 GbE.
  11. Doing hardware research is always the fun part 🙂. Mobius is right too: a lot of people skip straight to 10 GbE. But every environment is different and everyone has different preferences. The most important thing is that it fits your needs within your budget. Please feel free to ask questions; it's always nice to share thoughts about hardware choices.
  12. @jonp, in case of HDD failure it would be great if HexOS guided the user through the replacement: a small wizard with a few questions and then concrete advice, do this, do that. Especially for non-technical users it would make a big difference, because users will be nervous when a drive fails. A simple, low-effort feature with a big impact. Just my 2 cents.
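    To illustrate the idea (this is just a concept sketch, not anything HexOS actually ships), such a wizard only needs a few questions before it can print concrete advice. The commands below are plain ZFS examples with placeholder device names:

```python
# Toy sketch of the "replacement wizard" idea: a few questions, then concrete
# advice. Illustrative only; the printed commands are standard ZFS commands
# with placeholder pool and device names.
def replacement_wizard() -> None:
    pool = input("Which pool reports a degraded disk? ")
    old = input("Name of the failed disk (e.g. sdc)? ")
    new = input("Name of the replacement disk (e.g. sdd)? ")
    hotswap = input("Does your chassis support hot-swap? [y/N] ").lower() == "y"

    if not hotswap:
        print("1. Shut the system down before swapping the disk.")
    print(f"2. Physically replace {old} with the new drive.")
    print(f"3. Run: zpool replace {pool} {old} {new}")
    print(f"4. Watch the resilver with: zpool status {pool}")

if __name__ == "__main__":
    replacement_wizard()
```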
  13. BTW, I have a mixed 1 GbE / 2.5 GbE / 10 GbE network based on MikroTik and I am quite happy with it. It's really future-proof and didn't cost a fortune. But I have to say that RouterOS has a steep learning curve; in that sense Ubiquiti and TP-Link are easier to start with. Just my 2 cents. Good luck with your search.
  14. If you go for the no-name/brandless stuff, I recommend doing thorough research. I personally don't buy devices without CE and FCC certifications. It might seem cheap, but a fire in your house is quite costly too, and I've seen plenty of dangerous internal power supplies come by. No thanks 🙂 Also, just search Google for security issues with no-name managed routers and switches. The solution you're proposing yourself, a new wireless router with 10 GbE and two 10 GbE 8-port switches, will definitely work. But it won't be cheap either. If you're making such an investment anyway, I'd suggest considering a more semi-pro setup; you'd get a nice management interface and additional security options as a bonus. An important question to ask yourself is which devices truly need 10 GbE. The step from 1 GbE to 2.5 GbE is already a big leap forward. For a printer or an old PC or iMac, it's pointless to fully upgrade to 10 GbE. Peripherals like USB network cards show a big price difference too: a USB 2.5 GbE or 5 GbE NIC can be bought for $30 or $40 (or euros), while a USB 10 GbE NIC easily costs $200. If you look at a solution where the standard network speed is 2.5 GbE, the router has a 10 GbE port, and you place a 4-port 10 GbE switch where you actually need it, brands like Ubiquiti, TP-Link, or MikroTik are quite feasible.
  15. Hi @Ferkner, I will take a closer look tomorrow and share my thoughts about possible network options.
  16. Can you specify your needs in more detail? Do you want a managed or unmanaged switch, how many ports do you need, what is your budget, ...?
  17. On the website servethehome.com you can find a lot of reviews of affordable switches, both 2.5 GbE and 10 GbE, and both no-name and respectable brands. Really helpful if you are buying new network equipment.
  18. Lincstation N1

    BTW, here is a first look at Datacenter Manager: https://www.youtube.com/watch?v=MxRWijchp3M
  19. Lincstation N1

    One of the things I am struggling with is my Docker strategy. It comes down to the question: all Docker containers in one VM, or one VM per container? See also this link: https://forum.proxmox.com/threads/all-docker-containers-in-1-vm-or-1-vm-per-container.141367/ At the moment I run one Docker container per LXC container. I did this without any strategy upfront; first I had to face the learning curve of installing Docker. But now that I have a few containers I use daily, it's good to think about it. In this setup I can back up and restore individual containers without affecting the others, which for me is a pro. But it creates extra overhead, and that's a con. Do you have any thoughts about the best Docker strategy?
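    As a concrete example of that per-container granularity, here is a small sketch with the Docker SDK for Python (my assumption; the container name "homepage" and the output path are placeholders, and volume data still needs its own copy):

```python
# Sketch: back up one container as an image tarball without touching the
# others. Assumes the "docker" Python SDK; the container name "homepage"
# and the output path are placeholders. Volumes are not included in a commit
# and would need a separate copy.
import docker

client = docker.from_env()
container = client.containers.get("homepage")

# Freeze the container's filesystem state into an image...
image = container.commit(repository="backup/homepage", tag="snapshot")

# ...and stream that image to a tar file.
with open("/tmp/homepage-snapshot.tar", "wb") as f:
    for chunk in image.save(named=True):
        f.write(chunk)
```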
  20. Lincstation N1

    I like your setup! Certainly future-proof. Do you also have a 19-inch rack, or only a 19-inch case? I don't have space for a 19-inch rack, so I focus on 10-inch rack devices. Is there a specific reason you use Portainer? I am more in favour of Dockge. Datacenter Manager is only a first alpha release; perhaps I will give it a try when it's in beta, but for now it's too early.
  21. Lincstation N1

    The Lincstation N2 has 16GB LPDDR5 (non-upgradeable), so 16GB it is. I already use CPU type host, and the i440fx is a click, click, next mistake, ha ha 🙂. But for my test setup it's good enough and I didn't feel the need to change it, because it works. I will use Q35 in my final setup. TrueNAS is also improving its VM engine and will become more and more of a virtualisation platform, but for now the sweet spot of HexOS / TrueNAS is still good and reliable network storage. Proxmox gives me a lot of flexibility and is my virtualisation platform of choice. In the past I used ESXi, but since the Broadcom takeover I moved to Proxmox. vCenter was a nice tool and Proxmox is developing something like it: https://pve.proxmox.com/wiki/Proxmox_Datacenter_Manager_Roadmap Are you still using Proxmox? Or did you fully move to HexOS/TrueNAS? I am also curious about what you are using as server hardware.
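    Switching an existing VM over to Q35 later is a small config change. A sketch, again assuming the proxmoxer library, with node "pve" and VM ID 100 as placeholders:

```python
# Sketch: switch an existing Proxmox VM to the Q35 machine type with CPU
# type "host". Assumes the proxmoxer library; node "pve" and VM ID 100 are
# placeholders. Do this while the VM is shut down.
from proxmoxer import ProxmoxAPI

proxmox = ProxmoxAPI("pve.example.lan", user="root@pam",
                     password="secret", verify_ssl=False)

proxmox.nodes("pve").qemu(100).config.put(machine="q35", cpu="host")
```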
  22. Lincstation N1

    I use the default settings in Proxmox: SeaBIOS and i440fx. In my test setup, I also have a PBS VM. Ideally, PBS requires local storage, but it also works with NFS and SMB shares. I'm currently testing with a HexOS SMB share and a Synology NFS share. So far, it works well, but NFS sometimes suddenly becomes very slow. I'm still undecided about what I'll run on the Lincstation N2. Right now, I have a Shuttle DL30N with an Intel N100 and 32GB RAM as my always-on Proxmox server. I run several Docker containers, including Homepage. The main question is whether the N2 with 16GB RAM will be enough to run: ✅ A HexOS VM (8GB RAM) ✅ Several Docker containers I think it will work fine, but I’ll have to test it! 😊
  23. This week, I reconfigured my Lincstation N1 with Proxmox and HexOS in a VM. This is a temporary test setup, mainly to experiment with NVMe passthrough. Spoiler alert 😊: it works! For over a year, I had TrueNAS running on it, and that worked perfectly fine as well.

    Lincstation N2 – My Future Setup
    I backed the Lincstation N2 on Kickstarter. With the 30% early bird discount, it costs $309 / €329. 🔗 Kickstarter Link
    Eventually, I'll use the N2 for my final setup, while the N1 will become my Proxmox Backup Server.

    Lincstation N1 – Specs
    • Intel Celeron N5105 (4 cores)
    • 16GB RAM
    • 128GB ROM (not used)
    • 2× 2.5" SATA bays (2× 500GB Samsung 870 EVO SSDs)
    • 4× PCIe M.2 2280 slots (4× 2TB Samsung SSDs)
    • 2.5GbE NIC

    Installation Steps – Proxmox & HexOS (NVMe Passthrough)

    1️⃣ Install Proxmox
    • Download the latest Proxmox ISO and create a bootable USB using Rufus.
    • Boot from USB and install Proxmox; I installed on the two 500GB SSDs (btrfs mirror setup).
    • After installation, access Proxmox via the web interface.
    • Run some post-install steps: the post-install helper script (🔗 Proxmox Post-Install Script) and the latest Proxmox updates.

    2️⃣ Install HexOS in a VM (NVMe Passthrough Setup)
    • Download the HexOS ISO and upload it to Proxmox.
    • Create NVMe passthrough mappings: in Datacenter → Resource Mappings, create 4 mappings (NVME1, NVME2, NVME3, NVME4).
    • Create a new VM: BIOS SeaBIOS, 50GB disk, 8GB RAM, 2 CPU cores (type host), VirtIO network, and the HexOS ISO connected as CD/DVD. At this stage, do not attach the NVMe SSDs yet.
    • Boot the VM and complete the HexOS installation, then shut the VM down.
    • Attach the NVMe SSDs: in the Hardware tab, add the 4 NVMe SSDs as PCI devices.
    • Boot HexOS again: if everything is correct, HexOS should detect all 4 SSDs, allowing you to create a storage pool. Done! 🎉

    Special thanks to @Dylan and @PsychoWards for encouraging me to share more about my homelab! 🚀 Also a picture of my N1: it fits perfectly in my 10-inch rack. (BTW, the other device is a NUC 11 Pro.)
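    For those who would rather script step 2️⃣, here is a rough sketch of the same VM creation and passthrough steps, assuming the proxmoxer Python library; the node name, VM ID, storage names and ISO path are placeholders for your own environment.

```python
# Rough script version of the VM creation step above, assuming the proxmoxer
# library, a node named "pve", storage "local-lvm", and the HexOS ISO already
# uploaded to the "local" storage. All IDs and names are placeholders.
from proxmoxer import ProxmoxAPI

proxmox = ProxmoxAPI("pve.example.lan", user="root@pam",
                     password="secret", verify_ssl=False)
node = proxmox.nodes("pve")

node.qemu.create(
    vmid=100,
    name="hexos",
    bios="seabios",          # default BIOS, as described above
    memory=8192,             # 8GB RAM
    cores=2,
    cpu="host",
    net0="virtio,bridge=vmbr0",
    scsi0="local-lvm:50",    # 50GB system disk
    ide2="local:iso/hexos.iso,media=cdrom",
    ostype="l26",
)

# Install HexOS first, shut the VM down, then attach the four NVMe mappings
# (created under Datacenter -> Resource Mappings) as PCI devices:
node.qemu(100).config.put(
    hostpci0="mapping=NVME1", hostpci1="mapping=NVME2",
    hostpci2="mapping=NVME3", hostpci3="mapping=NVME4",
)
```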
  24. A lifetime license has doubled in price. That's way too much if you ask me.
  25. You can never have too much memory. If you have 64GB in your machine, leave it like that 😀. You would also have to buy new memory if you sold the 64GB, and that for a few dollars or euros of "profit".