Ornival

Members
  • Posts

    34
Everything posted by Ornival

  1. Not quite sure, because this is only a partial (system) log, but my best guess is that Immich fails to initialise at start because you skipped parts of the onboarding. By skipping the initial user and share creation step and setting these manually afterwards (or from TrueNAS), the docker locations and permissions aren't updated. It is best to "just reset" and start all over. You can fix it without resetting from within TrueNAS, but why bother? Fixing takes more time than just restarting from scratch.
  2. No good for hunting, then! A 2080 Ti seems a natural choice for your setup 🙂
  3. Wow! That quick reply escalated rather fast... My experience is just n=1 and should not be applied to general ML/HPC considerations. @PsychoWards Uhm... just follow your original idea, as you already put some thought into it... nothing wrong with your initial approach 😉
  4. Ah, thanks for the extra info, and now I think I get where you're coming from! You did do some research! 🙌 I am not sure if you're game for some tinkering, but the P40 has better FP performance due to the Quadro drivers, and you might find a better deal if you switch to a hypervisor and unlock the potential of vGPU capabilities with patched drivers on a consumer card. (The mentioned Tesla P40 has a GP102 die with native vGPU driver support.)

     For reference: one of my machines hosts a Proxmox hypervisor on an i5-12500T with 128GB RAM and a GTX 1080 Ti (also GP102). The main reason for using the GTX 1080 Ti is exactly its similarity to the Tesla P40, albeit with only 11GB vs 24GB for the P40! As long as your ML jobs or models don't exceed 11GB, you won't experience severe penalties for it, if any. The GPU identifies as a P40 and is assigned within Proxmox to VMs, where I specified several mdevs for different use cases in different VMs. My host also serves my HexOS test VM (containing Immich), dedicated Plex (iGPU passthrough), Home Assistant OS, docker/kubernetes and many more. Since your CPU is (much) more powerful, I expect much better results/performance in comparison to my machine.

     Only buy a Tesla P10/P40/T10/T40/Quadro P6000/RTX6000/RTX8000 if you can get a sweet deal on it or really need the 24GB; otherwise a 1080 Ti 11GB, Titan X Pascal (both GP102), 2070 Super (TU104 version!), 2080 non-Ti/Super (TU104) or even a 2080 Ti (TU102). Here in the Netherlands a used 1080 Ti is about € 175-200,- and a 2080 Ti about € 300-400,-.

     But still: only buy an extra GPU if you need it. My 12500T has much less performance, and still my Immich app in HexOS processes all my pictures just fine on CPU alone. The only time HexOS feels 'laggy' is when I haven't synched my pictures for a while and the bulk upload strains my wireless network. The dedicated GPU became "free" after my upgrade to an AMD RX 6950XT, and I just haven't found a need to assign an mdev to a VM yet. (All of the above only applies to your/my mentioned use case, of course. I have triples of Nvidia Tesla M40 24GB and AMD MI25 16GB for jobs/applications that do need more or benefit from more VRAM.)
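     On the 11GB point: a quick way to sanity-check whether a model actually fits is to watch VRAM headroom while a job runs. A minimal sketch in Python, assuming only that nvidia-smi is on the PATH inside the VM (nothing Proxmox- or Immich-specific):

     ```python
     import subprocess

     def gpu_memory() -> list[tuple[int, int]]:
         """Return (used_MiB, total_MiB) per GPU, as reported by nvidia-smi."""
         out = subprocess.check_output(
             ["nvidia-smi",
              "--query-gpu=memory.used,memory.total",
              "--format=csv,noheader,nounits"],
             text=True,
         )
         stats = []
         for line in out.strip().splitlines():
             used, total = (int(v) for v in line.split(","))
             stats.append((used, total))
         return stats

     if __name__ == "__main__":
         for i, (used, total) in enumerate(gpu_memory()):
             headroom = total - used
             print(f"GPU{i}: {used}/{total} MiB used, {headroom} MiB free")
             if headroom < 1024:  # arbitrary 1 GiB warning threshold
                 print(f"GPU{i}: close to the VRAM limit; expect penalties")
     ```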
  5. (I don't dabble in AI, but I have a couple of clusters running model rendering and CAM/CFD simulation jobs.) If your current setup fulfils your needs, there is no need for extra HW or peripherals for offloading. Without knowing your current setup, I think you will run into some other issues before needing more processing power. Intel, AMD and Nvidia current gen can all process some sort of AI/HPC offloading. Once you do expect a bottleneck, you'll probably have a much better idea what process needs more offloading/acceleration. It really depends on your use case(s). You can then look up what you need.

     If you really want to buy something now and price is of no concern: you can't go wrong with CPU processing power (higher core/thread count), RAM and any current gen GPU. Intel Arc B-series, AMD RX 7000 and Nvidia RTX 4000 have acceleration and offloading capabilities (depending on your field or software requirements) and should be mentioned in any software usage guidelines. Nvidia has better support in general, so an RTX (or RTX Quadro) will always be useful. Usually more cores (TMU/ROP) and more VRAM is better.

     On the other hand: buying (expensive) equipment now really doesn't save money in the future or even yield better results, if any. If you don't need it now, you won't need it in the near future, and on the horizon there may be better options available. It depends mostly on the software support and requirements, I guess. Just beware: it (again) really makes no sense to buy an RTX 3080 (for example) if you have no use for it now, because your application or software stack might not be optimal. Current lightweight applications rarely diverge into resource hogs...

     Plex/ffmpeg seems a good example: Intel iGPUs up to the current gen already outperform any dedicated AMD/Nvidia GPU in realtime transcoding quality, and even when AI optimisation/processing comes into the picture, you can still offload the processing (via a docker application container) to a remote processing node (whether CPU or GPU supported). I don't see a future path where these applications by themselves would support/warrant realtime image enhancements via AI. Or in the case of Immich: both CPU and GPU (Nvidia) load balancing is supported, so you are most likely already running your current setup without bottlenecks. I find it unlikely that you would put a better CPU/GPU in your Immich server just to "future proof" your current server machine. You are more likely to just put your current GPU in the server and have it do its thing in the background. You can then treat yourself to a shiny new GPU for gaming, and use that if you need to help Immich a bit, but for the most part Immich can do its thing on CPU just fine.

     Maybe your Immich example is just the wrong example, but my advice, in case that was not expected 😉, is to just wait until you have a clear target. Machine learning was already ubiquitous long before AI and machine learning became trendy with the general public. You are not missing out on anything, because you are already on the train, and unless you are unhappy with your seat, there is no need to upgrade your seat ticket.
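     To make the Immich offloading concrete: the machine-learning part runs as its own container (listening on port 3003 by default), and the server is pointed at it via the IMMICH_MACHINE_LEARNING_URL environment variable, so it can live on any node. A rough reachability check in Python, with ml-node.lan as a hypothetical host name:

     ```python
     import socket

     ML_HOST = "ml-node.lan"  # hypothetical remote node running immich-machine-learning
     ML_PORT = 3003           # default port of the Immich machine-learning container

     def ml_node_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
         """Return True if a TCP connection to the ML container succeeds."""
         try:
             with socket.create_connection((host, port), timeout=timeout):
                 return True
         except OSError:
             return False

     if __name__ == "__main__":
         if ml_node_reachable(ML_HOST, ML_PORT):
             # The server would then be pointed at the remote node, e.g.
             # IMMICH_MACHINE_LEARNING_URL=http://ml-node.lan:3003
             print(f"ML node up; point IMMICH_MACHINE_LEARNING_URL at http://{ML_HOST}:{ML_PORT}")
         else:
             print("ML node unreachable; keep the default local machine-learning container")
     ```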
  6. I don't think so? ZFS pools can be configured in many ways. I'd rather have them not offer it in HexOS at all than half-ass their way through it. Since there are plenty of guides for doing it the regular way, nothing is preventing us from doing it within TrueNAS. AND in all fairness: it is a beta, and all current users'/participants' job is to test, break, report and hopefully recommend? I get that the HexOS team focuses more on the core aspects of the roadmap, since the active team is just small, so I am just trying to break my HexOS VM through "normal" use as much as possible...
  7. Fair. Would have liked an auto import, or at least an indicator of some sort. Like: "Foreign pool detected". Too bad most of the beta participants already have some experience in handling pool configuration (and set things up some other way), because I would have loved to see an equivalent of the easy Synology DSM way of prepping disks on a running system. *fingers crossed it arrives prior to 1.0
  8. Too many topics already, so I may have overlooked it. Regardless of whether a pool was created on the machine itself: would it be possible to autodetect new (data-containing) disks or a config? I can re-insert a set of disks in the underlying TrueNAS, but it won't show in Deck. If the API simply doesn't exist (yet) in the HexOS GUI, I can understand why the pool or set is not exposed, but it seems such a trivial step to show the occurrence of a newly detected disk or set.
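     For anyone poking at this from the TrueNAS shell in the meantime: running zpool import with no arguments only scans and lists pools that are visible but not yet imported, which is exactly the signal a "foreign pool detected" notice would need. A rough sketch in Python (assumes the zpool binary and root privileges):

     ```python
     import subprocess

     def detect_foreign_pools() -> list[str]:
         """Return names of pools that `zpool import` can see but are not imported."""
         # With no pool argument, `zpool import` only scans and lists candidates.
         proc = subprocess.run(
             ["zpool", "import"],
             capture_output=True, text=True,
         )
         pools = []
         for line in proc.stdout.splitlines():
             line = line.strip()
             if line.startswith("pool:"):
                 pools.append(line.split(":", 1)[1].strip())
         return pools

     if __name__ == "__main__":
         found = detect_foreign_pools()
         if found:
             print("Foreign pool(s) detected:", ", ".join(found))
         else:
             print("No importable pools found")
     ```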
  9. I would love to simplify or automate uploads from (guest) users to a shared folder with customised conditions. Now I need to define specific groups and users and separate (home) folders, where an uploaded file goes first for review, before releasing it and/or changing the (RW/own) settings. Something like a menu where you can choose or make a selection of actions on files/folders/shares/users/groups etc. Then all that is needed is a condition to trigger the automation, or a button click.

     (Context: me and my friends started making our own (steering, motorised) kites/hang gliders. Design started in AutoCAD, we pretty much loved the AutoDesk ecosystem, and now nearly all is done within Inventor and Nastran. Flight control (orientation, positioning and alignment) is realised by using several sensors and actuators.)

     I share my git and AutoDesk Vault (like git, but for parts, assemblies and other CAD files) with various people - some I don't even know personally - and streamlining the collaboration is sometimes a hassle, because someone else managed to work on a file/project and created an unintended fork, resulting in wasted effort. Both git and Vault have their own workflow, which would be fine if there were clear boundaries between me, my (own) groups and those exceeding my ring of trust. A known user can create a new user, which automatically falls in this user's group, with similar permissions to read from/contribute to files from the master. In Vault you can withhold any changes going upstream until the new version is approved, but that makes the new version folder locked/inaccessible for anyone other than the approved supervisors/superusers if the shared folder and files live outside of AutoDesk Vault. Currently I mitigate this by copying everything over to a separate share...

     ACLs would normally be the way to go (on a Synology), but I need to manually change permissions and group members every time I/we need to synch both the code in git and the assemblies in Vault to review the impact of the changes. I tried scripting these permission changes on both debian/TrueNAS, but while it makes no sense to allow a non-programmer to trigger the script, it is needed to allow key members to unlock/unfreeze the CAD project (or better: the project files that live within Vault) to work on some other features. I would not ask for this very niche n=1 feature if I were an apt programmer in any language other than beer, cigarettes and assembly 😞 Not sure if I even make sense (I'm Dutch, and it's always baffling how people are unable to read my thoughts)
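     Since I mentioned scripting the permission flips, this is roughly the shape of what I mean, as a sketch only: a hypothetical share path and reviewer group, with POSIX ACLs applied through setfacl:

     ```python
     import subprocess

     # Hypothetical share and group names; adjust to your own layout.
     PROJECT_DIR = "/mnt/tank/shares/kite-cad"
     REVIEW_GROUP = "reviewers"

     def set_group_access(path: str, group: str, writable: bool) -> None:
         """Grant the group rwX (unlock) or rX (lock/freeze) on a tree via POSIX ACLs."""
         perms = "rwX" if writable else "rX"
         subprocess.run(
             ["setfacl", "-R", "-m", f"g:{group}:{perms}", path],
             check=True,
         )

     def lock_project() -> None:
         """Freeze the project while a new Vault version awaits approval."""
         set_group_access(PROJECT_DIR, REVIEW_GROUP, writable=False)

     def unlock_project() -> None:
         """Release the project so key members can continue working."""
         set_group_access(PROJECT_DIR, REVIEW_GROUP, writable=True)

     if __name__ == "__main__":
         lock_project()  # e.g. triggered when a new version lands for review
     ```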
  10. Good one, I was wondering about this too.