Search the Community
Showing results for tags 'gpu'.
-
I recently upgraded my GPU from a GTX 960 to a GTX 1660 SUPER. After I got the server up and running again, I went into the TrueNAS shell and ran nvidia-smi to make sure the GPU shows up correctly, and it does. However, when I noticed Jellyfin wasn't transcoding, I checked whether the GPU was still set to pass through, and it wasn't. I checked the box, hit Update, and then I get this error:

FAILED [EFAULT] Failed to render compose templates: Traceback (most recent call last):
  File "/usr/bin/apps_render_app", line 33, in <module>
    sys.exit(load_entry_point('apps-validation==0.1', 'console_scripts', 'apps_render_app')())
  File "/usr/lib/python3/dist-packages/catalog_templating/scripts/render_compose.py", line 47, in main
    render_templates_from_path(args.path, args.values)
  File "/usr/lib/python3/dist-packages/catalog_templating/scripts/render_compose.py", line 19, in render_templates_from_path
    rendered_data = render_templates(
  File "/usr/lib/python3/dist-packages/catalog_templating/render.py", line 36, in render_templates
    ).render({'ix_lib': template_libs, 'values': test_values})
  File "/usr/lib/python3/dist-packages/jinja2/environment.py", line 1301, in render
    self.environment.handle_exception()
  File "/usr/lib/python3/dist-packages/jinja2/environment.py", line 936, in handle_exception
    raise rewrite_traceback_stack(source=source)
  File "/mnt/.ix-apps/app_configs/jellyfin/versions/1.1.21/templates/docker-compose.yaml", line 3, in top-level template code
    {% set c1 = tpl.add_container(values.consts.jellyfin_container_name, "image") %}
  File "/mnt/.ix-apps/app_configs/jellyfin/versions/1.1.21/templates/library/base_v2_1_16/render.py", line 59, in add_container
    container = Container(self, name, image)
  File "/mnt/.ix-apps/app_configs/jellyfin/versions/1.1.21/templates/library/base_v2_1_16/container.py", line 94, in __init__
    self.deploy: Deploy = Deploy(self._render_instance)
  File "/mnt/.ix-apps/app_configs/jellyfin/versions/1.1.21/templates/library/base_v2_1_16/deploy.py", line 15, in __init__
    self.resources: Resources = Resources(self._render_instance)
  File "/mnt/.ix-apps/app_configs/jellyfin/versions/1.1.21/templates/library/base_v2_1_16/resources.py", line 24, in __init__
    self._auto_add_gpus_from_values()
  File "/mnt/.ix-apps/app_configs/jellyfin/versions/1.1.21/templates/library/base_v2_1_16/resources.py", line 55, in _auto_add_gpus_from_values
    raise RenderError(f"Expected [uuid] to be set for GPU in slot [{pci}] in [nvidia_gpu_selection]")
base_v2_1_16.error.RenderError: Expected [uuid] to be set for GPU in slot [0000:01:00.0] in [nvidia_gpu_selection]

I've been googling for a few days and I can't find any solution that helps. I made sure the right GPU is showing up in the Jellyfin/Immich settings, I've rebooted, and I've confirmed the GPU isn't isolated in TrueNAS. Has anyone run into anything like this?
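For context, the last frame of the traceback is the app template library complaining that the nvidia_gpu_selection entry saved for PCI slot 0000:01:00.0 has no uuid, presumably because the selection stored for the old GTX 960 no longer matches the new card. Below is a minimal Python sketch (the helper name is mine, and it assumes nvidia-smi is on the PATH) that prints each GPU's PCI bus ID and UUID so they can be compared against whatever the app has stored:

# Sketch: list NVIDIA GPUs with the PCI bus ID and UUID that the app
# template wants to see in nvidia_gpu_selection. Assumes nvidia-smi is
# installed and on the PATH; list_nvidia_gpus is a hypothetical helper.
import csv
import subprocess

def list_nvidia_gpus():
    # Ask nvidia-smi for a headerless CSV of bus ID, UUID, and name per GPU.
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=pci.bus_id,uuid,name",
         "--format=csv,noheader"],
        check=True, capture_output=True, text=True,
    ).stdout
    gpus = []
    for row in csv.reader(out.splitlines()):
        pci, uuid, name = (field.strip() for field in row)
        gpus.append({"pci": pci, "uuid": uuid, "name": name})
    return gpus

if __name__ == "__main__":
    for gpu in list_nvidia_gpus():
        print(f"{gpu['pci']}  {gpu['uuid']}  {gpu['name']}")

If the UUID reported for slot 01:00.0 differs from what the Jellyfin app has saved, re-selecting the GPU in the app settings may regenerate the entry with the new card's UUID; that is an assumption about how the selection is repopulated, not a confirmed fix.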
-
Does anyone have experience, tips, or recommendations on which GPU (or similar hardware) to get for hardware acceleration and/or machine learning? Immich, for example, has some machine-learning features, which I've currently offloaded to my PC, but I'm looking to get a dedicated GPU that I can plug directly into the server. If anyone has recommendations on what to get, and any tips, I would greatly appreciate it. I'm not planning on running my own full-blown ChatGPT replacement or anything similar, just the machine learning in Immich, while being ready for future lightweight applications that might need some raw power.
-
Can HexOS help me set up a bit miner that has some halfway-decent enterprise server hardware and GPUs?
-
- bit miner
- enterprise server
- (and 4 more)