Just wondering how easy/hard it is to time-share a GPU between various applications.

If I have a single GPU, can it be shared between multiple application tasks, like Immich ML, Plex transcoding, Jellyfin transcoding, an AI container, etc.?

I know that you can pass a GPU through to a virtual machine, but I'm not sure whether the same mechanism applies to containers.

I guess my ultimate question is whether multiple GPUs are required to do justice to multiple workloads across multiple apps, or whether we need to move towards a generic GPU container that the various apps can call upon for their various workloads.
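For reference, here's roughly what I'm picturing on the container side (assuming an NVIDIA card with the NVIDIA Container Toolkit installed; the container names and images are just examples), with two containers pointed at the same GPU:

docker run -d --gpus all --name jellyfin jellyfin/jellyfin
docker run -d --gpus all --name immich-ml ghcr.io/immich-app/immich-machine-learning

Or, for an Intel iGPU, passing the same /dev/dri render device into each container:

docker run -d --device=/dev/dri:/dev/dri --name plex plexinc/pms-docker

Is that all it takes for the containers to time-share the card, or does contention between the workloads become a problem?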

Cheers
