
Just wondering how easy/hard it is to time-share a GPU between various applications.

If I have a single GPU, can it be shared between multiple application tasks? Like Immich machine learning, Plex transcoding, Jellyfin transcoding, an AI container, etc.

I know that you can pass a GPU through to a virtual machine, but I'm not sure whether the same mechanism is in play for containers.
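From what I've read, containers don't get exclusive passthrough like a VM; Docker just exposes the device to each container and the driver schedules the work. Something like this untested sketch (assuming an NVIDIA card with the nvidia-container-toolkit installed; service and image names are just examples):

```yaml
# Untested sketch: two services referencing the same physical GPU.
# Assumes an NVIDIA GPU and nvidia-container-toolkit on the host;
# images shown are only examples.
services:
  jellyfin:
    image: jellyfin/jellyfin
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
  immich-machine-learning:
    image: ghcr.io/immich-app/immich-machine-learning
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```

If that's right, both containers would see the same GPU at once, unlike VM passthrough where the device is dedicated to one guest.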

I guess my ultimate question is whether multiple GPUs are required to do justice to multiple workloads across multiple apps, or whether we need to move towards a generic GPU container that the various apps can call upon for their various workloads.

Cheers
