It seems you've been doing the right thing (the NVIDIA GPU is selected appropriately), but Vulkan defaults to the NVIDIA dGPU for reasons unknown (at least to me and the issue's author). I've added some information below on a workaround I've been using that doesn't require modifying this project.
At a basic level, adding support for specifying additional environment variables would be helpful. For example, I added the following to my copy of the prime-run script:
```shell
GST_PLUGIN_FEATURE_RANK="vaapijpegdec:PRIMARY+2,vaapimpeg2dec:PRIMARY+2,vaapih264dec:PRIMARY+2,vaapivc1dec:PRIMARY+2,vaapivp8dec:PRIMARY+2,vaapivp9dec:PRIMARY+2,vaapih265dec:PRIMARY+2,vaapioverlay:PRIMARY+2,vaapipostproc:PRIMARY+2,vaapidecodebin:PRIMARY+2,vaapisink:PRIMARY+2,vaapimpeg2enc:PRIMARY+2,vaapih264enc:PRIMARY+2,vaapijpegenc:PRIMARY+2,vaapivp8enc:PRIMARY+2,vaapih265enc:PRIMARY+2,nvmpegvideodec:MAX,nvmpeg2videodec:MAX,nvmpeg4videodec:MAX,nvh264sldec:MAX,nvh264dec:MAX,nvjpegdec:MAX,nvh265sldec:MAX,nvh265dec:MAX,nvvp8dec:MAX,nvvp9dec:MAX,nvh264enc:MAX,nvh265enc:MAX"
LIBVA_DRIVER_NAME="nvidia"
VDPAU_DRIVER="nvidia"
```
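As a sketch of what that support could look like, a prime-run-style wrapper could export these variables before exec'ing the target program. To be clear, the script below is an illustration, not the packaged prime-run, and the rank list is trimmed for brevity:

```shell
#!/bin/sh
# Hypothetical prime-run-style wrapper (illustration only): export the usual
# PRIME render-offload variables plus the media-acceleration variables from
# above, then exec the requested program with its arguments.
export __NV_PRIME_RENDER_OFFLOAD=1
export __GLX_VENDOR_LIBRARY_NAME=nvidia
export LIBVA_DRIVER_NAME=nvidia
export VDPAU_DRIVER=nvidia
# Rank list trimmed to two entries; see the full list above.
export GST_PLUGIN_FEATURE_RANK="vaapih264dec:PRIMARY+2,nvh264dec:MAX"
exec "$@"
```

Usage would be e.g. `./wrapper mpv video.mkv`; since the variables are exported before `exec`, the child process inherits all of them.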
This lets me configure hardware encoding/decoding to use the NVIDIA GPU rather than the Intel iGPU whenever I run a program through the script.
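Side note on the `GST_PLUGIN_FEATURE_RANK` syntax: the value is a comma-separated list of `element:rank` pairs (ranks like `NONE`, `MARGINAL`, `SECONDARY`, `PRIMARY`, `MAX`, or offsets such as `PRIMARY+2`). A long list like the one above is easier to audit printed one pair per line:

```shell
# Print each element:rank pair of GST_PLUGIN_FEATURE_RANK on its own line.
# Trimmed to two entries here for brevity.
GST_PLUGIN_FEATURE_RANK="vaapih264dec:PRIMARY+2,nvh264dec:MAX"
printf '%s\n' "$GST_PLUGIN_FEATURE_RANK" | tr ',' '\n'
```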
For the record, I have a workaround set up where `__NV_PRIME_RENDER_OFFLOAD=1` and `__VK_LAYER_NV_optimus=non_NVIDIA_only` are set in `/etc/environment.d/nvidia.conf`.
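For reference, environment.d files are plain `KEY=VALUE` lines read by systemd's environment generator, so the file at that path would presumably look like:

```ini
# /etc/environment.d/nvidia.conf
__NV_PRIME_RENDER_OFFLOAD=1
__VK_LAYER_NV_optimus=non_NVIDIA_only
```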
EDIT: `vkcube` will still use the NVIDIA GPU for whatever reason, but you'll notice it is no longer GPU 0 (whereas your iGPU will be). I think this behaviour is specific to `vkcube`. You can run `vkcube --gpu_number 0` to make it use GPU 0 as a workaround.