Add Multi-Frame Context support for Parallel Encode use cases
Submitted by Sreerenj Balachandran
VA-API is going to bring some new APIs to support multi-frame processing.
Currently, multi-frame processing is an optimization that improves hardware resource utilization in multi-stream encoding/transcoding applications.
It allows combining several jobs from different parallel pipelines into one GPU task execution, to better reuse engines that cannot be fully loaded in the single-frame case, as well as to reduce the CPU overhead of task submission.
Here is the pull request that landed in libva: https://github.com/01org/libva/pull/112
I am not sure how we could implement this in gstreamer.
These new APIs bring the idea of having a single "super-encoder" element which can accept multiple input streams and encode them to different formats,
e.g. one stream to h264 and another to hevc, or multiple h264-encoded streams.
From the VA-API perspective there could be 3-4 new APIs:
vaCreateMFContext(VADisplay dpy, VAMFContextID *mf_context) : Create a new MultiFrameContext
vaMFAddContext(VADisplay dpy, VAMFContextID mf_context, VAContextID context): Add an already created VAContext to the MFContext
vaMFReleaseContext(VADisplay dpy, VAMFContextID mf_context, VAContextID context): Release the VAContext from the global context
vaMFSubmit(VADisplay dpy, VAMFContextID mf_context, VAContextID *contexts, int num_contexts): Mark the end of rendering for the pictures in the contexts passed with the submission
vaQueryXXX: needed for checking feature support for specific entrypoint combinations. This is not included in the current pull request, though.
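To make the setup flow concrete, here is a minimal sketch (untested, and dependent on the API landing as proposed above) of tying two existing encode contexts into one multi-frame context. ctx_h264 and ctx_hevc are assumed to be VAContextIDs already created with vaCreateContext() against their respective encode configs; error handling is reduced to one check per call.

```c
#include <va/va.h>

/* Sketch: create a multi-frame context and attach two
 * previously created per-stream encode contexts to it. */
static VAStatus
setup_mf_context(VADisplay dpy, VAContextID ctx_h264, VAContextID ctx_hevc,
                 VAMFContextID *mf)
{
    VAStatus st = vaCreateMFContext(dpy, mf);
    if (st != VA_STATUS_SUCCESS)
        return st;

    /* Attach each per-stream context to the shared MF context. */
    st = vaMFAddContext(dpy, *mf, ctx_h264);
    if (st != VA_STATUS_SUCCESS)
        return st;
    return vaMFAddContext(dpy, *mf, ctx_hevc);
}
```

Teardown would mirror this with vaMFReleaseContext() per attached context before the MF context itself is destroyed.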
From the intel-vaapi-driver perspective:
if a context is attached to a MultiFrame context, the underlying driver won't begin the actual batchbuffer submission in vaEndPicture(); instead
it waits for the vaMFSubmit() from the middleware to start the actual processing.
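The per-frame flow would then look roughly like the following sketch (untested, assuming the proposed semantics): vaEndPicture() only queues each stream's work, and the single vaMFSubmit() kicks off the batch for both streams. The parameter/slice buffer setup is elided; the vaRenderPicture() calls are only indicated in comments.

```c
#include <va/va.h>

/* Sketch: encode one frame from each of two streams whose contexts
 * are attached to the same MF context, with a single GPU submission. */
static void
encode_one_frame_pair(VADisplay dpy, VAMFContextID mf,
                      VAContextID ctx_h264, VAContextID ctx_hevc,
                      VASurfaceID src_h264, VASurfaceID src_hevc)
{
    VAContextID batch[2] = { ctx_h264, ctx_hevc };

    vaBeginPicture(dpy, ctx_h264, src_h264);
    /* ... vaRenderPicture() with parameter/slice buffers ... */
    vaEndPicture(dpy, ctx_h264);   /* queued, not yet submitted */

    vaBeginPicture(dpy, ctx_hevc, src_hevc);
    /* ... vaRenderPicture() with parameter/slice buffers ... */
    vaEndPicture(dpy, ctx_hevc);   /* queued, not yet submitted */

    /* One submission covering both streams' frames. */
    vaMFSubmit(dpy, mf, batch, 2);
}
```

For a GStreamer mapping, the tricky part is that a "super-encoder" collecting buffers from several pipelines before one vaMFSubmit() cuts across the usual one-element-per-stream model.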