
nvcodec: Add CUDA specific memory and bufferpool

nvcodec: Peer direct access support

If the devices support direct access to each other, use a device-to-device
memory copy without staging through host memory.
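The two copy paths can be sketched with the CUDA driver API as follows (a minimal sketch, not the series' actual code; error handling is trimmed and the contexts, pointers, and pinned staging buffer are assumed to exist already):

```c
#include <cuda.h>
#include <stddef.h>

/* Copy a buffer from one GPU to another. With peer access enabled
 * between the two contexts, a single GPU-to-GPU copy suffices;
 * otherwise the data must round-trip through host memory. */
static CUresult
copy_between_devices (CUdeviceptr dst, CUcontext dst_ctx,
    CUdeviceptr src, CUcontext src_ctx, size_t size,
    int have_peer_access, void *host_staging)
{
  CUresult ret;

  if (have_peer_access) {
    /* Direct device-to-device copy across contexts, no host staging */
    return cuMemcpyPeer (dst, dst_ctx, src, src_ctx, size);
  }

  /* Fallback: download into (pinned) host staging memory, then upload */
  cuCtxPushCurrent (src_ctx);
  ret = cuMemcpyDtoH (host_staging, src, size);
  cuCtxPopCurrent (NULL);
  if (ret != CUDA_SUCCESS)
    return ret;

  cuCtxPushCurrent (dst_ctx);
  ret = cuMemcpyHtoD (dst, host_staging, size);
  cuCtxPopCurrent (NULL);
  return ret;
}
```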
cudacontext: Enable direct CUDA memory access over multiple GPUs

If the device contexts can access each other's memory, enable peer access
for better interoperability.
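Checking and enabling peer access would look roughly like this (a sketch using the stock CUDA driver API; the helper name and calling convention are assumptions, not the series' actual code):

```c
#include <cuda.h>

/* Check whether two devices can access each other's memory and,
 * if so, enable peer access from each context. Returns nonzero
 * when bidirectional peer access was enabled. */
static int
enable_peer_access (CUdevice dev_a, CUdevice dev_b,
    CUcontext ctx_a, CUcontext ctx_b)
{
  int a_to_b = 0, b_to_a = 0;

  cuDeviceCanAccessPeer (&a_to_b, dev_a, dev_b);
  cuDeviceCanAccessPeer (&b_to_a, dev_b, dev_a);

  if (!a_to_b || !b_to_a)
    return 0;

  /* Peer access is unidirectional, so enable it in both directions */
  cuCtxPushCurrent (ctx_a);
  cuCtxEnablePeerAccess (ctx_b, 0);
  cuCtxPopCurrent (NULL);

  cuCtxPushCurrent (ctx_b);
  cuCtxEnablePeerAccess (ctx_a, 0);
  cuCtxPopCurrent (NULL);

  return 1;
}
```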
nvenc: Support CUDA buffer pool

When upstream supports CUDA memory (only nvdec for now), create a
CUDA buffer pool.
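Once negotiation settles on CUDA memory, the pool is configured like any other GstBufferPool; a minimal sketch (the CUDA pool constructor itself is part of this series and not shown here, so a generic pool is configured instead):

```c
#include <gst/gst.h>

/* Configure and activate a buffer pool for the negotiated caps.
 * With the CUDA pool from this series, each acquired buffer would
 * wrap CUDA device memory instead of system memory. */
static gboolean
configure_pool (GstBufferPool * pool, GstCaps * caps, guint size)
{
  GstStructure *config = gst_buffer_pool_get_config (pool);

  /* min/max of 0 lets the pool grow on demand */
  gst_buffer_pool_config_set_params (config, caps, size, 0, 0);

  if (!gst_buffer_pool_set_config (pool, config))
    return FALSE;

  return gst_buffer_pool_set_active (pool, TRUE);
}
```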
nvdec: Support CUDA buffer pool

If downstream can accept the CUDA memory caps feature (currently nvenc only),
CUDA memory is always preferred.
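The downstream check amounts to looking for a CUDA caps feature in the peer caps; a sketch under the assumption that the feature is named "memory:CUDAMemory" (the helper itself is hypothetical):

```c
#include <gst/gst.h>

/* Return TRUE when any structure in the peer caps carries the
 * CUDA memory caps feature, in which case the element should
 * prefer negotiating CUDA memory over system memory. */
static gboolean
downstream_supports_cuda_memory (GstPad * srcpad)
{
  GstCaps *caps = gst_pad_peer_query_caps (srcpad, NULL);
  gboolean found = FALSE;
  guint i;

  if (!caps)
    return FALSE;

  for (i = 0; i < gst_caps_get_size (caps); i++) {
    GstCapsFeatures *features = gst_caps_get_features (caps, i);

    /* feature name is an assumption for this sketch */
    if (gst_caps_features_contains (features, "memory:CUDAMemory")) {
      found = TRUE;
      break;
    }
  }

  gst_caps_unref (caps);
  return found;
}
```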
nvcodec: Add CUDA specific memory and bufferpool

Introduce a CUDA buffer pool with generic CUDA memory support.
As with GL memory, any element that can access CUDA device memory
directly can map this CUDA memory without upload/download overhead
via the "GST_MAP_CUDA" map flag.
Usual GstMemory access is also possible through internal staging memory.

For staging, CUDA host-allocated memory is used (see the cuMemAllocHost API).
This memory is accessible to the system but has lower overhead
during GPU upload/download than normal pageable system memory.
Edited by Seungha Yang
