
nvcodec: Add CUDA specific memory and bufferpool

Closed: Seungha Yang requested to merge seungha.yang/gst-plugins-bad:nvcodec-buffer-pool into master
nvcodec: Peer direct access support

If the devices support direct access to each other, use device-to-device
memory copy without staging through host memory.
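
A minimal sketch (not code from this series) of the copy path this commit
describes, using CUDA driver API calls: copy device to device when peer
access is available, otherwise stage through page-locked host memory.
The copy_device_memory() helper name is hypothetical.

    #include <glib.h>
    #include <cuda.h>

    /* Hypothetical helper illustrating the commit message above */
    static CUresult
    copy_device_memory (CUdeviceptr dst, CUdeviceptr src, size_t size,
        gboolean peer_access)
    {
      CUresult ret;
      void *staging = NULL;

      if (peer_access) {
        /* Devices can read/write each other's memory directly */
        return cuMemcpyDtoD (dst, src, size);
      }

      /* Fallback: download into pinned host memory, then upload */
      ret = cuMemAllocHost (&staging, size);
      if (ret != CUDA_SUCCESS)
        return ret;

      ret = cuMemcpyDtoH (staging, src, size);
      if (ret == CUDA_SUCCESS)
        ret = cuMemcpyHtoD (dst, staging, size);

      cuMemFreeHost (staging);
      return ret;
    }
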
cudacontext: Enable direct CUDA memory access over multiple GPUs

If the device contexts can access each other, enable peer access
for better interoperability.
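
As a rough illustration of how peer access can be enabled with the CUDA
driver API (the function and variable names below are illustrative, not
taken from the patch):

    #include <cuda.h>

    /* Peer access is checked per device pair and must be enabled from
     * within each context, once per direction. */
    static void
    enable_peer_access (CUdevice dev_a, CUcontext ctx_a,
        CUdevice dev_b, CUcontext ctx_b)
    {
      int a_to_b = 0, b_to_a = 0;

      cuDeviceCanAccessPeer (&a_to_b, dev_a, dev_b);
      cuDeviceCanAccessPeer (&b_to_a, dev_b, dev_a);

      if (!a_to_b || !b_to_a)
        return;

      cuCtxPushCurrent (ctx_a);
      cuCtxEnablePeerAccess (ctx_b, 0);
      cuCtxPopCurrent (NULL);

      cuCtxPushCurrent (ctx_b);
      cuCtxEnablePeerAccess (ctx_a, 0);
      cuCtxPopCurrent (NULL);
    }
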
nvenc: Support CUDA buffer pool

When upstream supports CUDA memory (only nvdec for now), create a
CUDA buffer pool.
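
A hedged sketch of how an allocation query handler could offer such a
pool. gst_cuda_buffer_pool_new(), the GstCudaContext type and the
"memory:CUDAMemory" caps feature name are assumed from this series; the
exact signatures in the patch may differ.

    #include <gst/gst.h>
    #include <gst/video/video.h>
    /* GstCudaContext and gst_cuda_buffer_pool_new() would come from the
     * headers added by this series (assumed) */

    static gboolean
    propose_cuda_pool (GstQuery * query, GstCudaContext * context)
    {
      GstCaps *caps = NULL;
      GstVideoInfo info;
      GstBufferPool *pool;
      GstStructure *config;

      gst_query_parse_allocation (query, &caps, NULL);
      if (!caps || !gst_video_info_from_caps (&info, caps))
        return FALSE;

      /* Only offer the pool when the caps carry the CUDA memory feature */
      if (!gst_caps_features_contains (gst_caps_get_features (caps, 0),
              "memory:CUDAMemory"))
        return FALSE;

      pool = gst_cuda_buffer_pool_new (context);
      config = gst_buffer_pool_get_config (pool);
      gst_buffer_pool_config_set_params (config, caps,
          GST_VIDEO_INFO_SIZE (&info), 0, 0);
      gst_buffer_pool_set_config (pool, config);

      gst_query_add_allocation_pool (query, pool,
          GST_VIDEO_INFO_SIZE (&info), 0, 0);
      gst_object_unref (pool);

      return TRUE;
    }
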
nvdec: Support CUDA buffer pool

If downstream can accept the CUDA memory caps feature (currently nvenc only),
CUDA memory is always preferred.
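
To illustrate the negotiation side, a sketch that checks whether any caps
downstream accepts carry the CUDA memory feature; the "memory:CUDAMemory"
string is an assumption about the feature name used by this series.

    #include <gst/gst.h>
    #include <gst/video/gstvideodecoder.h>

    static gboolean
    downstream_supports_cuda_memory (GstVideoDecoder * decoder)
    {
      GstCaps *allowed;
      gboolean found = FALSE;
      guint i;

      allowed = gst_pad_get_allowed_caps (GST_VIDEO_DECODER_SRC_PAD (decoder));
      if (!allowed)
        return FALSE;

      for (i = 0; i < gst_caps_get_size (allowed); i++) {
        if (gst_caps_features_contains (gst_caps_get_features (allowed, i),
                "memory:CUDAMemory")) {
          found = TRUE;
          break;
        }
      }

      gst_caps_unref (allowed);
      return found;
    }
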
nvcodec: Add CUDA specific memory and bufferpool

Introduce a CUDA buffer pool with generic CUDA memory support.
As with GL memory, any element that can access CUDA device memory
directly can map this CUDA memory without upload/download overhead
via the "GST_MAP_CUDA" map flag.
Normal GstMemory access is also possible through internal staging memory.

For staging, CUDA host-allocated memory is used (see the cuMemAllocHost API).
Such memory is accessible to the system but has lower overhead during
GPU upload/download than ordinary system memory.
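
A consumer-side sketch of the two access paths described above.
GST_MAP_CUDA is quoted from the description, and interpreting the mapped
data pointer as a CUdeviceptr is an assumption about how the flag is
meant to be used, not something confirmed by the patch.

    #include <gst/gst.h>
    #include <cuda.h>
    /* GST_MAP_CUDA would come from the CUDA memory header added by this
     * series (assumed) */

    static void
    read_cuda_buffer (GstBuffer * buffer, gboolean direct_access)
    {
      GstMapInfo map;

      if (direct_access) {
        /* Map as device memory: no staging copy is performed */
        if (gst_buffer_map (buffer, &map, GST_MAP_READ | GST_MAP_CUDA)) {
          CUdeviceptr dev_ptr = (CUdeviceptr) (guintptr) map.data;

          /* ... run CUDA work against dev_ptr ... */
          (void) dev_ptr;
          gst_buffer_unmap (buffer, &map);
        }
      } else {
        /* Ordinary map: data is downloaded into the internal
         * cuMemAllocHost() staging memory for CPU access */
        if (gst_buffer_map (buffer, &map, GST_MAP_READ)) {
          /* ... CPU reads map.data ... */
          gst_buffer_unmap (buffer, &map);
        }
      }
    }
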
Edited by Seungha Yang

Merge request reports


Activity

  • Seungha Yang added 22 commits

    • c1760893...ef16d755 - 17 commits from branch gstreamer:master
    • 10c86454 - nvcodec: Add CUDA specific memory and bufferpool
    • 9ba2860f - nvdec: Support CUDA buffer pool
    • c1524292 - nvenc: Support CUDA buffer pool
    • e3aac74b - cudacontext: Enable direct CUDA memory access over multiple GPUs
    • 2069f1ab - nvcodec: Peer direct access support

  • Seungha Yang resolved all threads
