# gst-plugins-rs merge requests

https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests (feed updated 2024-03-28)

## Draft: closedcaption: add cea708overlay element

Merge request: https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1519 (Matthew Waters, 2024-03-28)

Can render either a single CEA-708 service or a single CEA-608 channel.

Depends on:

- [ ] https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1406
- [ ] https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1517

## aws: transcriber: add support for language identification

Merge request: https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1518 (François Laignel, 2024-03-28)

This commit adds support for [language identification] to the transcriber element and
makes use of the identified language in the translation pad.
Language identification is activated with either of the following properties
(which match the service API):
- 'identify-language' when a single language is expected in the stream.
- 'identify-multiple-languages' otherwise.
In both cases, the property 'language-options' must list the possible
languages. Ex.: "en-US,es-US,fr-FR".
The following pipeline identifies languages from a stream possibly containing
multiple languages, outputs the transcription to the 'src' pad and translates
when needed to French ('translate_src_0') & English ('translate_src_1'):
```shell
gst-launch-1.0 -e uridecodebin uri=file:///__PATH_TO_FILE__ ! audioconvert
! awstranscriber name=t \
access-key="__TO_BE_DEFINED__" secret-access-key="__TO_BE_DEFINED__" \
identify-multiple-languages=true \
language-options="en-US,es-US,fr-FR" \
translate_src_0::language-code=fr \
translate_src_1::language-code=en \
t. ! fakesink dump=true \
t.translate_src_0 ! fakesink dump=true \
t.translate_src_1 ! fakesink dump=true
```
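For the single-language case, an equivalent sketch simply swaps the identification property (derived from the pipeline above; untested):

```shell
gst-launch-1.0 -e uridecodebin uri=file:///__PATH_TO_FILE__ ! audioconvert \
  ! awstranscriber access-key="__TO_BE_DEFINED__" secret-access-key="__TO_BE_DEFINED__" \
    identify-language=true language-options="en-US,fr-FR" \
  ! fakesink dump=true
```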
### Depends on https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1521
[language identification]: https://docs.aws.amazon.com/transcribe/latest/dg/lang-id-stream.html

## Draft: closedcaption: Remove internal libcaption C code

Merge request: https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1517 (Matthew Waters, 2024-03-27)

Reimplemented with Rust versions.

## livesync: add support for variable framerate

Merge request: https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1516 (Michael Tretter, 2024-03-22)

A variable framerate is signaled by a framerate of 0/1 and the
max-framerate field. This may be the result of a source that can produce
up to max-framerate frames, but may produce fewer. A live stream may
expect frames at the expected max-framerate with updated timestamps
even if the frame contents didn't change.
Support streams with a variable framerate in livesync and send frames at max-framerate downstream.
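To illustrate, a minimal sketch of such caps on a launch line (illustrative only: `videotestsrc` stands in for a real variable-framerate live source, and the caps values are assumptions):

```shell
gst-launch-1.0 videotestsrc is-live=true \
    ! video/x-raw,framerate=0/1,max-framerate=30/1 \
    ! livesync \
    ! videoconvert ! autovideosink
```

With `framerate=0/1` plus `max-framerate=30/1` negotiated, livesync can repeat the previous frame with updated timestamps so downstream keeps receiving frames at max-framerate even when the source produces fewer.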
## Draft: net/hlssink3: Add hlssink4

Merge request: https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1515 (Sanchayan Maity, 2024-03-28)

`hlssink4` adds support for the following as per RFC 8216:
- Multivariant/master playlist
- Alternate Renditions
- Variant Streams
Closes #489.
### TODO:
- Support for closed captions
- Support for WebVTT subtitles
- Support for S3?
### Motivation
The HTTP Live Streaming specification, [RFC 8216](https://www.rfc-editor.org/rfc/rfc8216), defines alternate renditions and variant streams.
Existing HLS sink elements, viz. `hlssink2`, `hlssink3` and `s3hlssink`, do not support alternate renditions and variant streams as per RFC 8216. `hlssink4` is written with the intent of supporting alternate renditions and variant streams.
If alternate renditions or variant streams are not required, use of `hlssink3` or `hlssink2` is recommended instead.
### Reducing the problem space
To simplify the implementation, `hlssink4` purposely opts to limit itself with respect to the following two points.
- Handling of media streams
`hlssink4` does not do any handling of the media streams. The user is expected to provide `hlssink4` with the required media streams as input.
For example, consider a case with three video variants, where the same audio is expected to be muxed with all three. The onus of tee’ing the audio and providing the same audio stream to be muxed with each of the three video variant streams is on the user.
- Muxing of audio and video
Except for the multi-video-variant MPEG-TS case, where the same audio is muxed with all the video variants, audio and video are not muxed together. Muxed audio and video is not supported with alternate renditions. For example, the layout described in section 8.7 of RFC 8216, where all video renditions are required to contain the audio, is not supported.
### Design Considerations
Alternate rendition and variant stream parameters are taken as input from the user.
Two approaches for this were considered.
1. Property settings on the element
Alternate renditions and variant streams are provided via property settings on the element. The element sets up an independent `hlssink3` for each rendition or variant stream requested and emits the `pad-added` signal for pads added, to be used by the application to provide the streams as input for the respective renditions and variant streams. With a `hlssink3` connected downstream for each rendition or variant stream requested, each stream gets its own playlist.
2. Pad property settings
Alternate rendition and variant stream are property settings on the pad. For each audio or video pad requested, the user sets either the alternate rendition or the variant stream property on the pad. Each requested pad is connected to a `hlssink3` downstream, so each rendition or variant stream gets its own playlist.
### Implementation
`hlssink4` opts for the second approach above, where an audio or video pad has the alternate rendition or variant stream as pad settings. Alternate rendition or variant stream inputs provided via these pad settings are used in the construction of the multivariant/master playlist.
Whether CMAF or MPEG-TS is used is selected by a muxer setting on the element. This in turn controls whether `hlscmafsink` or `hlssink3` is used for generating the media playlist and segments.
For generating media playlists, a `hlssink3` or `hlscmafsink` is connected downstream for each audio or video stream requested as part of an alternate rendition group or a variant stream.
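As an illustration of this wiring, a launch line might look like the sketch below. This is hypothetical only: the pad names, property names and value syntax are assumptions made to illustrate the pad-property approach, not the element's final interface.

```shell
# Hypothetical property names; consult the MR for the actual pad API.
gst-launch-1.0 hlssink4 name=hls \
    video_0::variant-stream="bandwidth=2000000" \
    audio_0::alternate-rendition="group-id=aud,language=en" \
    videotestsrc ! x264enc ! hls.video_0 \
    audiotestsrc ! avenc_aac ! hls.audio_0
```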
The `CODECS` string, required for variant streams and thus for building the multivariant playlist, has to be manually provided when using MPEG-TS. For the CMAF case, the implementation takes care of generating the `CODECS` string. If the `CODECS` field is not set when providing a variant stream input, the video codec string will not contain the profile/level information, which can result in media playback failures in browsers. This is because for the MPEG-TS case, H264/H265 will use the byte-stream stream-format, which does not have `codec_data`. In the absence of `codec_data`, one has to resort to parsing the byte-stream to get the relevant profile/level information, which is currently not implemented.
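For reference, the `CODECS` attribute ends up on the variant entries of the multivariant playlist, per RFC 8216 section 4.3.4.2 (values here are illustrative):

```
#EXT-X-STREAM-INF:BANDWIDTH=2000000,CODECS="avc1.64001f,mp4a.40.2",RESOLUTION=1280x720
hi/media.m3u8
```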
### Testing
See the test code in `net/hlssink3/tests/hlssink4.rs` for examples.
The master playlist generated can be tested as follows. In the directory where the master playlist is written, run a simple HTTP server using Python.
```bash
python3 -m http.server
```
#### GStreamer
```bash
gst-play-1.0 http://localhost:8000/master.m3u8
```
#### ffmpeg
```bash
ffplay http://localhost:8000/master.m3u8
```
#### video.js
To test with [video.js](https://videojs.com/), add the below HTML in an `index.html` file in the same location as the master playlist. Open `localhost:8000` in the browser to test.
```html
<head>
<link href="https://vjs.zencdn.net/8.10.0/video-js.css" rel="stylesheet" />
<!-- If you'd like to support IE8 (for Video.js versions prior to v7) -->
<!-- <script src="https://vjs.zencdn.net/ie8/1.1.2/videojs-ie8.min.js"></script> -->
</head>
<body>
<video
id="my-video"
class="video-js"
controls
preload="auto"
width="640"
height="480"
data-setup="{}"
>
<source src="http://localhost:8000/master.m3u8" type="application/x-mpegURL" />
<p class="vjs-no-js">
To view this video please enable JavaScript, and consider upgrading to a
web browser that
<a href="https://videojs.com/html5-video-support/" target="_blank"
>supports HTML5 video</a
>
</p>
</video>
<script src="https://vjs.zencdn.net/8.10.0/video.min.js"></script>
</body>
```
This example has been taken from [here](https://videojs.com/getting-started).

## Draft: dav1d: Set colorimetry parameters on src pad caps

Merge request: https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1514 (Philippe Normand, 2024-03-26)

## tracers: Add a pad push durations tracer

Merge request: https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1506 (Sebastian Dröge, 2024-03-20)

This tracer measures the time it takes for a buffer/buffer list push to return.
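Tracers are activated via the `GST_TRACERS` environment variable. A sketch of how this one might be enabled (the registered tracer name here is an assumption; check the MR for the actual name):

```shell
GST_TRACERS=pad-push-timings GST_DEBUG=GST_TRACER:7 \
    gst-launch-1.0 videotestsrc num-buffers=100 ! videoconvert ! fakesink
```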
## webrtc: janus: add 'janus-state' property to the signaller

Merge request: https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1505 (Guillaume Desmottes, 2024-03-25)

This property can be used by applications to track the state of the
signaller, especially to know when the stream is up.
Fix #510
Closes #510

## webrtc: add raw payload support

Merge request: https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1501 (François Laignel, 2024-03-25)

## Depends on
Only the last commit is specific to this MR, the other commits implement other features which serve as a basis for this work and come from the following MRs:
* https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1463: this is the base infrastructure for the precise example MR stack. It aims at providing the same features & convenience as the RTSP based [`rtp-rapid-sync-example`](https://gitlab.freedesktop.org/slomo/rtp-rapid-sync-example/) for WebRTC. It helped improve the WebRTC C stack with support for intra CNAME synchronization & RFC 6051 in-band NTP-64 timestamps for rapid synchronization. The example also allows spawning an arbitrary number of audio and/or video streams.
* https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1500: adds RFC 7273 clock signalling to `webrtcsink`.
## Raw payload support
This MR adds support for raw payloads such as L24 audio to `webrtcsink` &
`webrtcsrc`.
Most changes take place within the `Codec` helper structure:
* A `Codec` can now advertise a depayloader. This also ensures that a format
not only can be decoded when necessary, but it can also be depayloaded in the
first place.
* It is possible to declare raw `Codec`s, meaning that their caps are compatible
with a payloader and a depayloader without the need for an encoder and decoder.
* Previous accessor `has_decoder` was renamed as `can_be_received` to account
for codecs which can be handled by an available depayloader with or without
the need for a decoder.
* New codecs were added for the following formats:
* L24, L16, L8 audio.
* RAW video.
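For instance, a minimal sketch of sending raw 24-bit audio (payloaded as L24) with `webrtcsink`. This assumes this MR's raw codecs and a signalling server reachable at the element's default address:

```shell
gst-launch-1.0 audiotestsrc is-live=true \
    ! audio/x-raw,format=S24BE,rate=48000,channels=2,layout=interleaved \
    ! webrtcsink audio-caps="audio/x-raw,format=S24BE"
```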
## Open question
`webrtcsink` now proposes the raw codecs as part of its `audio-caps` &
`video-caps`, after the encoder based codecs. E.g.:
```
audio/x-opus
audio/x-raw
format: S24BE
layout: interleaved
audio/x-raw
format: S16BE
layout: interleaved
audio/x-raw
format: U8
layout: interleaved
```
The same was done for `webrtcsrc` with the `audio-codecs` & `video-codecs`.
E.g.:
```
Default: "< (string)OPUS, (string)L24, (string)L16, (string)L8 >"
```
We could keep current defaults (e.g. 'audio/x-opus') and list the possible
additional variants in the property's blurb, similarly to what `webrtcsrc` does:
```
audio-codecs: [...] Valid values: [OPUS, L24, L16, L8]
```
The `webrtc-precise-sync` examples were updated to demonstrate streaming of raw
audio or video.

## webrtc: add RFC 7273 support

Merge request: https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1500 (François Laignel, 2024-03-26)

## Depends on:
Only the last commit is specific to this MR, the other commit implements other features which serve as a basis for this work and comes from the following MR:
* https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1463: this is the base infrastructure for the precise example MR stack. It aims at providing the same features & convenience as the RTSP based [`rtp-rapid-sync-example`](https://gitlab.freedesktop.org/slomo/rtp-rapid-sync-example/) for WebRTC. It helped improve the WebRTC C stack with support for intra CNAME synchronization & RFC 6051 in-band NTP-64 timestamps for rapid synchronization. The example also allows spawning an arbitrary number of audio and/or video streams.
This also depends on the following merged MRs:
* https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1509
* https://gitlab.freedesktop.org/gstreamer/gstreamer-rs/-/merge_requests/1406
* https://gitlab.freedesktop.org/gstreamer/gstreamer-rs/-/merge_requests/1405
## RFC 7273 clock signalling & synchronization
This commit implements [RFC 7273] (NTP & PTP clock signalling & synchronization)
for `webrtcsink` by adding the "ts-refclk" & "mediaclk" SDP media attributes to
identify the clock. These attributes are handled by `rtpjitterbuffer` on the
consumer side. They MUST be part of the SDP offer.
When used with an NTP or PTP clock, "mediaclk" indicates the RTP offset at the
clock's origin. Because the payloaders are not instantiated when the offer is
sent to the consumer, the RTP offset is set to 0 and the payloader
`timestamp-offset`s are set accordingly when they are created.
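For reference, a media section in such an offer could carry the attributes along these lines (an illustrative sketch following RFC 7273, not output captured from this MR):

```
m=audio 5004 UDP/TLS/RTP/SAVPF 96
a=rtpmap:96 OPUS/48000/2
a=ts-refclk:ptp=IEEE1588-2008:00-11-22-FF-FE-33-44-55:0
a=mediaclk:direct=0
```

`a=mediaclk:direct=0` matches the behaviour described above: the offer advertises an RTP offset of 0 and the payloaders are configured accordingly once created.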
The `webrtc-precise-sync` examples were updated to be able to start with an NTP
(default), a PTP or the system clock (on the receiver only). The rtp jitter
buffer will synchronize with the clock signalled in the SDP offer provided the
sender is started with `--do-clock-signalling` & the receiver with
`--expect-clock-signalling`.
[RFC 7273]: https://datatracker.ietf.org/doc/html/rfc7273
FIXME:
- [x] With a PTP clock, the reference timestamp stays at 00:00:00.000 on the receiver overlay, while the reference timestamp meta is detected by the pad probe. `timeoverlay` needs to be provided with the meta's reference via the property `reference-timestamp-caps`.
- [x] When the receiver is configured to start with a PTP clock and clock signalling is enabled and the signalled clock is a PTP clock, an intermittent assertion failure in H264 parsers can occur. Doesn't occur when `rtp-latency` is increased. `gst_h264_parse_process_backlog: assertion failed: (h264parse->nal_backlog->len > 0)`. Fixed by discarding retransmission on the receiver. See comment in code for references to related known issues.

## Draft: meson: Make any Meson-provided dependencies available to cargo

Merge request: https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1495 (amyspark, 2024-03-14)

Hi all,
This MR is to fix an issue where Cargo would be blissfully unaware of any deps provided by Meson. They expect downstream consumers that need those deps externally [to set `PKG_CONFIG_PATH` to the relevant `meson-uninstalled` build folders](https://mesonbuild.com/Release-notes-for-0-54-0.html#uninstalled-pkgconfig-files).
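In practice the documented workaround looks something like this (the build directory path is a placeholder):

```shell
export PKG_CONFIG_PATH="/path/to/gstreamer/builddir/meson-uninstalled:$PKG_CONFIG_PATH"
cargo build
```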
@cole.richardson12, can you test this works for you?
Fixes #512

## uriplaylistbin: allow to change 'iterations' property while playing

Merge request: https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1492 (Guillaume Desmottes, 2024-03-11)

## rtp: Add linear audio (L8, L16, L20, L24) RTP payloaders / depayloaders

Merge request: https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1488 (Tim-Philipp Müller, 2024-03-20)

Depayloaders even set durations on the output buffer (unlike the legacy C one).
L16 depayloader handles pt 10/11 correctly if no input clock-rate or channel number is set (unlike the legacy one).
Channel reordering implementation should be more efficient than the legacy C one.
Also added an L20 depayloader, because we can.
## Pipelines
### L8 Sender (6ch)
```shell
gst-launch-1.0 audiotestsrc samplesperbuffer=480 ! audio/x-raw,channels=6 ! rtpL8pay2 ! udpsink host=127.0.0.1
```
### L8 Receiver (6ch)
```shell
GST_DEBUG=*depay:6 gst-launch-1.0 udpsrc address=127.0.0.1 caps=application/x-rtp,encoding-name=L8,media=audio,clock-rate=48000,channels=6,channel-order=DV.LRLsRsCS ! rtpL8depay2 ! fakesink silent=false
```

## rtp: Add VP8/9 RTP payloader/depayloader

Merge request: https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1487 (Sebastian Dröge, 2024-03-11)

## Draft: net/webrtc/janusvr: add new source element

Merge request: https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1473 (Eva Pace, 2024-03-25)

This Merge Request is not complete, but covers mostly what is needed to make the Janus Video Room signaller support a source element too.
Example of usage:
```bash
# it may not work as I've experienced a few bugs myself
$ gst-launch-1.0 janusvrwebrtcsrc signaller::room-id=1234 signaller::display-name=banana webrtcsrc::uri=ws://127.0.0.1:8188 ! videoconvert ! autovideosink
```
What changed:
- New `role` property for each plugin, producer (`sink`) and consumer (`src`);
- `handle_reply` became `handle_reply_producer`, and the new one handles the consumer path;
- `janus-endpoint` property became `uri`;
- `handle_id` got split in two. One for the publisher (`sink`), and the other for the consumer (`src`);
- The `sink` uses only the `producer_handle`;
- The `src` uses both: one to subscribe to SDPs in the room, the other to join the room.

## Draft: Add elements to convert relation metas to/from ONVIF metas

Merge request: https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1464 (Benjamin Gaignard, 2024-03-19)

Create elements to convert relation metas to/from ONVIF metas.
The goal is to be able to send and receive relation metas through RTP by using ONVIF metas.
A third element separates the ONVIF metadata from the buffer to be able to send it via RTP.
Pipelines could look like:
relation meta src -> relationmeta2onvifmeta -> onvifmetadataseparator -> RTP src
RTP sink -> onvifmetadatacombiner -> onvifmeta2relationmeta -> relation meta sink
This series depends on https://gitlab.freedesktop.org/gstreamer/gstreamer-rs/-/merge_requests/1391

## webrtc: add precise synchronization example

Merge request: https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1463 (François Laignel, 2024-03-28)

This example demonstrates a sender / receiver setup which ensures precise synchronization of multiple streams in a single session.
[RFC 6051]-style rapid synchronization of RTP streams is available as an option. See the [Instantaneous RTP synchronization...] blog post for details about this mode and an example based on RTSP instead of WebRTC.
[RFC 6051]: https://datatracker.ietf.org/doc/html/rfc6051
[Instantaneous RTP synchronization...]: https://coaxion.net/blog/2022/05/instantaneous-rtp-synchronization-retrieval-of-absolute-sender-clock-times-with-gstreamer/
Depends on:
* https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1511
* https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1502
* https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1462
* https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6119
* https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6109
* https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6110
* https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6116

## Draft: webrtcsink: Add video simulcast support to LiveKit sink

Merge request: https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1450 (Jordan Yelloz, 2024-02-29)

This change adds video simulcast support to webrtcsink as it's supported by
LiveKit but it shouldn't be too much different for other servers that support
simulcasts.
The overall idea is that the sender application supplies multiple video streams
at different qualities that contribute to one logical video track. The LiveKit
server (SFU) will decide which quality to send to each receiving client based
on a combination of the client's preferences and network conditions.
This implementation currently only creates simulcasts in configurations where
the sink creates the SDP offer.
Also, there is a more advanced version of WebRTC simulcast with scalable video
codecs, where higher quality components of a track may depend on its lower
quality components; this implementation doesn't support that.
What's needed in this case are the following:
- A mechanism to group video streams together as one track which is the MID
header extension
- A mechanism to tag each stream within a track which is the RID header
extension
- A mechanism to assign a role/priority to the streams within a track which is
the `a=simulcast` media attribute.
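Put together, those three mechanisms could show up in an SDP media section like this (an illustrative sketch following RFC 8851/8853, not output from this MR):

```
m=video 9 UDP/TLS/RTP/SAVPF 96
a=mid:video0
a=rid:q send
a=rid:h send
a=rid:f send
a=simulcast:send q;h;f
```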
Based on what I've tested, webrtcbin has enough support for simulcasts to make
this work as long as each component of a simulcast within a single media
section has its own SSRC.
The changes necessary to the webrtcsink are:
- Allow MID/RID attributes to be set on each sinkpad
- Create a tree of MID=>RIDs based on the element's sinkpads
- For each simulcast in the tree, the contributing video streams will share a
single webrtcbin sinkpad through a rtpfunnel
- Assign MID and RID header extensions to the payloader of each sinkpad that's
part of a simulcast
- For each simulcast, assign a list of caps to each rtpfunnel with the
appropriate `a-mid`, `rid-*`, and `a-simulcast` attributes by combining the
caps of its members
- For each video stream that's a member of a simulcast, also add the standard
`max-width` and `max-height` video dimension restrictions. This is not
absolutely necessary in general but some servers might prefer that this be
transmitted in the SDP. More importantly for LiveKit purposes, it allows a kind
of informal interface between the sink element and the signalling client.
The changes necessary for the LiveKit signalling client are:
- Add a basic parser for SDP media restrictions and build LiveKit `VideoLayer`s
using the video dimension restrictions specified for each RID.
Also, LiveKit requires that users follow the convention of using the following RID values within a simulcast:
- "q" for the low quality ("quarter") stream
- "h" for the medium quality ("half") stream
- "f" for the high quality ("full") stream
The sink element doesn't enforce that but the signalling client will only build video layers from those RIDs.
### TODO:
- [ ] Consider modifying design so that these additions are only applied to the
livekitwebrtcsink subclass to avoid adding unnecessary complexity to the other
sinks.
- [x] Properly incorporate the RTP header extension registration that was added
recently.
- [x] Add some documentation on how to create a simulcast with LiveKit

## (f)mp4mux: Add support for audio priming and GstAggregator::start-time

Merge request: https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1434 (Thibault Saunier, 2024-02-06)

- **fmp4mux: Add support to write edts to handle audio priming**
- **fmp4mux: Add support for GstAggregator::start-time-selector==set**
Taking it into account so the encoded stream start time is what was set
in `GstAggregator::start-time`, respecting what was specified by the user.
- **mp4mux: Add support for edit lists**
So we properly handle audio priming
- **mp4mux: Add support for GstAggregator::start-time-selector==set**
Taking it into account so the encoded stream start time is what was set
in `GstAggregator::start-time`, respecting what was specified by the user.
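For instance, a sketch of pinning the start time from a launch line (the muxer element name `isofmp4mux` and the AAC encoder are assumptions; `start-time-selector` and `start-time` are the `GstAggregator` properties):

```shell
gst-launch-1.0 -e audiotestsrc num-buffers=500 ! avenc_aac \
    ! isofmp4mux start-time-selector=set start-time=0 \
    ! filesink location=out.mp4
```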
## Draft: rtp: new rtpbin2 element

Merge request: https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1426 (Matthew Waters, 2024-03-14)

See individual commits.
Includes:
- RTP/AVP profile
- RTP/AVPF profile
- PLI/FIR handling
- Reduced size RTCP
- Jitterbuffer
- CNAME synchronisation
TODO:
- [x] Handle SSRC/PT mapping better and also externally.
Some changes from the existing rtpbin:
1. RTCP handling for multiple RTP sessions uses the same thread(s), and thus fewer threads overall in multi-session use cases.
2. Cleaner RTCP scheduling code
3. CNAME synchronisation happens in a single place instead of being split between jitterbuffer and rtpbin
4. Send and receive halves have been split in order to facilitate SFU/MCU/etc. use cases without wormhole elements.