GStreamer issues
https://gitlab.freedesktop.org/groups/gstreamer/-/issues

matroska-mux: Add GstVideoCodecAlpha support
https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/2762
Alexandr Topilski, 2023-07-04

WebM alpha mode is now supported for demuxing and decoding; the next logical step would be to support encoding and muxing. For this to happen, we'll need an encoder wrapper similar to what is used for decoders. Inside the bin, we'll need an alphasplit element to separate the color and alpha planes. This implementation will likely need to behave differently depending on the encoder used. Then two (unmodified) encoder instances can be used, and a codecalphamux element will put the alpha channel into the color GstBuffer using GstVideoCodecAlphaMeta (and of course update the caps).
Optionally, encoders whose API supports alpha internally can simply expose such support without a wrapper.
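As a rough sketch, the proposed encoding side could look like the following pipeline. Note that `alphasplit` and `codecalphamux` are the hypothetical elements proposed above (they don't exist yet), and the pad names are assumptions for illustration:

```
... ! alphasplit name=split \
    split.video ! vp9enc ! mux.sink \
    split.alpha ! vp9enc ! mux.alpha \
    codecalphamux name=mux ! matroskamux ! filesink location=out.webm
```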
With this in place, the Matroska muxer will need to add the alpha mode tag and attach the auxiliary alpha data.

RTP transport-cc extension: simplify design
https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/923
Mathieu Duponchelle, 2024-02-14

At the moment, the design for usage of the transport-cc RTP header extension with our RTP elements is as follows:
* Application adds the extension to the payloader(s)
* payloader adds the extension bits
* the packets flow to rtpsession through various elements, where rtpsession sets a TWCC seqnum on them based on two conditions:
- the extension is advertised in the caps
- the session finds the extension bits already present in the RTP packet
This all falls apart a bit when the scheme gets exposed to BUNDLE / UlpRed FEC as we use it within `webrtcbin`:
* For one, we bundle the streams with rtpfunnel, which applies some ultimately pointless cleverness by instantiating a TWCC extension and setting n-streams on it. The logic there is to preserve whatever TWCC seqnums were set when n-streams=1, and to create its own otherwise. In practice, the logic in the TWCC extension is similar to that of rtpsession, in that it only sets the bits on packets that already had them, and rtpsession (with `rtptwcc`) discards those seqnums anyway. As I understand it, that logic can simply be nuked in rtpfunnel.
* Then there's the problem of FEC: when we use UlpRed, in between the payloader and the session we end up with elements that create protection packets / wrap media packets, and these packets thus don't have the extension bits. I worked around this in https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1414 , but it is pretty ad hoc code, and just adds more places where we special-case transport-cc, which is not too nice.
What I would propose is the following:
* Make it so rtpsession offers similar API to `rtpbasepayload` (in practice all we need here is an `add-extension` action signal)
* Let the TWCC extension offer API to specify which payload types must receive a TWCC seqnum (in theory it should be all of them, but Firefox's implementation of transport-cc is braindead, and adding the seqnums to audio packets leads to incorrect stats)
* Figure out some mechanism to tell `webrtcbin` to do just that, unclear what it'd look like but we can figure that out :)
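As a sketch, the seqnum assignment such an API would drive might look like this. This is a hypothetical standalone model of the proposed per-payload-type filtering, not actual `rtptwcc` code:

```rust
use std::collections::HashSet;

/// Assigns a single, monotonically increasing (wrapping) TWCC sequence
/// number to packets, but only for configured payload types.
struct TwccWriter {
    enabled_pts: HashSet<u8>,
    next_seqnum: u16,
}

impl TwccWriter {
    fn new(enabled_pts: impl IntoIterator<Item = u8>) -> Self {
        Self {
            enabled_pts: enabled_pts.into_iter().collect(),
            next_seqnum: 0,
        }
    }

    /// Returns the TWCC seqnum to write into the extension header, or
    /// None if this payload type should be skipped (e.g. audio, to work
    /// around the Firefox stats issue mentioned above).
    fn assign(&mut self, payload_type: u8) -> Option<u16> {
        if !self.enabled_pts.contains(&payload_type) {
            return None;
        }
        let seq = self.next_seqnum;
        self.next_seqnum = self.next_seqnum.wrapping_add(1);
        Some(seq)
    }
}

fn main() {
    // PT 96 = video (gets seqnums), PT 111 = audio (skipped).
    let mut w = TwccWriter::new([96]);
    assert_eq!(w.assign(96), Some(0));
    assert_eq!(w.assign(111), None); // audio: no seqnum written
    assert_eq!(w.assign(96), Some(1)); // video seqnums stay contiguous
    println!("ok");
}
```

A single session-wide counter like this also sidesteps the n-streams bookkeeping in rtpfunnel entirely.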
Thoughts @hgr @ystreet @ocrete?

hlssink3: Support generating WebVTT HLS manifests
https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/issues/175
Rafael Carício, 2022-11-30

It would be nice to extend the `hlssink3` element to add a new pad that accepts `application/x-subtitle-vtt-fragmented` buffers and is then able to generate WebVTT HLS manifests. E.g.:
```
#EXTM3U
#EXT-X-TARGETDURATION:6
#EXT-X-VERSION:3
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-PLAYLIST-TYPE:VOD
#EXTINF:6.00000,
fileSequence0.webvtt
#EXTINF:6.00000,
fileSequence1.webvtt
#EXTINF:6.00000,
fileSequence2.webvtt
#EXTINF:6.00000,
fileSequence3.webvtt
#EXTINF:6.00000,
fileSequence4.webvtt
#EXTINF:6.00000,
fileSequence5.webvtt
#EXTINF:6.00000,
fileSequence6.webvtt
#EXTINF:6.00000,
```
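For illustration, a playlist in that shape is mostly string assembly; here is a hypothetical sketch of what the element would have to emit (the `webvtt_playlist` helper and the trailing `#EXT-X-ENDLIST` are assumptions, not hlssink3 code):

```rust
/// Build a VOD WebVTT media playlist from segment durations (seconds).
/// Hypothetical helper, not actual hlssink3 code.
fn webvtt_playlist(durations: &[f64]) -> String {
    // Target duration must be >= the longest segment duration.
    let target = durations.iter().cloned().fold(0.0_f64, f64::max).ceil() as u64;
    let mut out = String::new();
    out.push_str("#EXTM3U\n");
    out.push_str(&format!("#EXT-X-TARGETDURATION:{}\n", target));
    out.push_str("#EXT-X-VERSION:3\n");
    out.push_str("#EXT-X-MEDIA-SEQUENCE:0\n");
    out.push_str("#EXT-X-PLAYLIST-TYPE:VOD\n");
    for (i, d) in durations.iter().enumerate() {
        out.push_str(&format!("#EXTINF:{:.5},\nfileSequence{}.webvtt\n", d, i));
    }
    // A VOD playlist is normally terminated explicitly.
    out.push_str("#EXT-X-ENDLIST\n");
    out
}

fn main() {
    let m3u8 = webvtt_playlist(&[6.0, 6.0, 6.0]);
    assert!(m3u8.starts_with("#EXTM3U\n"));
    assert!(m3u8.contains("#EXTINF:6.00000,\nfileSequence2.webvtt"));
    println!("{}", m3u8);
}
```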
I imagine that adding this capability should enable the use of the `closedcaption/jsontovtt` element in combination with the `hlssink3` element. One question I have: should `hlssink3` support one manifest type per instantiation, or should we be able to send all caps (audio, video, and captions) to a single instance of `hlssink3`?

Refactor message view API
https://gitlab.freedesktop.org/gstreamer/gstreamer-rs/-/issues/363
Guillaume Desmottes, 2023-02-08

`gst::message::StreamCollection<'a>` can't easily be stored because of the lifetime, making the API less convenient to use. Having `gst::message::StreamCollection<gst::Message>` instead would make things easier for users.
See [here](https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/586#note_1170554) and [here](https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/586#note_1170555) for context.

tttocea608: Pop-on captions are always late
https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/issues/172
Michael Farrell, 2022-08-30

In 608 pop-on captions, one can only send a small amount of data with each frame. Pre-recorded content works within these limits by sending caption data in three stages:
1. **Before** the cue start time, send:
1. `resume caption loading`
2. `erase non-display memory`
3. set text style
4. send text line
5. repeat steps 3 and 4 for each line
2. **At** the cue start time, send: `end of caption`
3. **After** the cue end time:
* if no caption should be displayed, send: `erase display memory`
* if there is another caption in non-display memory that should replace the current caption, send: `end of caption`
At present, using `tttocea608` to generate pop-on captions results in cues that are always late. Only when it reaches the cue time (PTS) does it start getting data ready:
```
LOG tttocea608/imp.rs:820:<tttocea608-0> Handling Buffer { ptr: 0x7f5008029360, pts: 0:00:57.000000000, dts: 0:00:00.000000000, duration: 0:00:01.000000000, size: 120, offset: 18446744073709551615, offset_end: 18446744073709551615, flags: (empty), metas: [] }
LOG tttocea608/imp.rs:172:<tttocea608-0> 0:00:57.000000000 -> 0:00:57.033333333: cc 9420 (1420) '' '' (eia608_control_resume_caption_loading)
LOG tttocea608/imp.rs:172:<tttocea608-0> 0:00:57.033333333 -> 0:00:57.066666667: cc 94AE (142E) '' '' (eia608_control_erase_non_displayed_memory)
LOG tttocea608/imp.rs:628:<tttocea608-0> Processing Line { column: None, row: None, chunks: [Chunk { style: Green, underline: false, text: "57: Green!" }], carriage_return: None }
LOG tttocea608/imp.rs:172:<tttocea608-0> 0:00:57.066666667 -> 0:00:57.100000000: cc 94C2 (1442) '' '' (preamble: row: 13 col: 0 style: 1 chan: 0 underline: 0)
LOG tttocea608/imp.rs:172:<tttocea608-0> 0:00:57.100000000 -> 0:00:57.133333333: cc B537 (3537) '5' '7' (basicna)
LOG tttocea608/imp.rs:172:<tttocea608-0> 0:00:57.133333333 -> 0:00:57.166666667: cc BA20 (3A20) ':' ' ' (basicna)
LOG tttocea608/imp.rs:172:<tttocea608-0> 0:00:57.166666667 -> 0:00:57.200000000: cc C7F2 (4772) 'G' 'r' (basicna)
LOG tttocea608/imp.rs:172:<tttocea608-0> 0:00:57.200000000 -> 0:00:57.233333333: cc E5E5 (6565) 'e' 'e' (basicna)
LOG tttocea608/imp.rs:172:<tttocea608-0> 0:00:57.233333333 -> 0:00:57.266666667: cc 6EA1 (6E21) 'n' '!' (basicna)
LOG tttocea608/imp.rs:172:<tttocea608-0> 0:00:57.266666667 -> 0:00:57.300000000: cc 942F (142F) '' '' (eia608_control_end_of_caption)
```
Then `tttocea608` fills the remaining caption duration with padding:
```
TRACE tttocea608/imp.rs:181:<tttocea608-0> 0:00:57.300000000 -> 0:00:57.333333333: padding
```
If there is a gap between captions, `tttocea608` doesn't send anything at all, and just sits around waiting.
`tttocea608` in pop-on mode should be smarter about how it places cues, so that the next cue is loaded into memory **before** the cue's PTS and only an `end of caption` is sent **at** the PTS.
It could also leave caption loading to the last possible moment, depending on the number of `cc_data` bytes required to render a caption.
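For what it's worth, the needed lead time is straightforward to bound: 608 carries one byte pair per frame, so a cue of N pairs must start transmitting roughly N frames before its PTS. A hypothetical back-of-the-envelope model (not the element's code; the per-cue control-code accounting is an assumption):

```rust
/// 608 transmits one 2-byte pair per frame. Estimate how many pairs a
/// pop-on cue needs: RCL + ENM + EOC control codes, plus per line a
/// preamble pair and ceil(len/2) text pairs.
fn pairs_needed(lines: &[&str]) -> u64 {
    let control: u64 = 3; // resume caption loading, erase non-displayed, end of caption
    let text: u64 = lines
        .iter()
        .map(|l| 1 + (l.len() as u64 + 1) / 2) // preamble + text pairs
        .sum();
    control + text
}

/// Nanoseconds of lead time before the cue PTS at 29.97 fps.
fn lead_time_ns(lines: &[&str]) -> u64 {
    let frame_ns = 1_000_000_000u64 * 1001 / 30_000; // ~33.37 ms per frame
    // The `end of caption` pair itself goes out *at* the PTS,
    // so everything else must be sent before it.
    (pairs_needed(lines) - 1) * frame_ns
}

fn main() {
    // "57: Green!" is 10 chars -> 5 text pairs + 1 preamble, plus
    // RCL + ENM + EOC = 3 control pairs -> 9 pairs total, which
    // matches the 9 cc lines in the log above.
    assert_eq!(pairs_needed(&["57: Green!"]), 9);
    // 8 pairs must precede the PTS: roughly 267 ms of lead time.
    assert!(lead_time_ns(&["57: Green!"]) > 260_000_000);
    println!("lead = {} ns", lead_time_ns(&["57: Green!"]));
}
```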
The current behaviour of `tttocea608` only really works well for roll-up captions (for live content).

Add support for Windows ARM64 backend
https://gitlab.freedesktop.org/gstreamer/orc/-/issues/38
Seungha Yang, 2021-09-28

Related MR: https://gitlab.freedesktop.org/gstreamer/orc/-/merge_requests/60

isomp4: Add support for ffv1
https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/2787
Sebastian Dröge, 2023-07-08

See https://github.com/FFmpeg/FFV1/blob/master/ffv1.md#iso-base-media-file-format

rust/janus: Switch to a more exact JSON message data type
https://gitlab.freedesktop.org/gstreamer/gst-examples/-/issues/49
Sebastian Dröge, 2021-09-24

Currently all messages sent out are created via the `json!` macro, and all received messages could make use of more enums instead of using `Option`s all over the place.
I've created the below as part of another project (untested so far). Just putting this here for comment. CC @philn
It doesn't cover all possible messages (and only the videoroom plugin), nor does it contain all possible fields, but it should be sufficient.
```rust
use serde::{Deserialize, Serialize};
#[derive(Serialize, Deserialize, Debug)]
#[serde(tag = "janus")]
#[serde(rename_all = "lowercase")]
pub enum Message {
Create {
transaction: String,
},
Destroy {
transaction: String,
session_id: u64,
},
Attach {
transaction: String,
session_id: u64,
plugin: String,
},
Detach {
transaction: String,
session_id: u64,
handle_id: u64,
},
Keepalive {
transaction: String,
session_id: u64,
#[serde(skip_serializing_if = "Option::is_none")]
handle_id: Option<u64>,
},
Trickle {
transaction: String,
session_id: u64,
#[serde(skip_serializing_if = "Option::is_none")]
handle_id: Option<u64>,
#[serde(flatten)]
candidates: IceCandidates,
},
Message {
transaction: String,
#[serde(skip_serializing_if = "Option::is_none")]
sender: Option<u64>,
session_id: u64,
handle_id: u64,
body: MessageBody,
#[serde(skip_serializing_if = "Option::is_none")]
jsep: Option<Jsep>,
},
Success {
transaction: String,
#[serde(skip_serializing_if = "Option::is_none")]
sender: Option<u64>,
#[serde(skip_serializing_if = "Option::is_none")]
session_id: Option<u64>,
data: Option<SuccessData>,
plugindata: Option<Plugindata>,
},
Error {
transaction: String,
error: ErrorMessage,
},
Event {
#[serde(skip_serializing_if = "Option::is_none")]
transaction: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
session_id: Option<u64>,
sender: u64,
plugindata: Plugindata,
#[serde(skip_serializing_if = "Option::is_none")]
jsep: Option<Jsep>,
},
Ack {
transaction: String,
#[serde(skip_serializing_if = "Option::is_none")]
session_id: Option<u64>,
#[serde(skip_serializing_if = "Option::is_none")]
sender: Option<u64>,
},
WebRTCUp {
session_id: Option<u64>,
sender: Option<u64>,
},
Media {
session_id: Option<u64>,
sender: Option<u64>,
#[serde(rename = "type")]
type_: String,
receiving: bool,
},
SlowLink {
session_id: Option<u64>,
sender: Option<u64>,
uplink: bool,
nacks: u64,
},
HangUp {
session_id: Option<u64>,
sender: Option<u64>,
reason: String,
},
}
#[derive(Serialize, Deserialize, Debug)]
#[serde(rename_all = "lowercase")]
pub struct ErrorMessage {
pub code: u32,
pub reason: String,
}
#[derive(Serialize, Deserialize, Debug)]
#[serde(rename_all = "lowercase")]
pub struct SuccessData {
pub id: u64,
}
#[derive(Serialize, Deserialize, Debug)]
#[serde(tag = "plugin")]
#[serde(rename_all = "lowercase")]
pub enum Plugindata {
#[serde(rename = "janus.plugin.videoroom")]
VideoRoom { data: VideoRoomData },
}
#[derive(Serialize, Deserialize, Debug)]
#[serde(tag = "type")]
#[serde(rename_all = "lowercase")]
pub enum Jsep {
Candidate(IceCandidates),
Offer {
sdp: String,
#[serde(skip_serializing_if = "Option::is_none")]
trickle: Option<bool>,
},
Answer {
sdp: String,
#[serde(skip_serializing_if = "Option::is_none")]
trickle: Option<bool>,
},
}
#[derive(Serialize, Deserialize, Debug)]
#[serde(untagged)]
#[serde(rename_all = "lowercase")]
pub enum IceCandidates {
Candidate { candidate: IceCandidate },
Candidates { candidates: Vec<IceCandidate> },
}
#[derive(Serialize, Deserialize, Debug)]
#[serde(untagged)]
#[serde(rename_all = "lowercase")]
pub enum IceCandidate {
Candidate {
candidate: String,
#[serde(rename = "sdpMLineIndex")]
sdp_mline_index: u32,
#[serde(skip_serializing_if = "Option::is_none")]
#[serde(rename = "sdpMid")]
sdp_mid: Option<String>,
},
Completed {
completed: bool,
},
}
#[derive(Serialize, Deserialize, Debug)]
#[serde(tag = "videoroom")]
#[serde(rename_all = "lowercase")]
pub enum VideoRoomData {
Created {
room: u64,
permanent: bool,
},
Destroyed {
room: u64,
},
Joined {
room: u64,
#[serde(skip_serializing_if = "Option::is_none")]
description: Option<String>,
id: u64,
},
Attached {
room: u64,
},
Success {
room: Option<u64>,
},
Event(Event),
}
#[derive(Serialize, Deserialize, Debug)]
#[serde(tag = "request")]
#[serde(rename_all = "lowercase")]
pub enum MessageBody {
Create {
#[serde(skip_serializing_if = "Option::is_none")]
description: Option<String>,
publishers: u32,
#[serde(skip_serializing_if = "Option::is_none")]
audiocodec: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
videocodec: Option<String>,
},
Destroy {
room: u64,
},
Join(Join),
Publish {
#[serde(skip_serializing_if = "Option::is_none")]
display: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
audiocodec: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
videocodec: Option<String>,
},
Unpublish,
Leave,
Start,
}
// FIXME: This is only for videoroom. How to distinguish the different plugins?
#[derive(Serialize, Deserialize, Debug)]
#[serde(untagged)]
#[serde(rename_all = "lowercase")]
pub enum Event {
Error {
error_code: u32,
error: String,
},
Joining {
room: u64,
joining: Joining,
},
Configured {
configured: String,
},
Unpublished {
#[serde(skip_serializing_if = "Option::is_none")]
room: Option<u64>,
unpublished: serde_json::Value,
},
Started {
started: String,
},
Leaving {
#[serde(skip_serializing_if = "Option::is_none")]
room: Option<u64>,
leaving: serde_json::Value,
},
Left {
left: String,
},
}
#[derive(Serialize, Deserialize, Debug)]
#[serde(tag = "ptype")]
#[serde(rename_all = "lowercase")]
pub enum Join {
Publisher {
room: u64,
#[serde(skip_serializing_if = "Option::is_none")]
id: Option<u64>,
#[serde(skip_serializing_if = "Option::is_none")]
display: Option<String>,
},
Subscriber {
room: u64,
feed: u64,
streams: Vec<SubscriberStream>,
},
}
#[derive(Serialize, Deserialize, Debug)]
#[serde(rename_all = "lowercase")]
pub struct SubscriberStream {
pub feed_id: u64,
}
#[derive(Serialize, Deserialize, Debug)]
#[serde(rename_all = "lowercase")]
pub struct Joining {
pub id: u64,
pub display: String,
}
```

tsdemux: Improve keyframe seeking
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/1622
Edward Hervey, 2021-09-24

Currently keyframe seeking is done by:
1. Seeking to 2.5s before requested position
2. Guess-timating where the offset would be (based on PCR observations)
3. Collecting PES data (including for streams on which we don't do keyframe checking)
4. Figure out if a fully collected PES is a keyframe
5. If not, seek back a bit and go back to step 3
6. If it does contain a keyframe, but not SPS/PPS/... collect keyframe and go back to step 3
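The guesstimation in step 2 (and in the proposed improvements below) boils down to interpolating a byte offset from previously observed (offset, PCR) pairs. A minimal, hypothetical sketch of that idea, not the tsdemux implementation:

```rust
/// Guess the byte offset for a target PCR time by linear interpolation
/// between the two surrounding (byte_offset, pcr_ns) observations.
/// Returns None if the target falls outside the observed range.
fn guess_offset(observations: &[(u64, u64)], target_pcr_ns: u64) -> Option<u64> {
    let w = observations
        .windows(2)
        .find(|w| w[0].1 <= target_pcr_ns && target_pcr_ns <= w[1].1)?;
    let (o0, t0) = w[0];
    let (o1, t1) = w[1];
    if t1 == t0 {
        return Some(o0);
    }
    // Assume roughly constant bitrate between the two observations.
    Some(o0 + (o1 - o0) * (target_pcr_ns - t0) / (t1 - t0))
}

fn main() {
    // Two PCR observations: byte 0 at t=0, byte 1_000_000 at t=10s.
    let obs = [(0u64, 0u64), (1_000_000, 10_000_000_000)];
    assert_eq!(guess_offset(&obs, 5_000_000_000), Some(500_000));
    println!("ok");
}
```

In a real implementation the guess is only a starting point; the scan described below corrects it when the offset lands too far astray.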
This could be improved by:
1. No longer offsetting the position back when there is a keyframe handler for the video stream, but instead using the requested position.
2. in SEEKING state, create a dedicated Packetizer mode which allows scanning rapidly for given PIDs (the video one and the PCR one if it's not the same as the video one).
3. Scan for position
* Start from a guesstimated offset (PTS/DTS => PCR => offset) and search for those PIDs where the PUSI bit is set or an adaptation field is present. Ignore all other packets.
* If the stream is AVCHD/BDMV, every single packet has the PCR as the first 4 bytes. This can speed up guessing the initial start position to look for those given PID.
4. If the resulting offset is too far astray (or because of PCR jump/discont/gap), recalculate a new offset and go back to 3
5. Every time a video PID packet with the PUSI bit set is seen, start scanning that packet's PES content for the actual PTS/DTS and for keyframe/SPS/PPS presence. If the scanner needs more data from that packet to decide whether it contains a keyframe/SPS/PPS, it can say so and the next packet from that video PID is provided
6. If that packet contains all needed information to start playback (SPS/PPS/keyframe), mark that as the starting position and go back to regular playback mode
7. If that packet doesn't contain the needed information, scan backwards for video PID with PUSI bit set, i.e. go back to step 5.
This will:
* reduce i/o to a minimum and ensure we always hit the *target* keyframe (and not some several seconds earlier)
* avoid messing up with the state of tsdemux regular playback (positions going completely astray, random stray data being pushed out, ...)
* allow much faster seeking, especially for AVCHD/BDMV content

mediafoundation: add support for dynamic bitrate control
https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/3060
Roman Shpuntov, 2023-10-23

I use the h264 encoder in a UWP app. I can set the 'bitrate' value only when the pipeline is stopped. I want to control the bitrate of the video stream dynamically during PLAYING state (encoder CBR mode). I tried to set 'bitrate' and 'max-bitrate' with 'rc-mode' 0 and 1, but it is not changed. I use GStreamer 1.18.4 UWP. As far as I understand this is not supported by GStreamer right now, so this is a feature request. Thanks!

RFC: audio decoder plugin based on Symphonia
https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/issues/156
François Laignel, 2023-01-24

I've been considering this for more than a year, and since it gained exposure today, I guess it's a good time to ask: what would you think about an audio decoder plugin based on [Symphonia](https://github.com/pdeljanov/Symphonia)?

qtdemux: Use sidx to seek on fragmented mp4
https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/2801
Jan Schmidt, 2023-07-11

At the moment, qtdemux completely refuses to seek on fragmented mp4 when operating in push mode.
This makes sense if the only source of fragmented mp4 is DASH or similar, because the adaptive demuxer can do the seeking upstream. However, if we're just playing a plain fragmented mp4 file, qtdemux needs to handle the seeking.
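One building block for this: an sidx box lists (duration, size) pairs per subsegment, so mapping a seek time to a byte offset is a simple accumulation. A simplified, hypothetical sketch (single-level sidx only, not qtdemux code):

```rust
/// One sidx subsegment: its duration and byte size.
struct SidxEntry {
    duration_ns: u64,
    size: u64,
}

/// Return the byte offset (relative to the sidx anchor point) of the
/// moof whose subsegment contains `target_ns`. Simplified sketch that
/// assumes a single, non-hierarchical sidx.
fn sidx_seek(entries: &[SidxEntry], target_ns: u64) -> u64 {
    let (mut offset, mut time) = (0u64, 0u64);
    for e in entries {
        if target_ns < time + e.duration_ns {
            break; // target falls in this subsegment
        }
        time += e.duration_ns;
        offset += e.size;
    }
    offset
}

fn main() {
    let entries = [
        SidxEntry { duration_ns: 2_000_000_000, size: 100_000 },
        SidxEntry { duration_ns: 2_000_000_000, size: 120_000 },
        SidxEntry { duration_ns: 2_000_000_000, size: 90_000 },
    ];
    // 3.5s falls in the second subsegment, which starts 100_000 bytes in.
    assert_eq!(sidx_seek(&entries, 3_500_000_000), 100_000);
    println!("ok");
}
```

In push mode the demuxer would then issue a byte-range seek event upstream for the computed offset.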
If there's an sidx present in the file, we can use it to find the correct moof and do the seeking that way. Right now, qtdemux only uses the sidx to calculate the duration and then discards it.

Extend the example for EncodeBin by setting a specific encoder / bitrate / framerate...
https://gitlab.freedesktop.org/gstreamer/gstreamer-rs/-/issues/343
Rico Beier, 2021-06-07

It might be a useful addition to the EncodeBin example to show how the user can set more specific options for the audio/video encoding profiles, like:
- a specific encoder (nvenc for h264, ...)
- bitrate, framerate, codec-specific quality options.
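For what it's worth, here is a rough, untested sketch of how a specific encoder could be selected with gstreamer-rs's `gst_pbutils` encoding profiles. The builder method names and the `preset_name` trick (setting it to an element factory name such as `nvh264enc`) are assumptions that should be double-checked against the gstreamer-rs version in use:

```rust
// Untested sketch, assuming the gstreamer / gstreamer-pbutils crates.
use gstreamer as gst;
use gstreamer_pbutils as gst_pbutils;

fn build_profile() -> gst_pbutils::EncodingContainerProfile {
    // Output format of the encoded video stream.
    let h264_caps = gst::Caps::builder("video/x-h264").build();
    // Restriction caps constrain the raw video fed to the encoder,
    // e.g. to force a framerate.
    let restriction = gst::Caps::builder("video/x-raw")
        .field("framerate", gst::Fraction::new(30, 1))
        .build();
    let video_profile = gst_pbutils::EncodingVideoProfile::builder(&h264_caps)
        .restriction(&restriction)
        // Setting preset-name to an element factory name is the usual
        // trick to make encodebin pick that specific encoder.
        .preset_name("nvh264enc")
        .build();
    gst_pbutils::EncodingContainerProfile::builder(
        &gst::Caps::builder("video/quicktime").build(),
    )
    .add_profile(video_profile)
    .build()
}
```

Encoder-specific settings such as bitrate can't be expressed on the profile directly; they are typically applied by saving a `GstPreset` under a name and referencing it via `preset`, or by retrieving the encoder element from encodebin once it has been set up.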
I combined the GES and EncodeBin examples to render a demo video, but I'm totally lost on how to be more specific about the encoder used and its settings.

fallbacksrc switching to last buffer instead of fallback_sink pad before restart
https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/issues/151
Markus Obermeier, 2021-05-27

Hi,
I am using the fallbacksrc plugin heavily in a scenario with 16 streams combined on a 4 x 4 mosaic.
- Due to network issues which I cannot avoid, buffers are sometimes not received within the 40 millisecond timeout and an immediate fallback occurs, even though the 5 second timeout is not reached. As a result, my 'no signal' message appears quite often, which is really annoying when watching the mosaic.
- I would like to propose as an **enhancement**: during the timeout, before the restart is scheduled, show the last buffer of the main source instead of the fallback source. This would freeze the image, but playback would continue once a buffer is received. It's much more convenient.
- Since I am not very familiar with the Rust programming language, may I ask you to add this feature, or to outline how you would implement it so I can try it myself?
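The proposed behaviour could be modelled as a third state between "main" and "fallback". This is a hypothetical sketch of the policy only, not the element's code, and the timeout names are assumptions:

```rust
#[derive(Debug, PartialEq)]
enum Shown {
    Main,           // buffers arriving normally
    LastMainBuffer, // proposed: freeze on the last good buffer
    Fallback,       // 'no signal' source
}

/// Decide what to display given how long ago the last main-source
/// buffer arrived. Between `immediate_timeout_ms` and
/// `restart_timeout_ms` we keep showing the last received buffer
/// instead of switching to the fallback.
fn select(elapsed_ms: u64, immediate_timeout_ms: u64, restart_timeout_ms: u64) -> Shown {
    if elapsed_ms < immediate_timeout_ms {
        Shown::Main
    } else if elapsed_ms < restart_timeout_ms {
        Shown::LastMainBuffer
    } else {
        Shown::Fallback
    }
}

fn main() {
    // 40 ms immediate timeout, 5 s restart timeout as in the report.
    assert_eq!(select(10, 40, 5_000), Shown::Main);
    assert_eq!(select(300, 40, 5_000), Shown::LastMainBuffer); // freeze, no 'no signal'
    assert_eq!(select(6_000, 40, 5_000), Shown::Fallback);
    println!("ok");
}
```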
In any case I would like to express my thanks for providing such a great plugin. It saved a lot of effort, since I was considering something similar but far less sophisticated before I found you'd already implemented it so nicely.
Kind regards,
Markus

Improve `TagList::add()` API to not require a reference
https://gitlab.freedesktop.org/gstreamer/gstreamer-rs/-/issues/329
Sebastian Dröge, 2021-04-25

The following discussion from !746 should be addressed:
- [ ] @slomo started a [discussion](https://gitlab.freedesktop.org/gstreamer/gstreamer-rs/-/merge_requests/746#note_893002): (+5 comments)
> ```suggestion:-0+0
> pub fn add<'a, T: Tag<'a>>(&mut self, value: T::TagType, mode: TagMergeMode) {
> ```
>
> Would be better (we could pass `"foo"` instead of `&"foo"`), but that has the problem that we can then only pass a `gst::Buffer` and not a `&gst::Buffer`. Unclear how to allow both here.

rav1e: Update to latest encoder / configuration API
https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/issues/146
Sebastian Dröge, 2022-09-16

And update the properties accordingly.
There's also a channels-based API that might fit the usage pattern here better.

va: investigate a way to dismiss unused memories after reverse playback
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/1530
Víctor Manuel Jáquez Leal, 2021-09-24

During reverse playback the number of buffers has to increase, since the whole GOP has to be allocated before rendering. This can cause a lot of memory pressure, mainly in hardware-accelerated decoding.
For example, to make dmabuf-based memory allocation more efficient, va has an allocator pool that stores all created and released memories. But after reverse playback (when returning to normal playback), many of those memories aren't required any more. It would be nice to find a mechanism to invalidate that memory cache.

cdgparse: improve seeking
https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/issues/141
Guillaume Desmottes, 2021-02-05

`cdgparse` marks `CDG_CMD_MEMORY_PRESET` frames as key frames, so seeking with `GST_SEEK_FLAG_KEY_UNIT` jumps to one of those, preventing visual artifacts.
We could improve non-key-unit seeking by jumping backward to the nearest keyframe and re-decoding the frames between the keyframe and the seek point. Decoding CDG is really fast, so that shouldn't be an issue.

ci: couple of potential improvements
https://gitlab.freedesktop.org/gstreamer/gstreamer-rs/-/issues/309
Guillaume Desmottes, 2021-01-27

While [porting zbus's ci to fdo template](https://gitlab.freedesktop.org/zeenix/zbus/-/merge_requests/238) I experimented with a couple of changes:
- Installing Rust stable and nightly on the same image, using `rustup override set $x` in each job to pick the version we want. This would help simplify our CI setup, as we'd have only one image instead of 3 (4 including the base one).
- Calling `cargo fetch` when generating the image so build deps are part of the image, saving each job from re-downloading them. This is only worth it if images are refreshed frequently enough that those deps stay relevant; that shouldn't be a problem if we regenerate the image each week for nightly, I think.
We can wait a bit to see how it works for `zbus` and then consider doing the same for our CI here.

dashsink: add webm support
https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/749
Stéphane Cerveau, 2021-09-29

The dash sink should be able to generate an MPD with WebM/VP8 content.
Here is documentation on how to generate this MPD/content with ffmpeg.
http://wiki.webmproject.org/adaptive-streaming/instructions-to-do-webm-live-streaming-via-dash