Commit da6afdec authored by Mathieu Duponchelle

doc: remove xml from comments

parent 43eaf5ac
Pipeline #39458 passed with stages in 41 minutes and 20 seconds
......@@ -21,12 +21,11 @@
*
* AV1 Decoder.
*
* <refsect2>
* <title>Example launch line</title>
* ## Example launch line
*
* |[
* gst-launch-1.0 -v filesrc location=videotestsrc.webm ! matroskademux ! av1dec ! videoconvert ! videoscale ! autovideosink
* ]|
* </refsect2>
*/
#ifdef HAVE_CONFIG_H
......
......@@ -21,12 +21,11 @@
*
* AV1 Encoder.
*
* <refsect2>
* <title>Example launch line</title>
* ## Example launch line
*
* |[
* gst-launch-1.0 videotestsrc num-buffers=50 ! av1enc ! webmmux ! filesink location=av1.webm
* ]|
* </refsect2>
*/
#ifdef HAVE_CONFIG_H
......
......@@ -178,10 +178,11 @@ extern "C" {
* "625-line television Wide Screen Signalling (WSS)"</a>.
*
* vbi_sliced payload:
* <pre>
* ```
* Byte 0 1
* msb lsb msb lsb
* bit 7 6 5 4 3 2 1 0 x x 13 12 11 10 9 8<br></pre>
* bit 7 6 5 4 3 2 1 0 x x 13 12 11 10 9 8
* ```
* according to EN 300 294, Table 1, lsb first transmitted.
*/
#define VBI_SLICED_WSS_625 0x00000400
......@@ -280,11 +281,11 @@ extern "C" {
* Reference: <a href="http://www.jeita.or.jp">EIA-J CPR-1204</a>
*
* vbi_sliced payload:
* <pre>
* ```
* Byte 0 1 2
* msb lsb msb lsb msb lsb
* bit 7 6 5 4 3 2 1 0 15 14 13 12 11 10 9 8 x x x x 19 18 17 16
* </pre>
* ```
*/
#define VBI_SLICED_WSS_CPR1204 0x00000800
......
......@@ -28,15 +28,14 @@
* frames using the given ICC (International Color Consortium) profiles.
* Falls back to internal sRGB profile if no ICC file is specified in property.
*
* <refsect2>
* <title>Example launch line</title>
* <para>(write everything in one line, without the backslash characters)</para>
* ## Example launch line
*
* (write everything in one line, without the backslash characters)
* |[
* gst-launch-1.0 filesrc location=photo_camera.png ! pngdec ! \
* videoconvert ! lcms input-profile=sRGB.icc dest-profile=printer.icc ! \
* pngenc ! filesink location=photo_print.png
* ]|
* </refsect2>
*/
#ifdef HAVE_CONFIG_H
......
......@@ -59,8 +59,8 @@
* If the "http_proxy" environment variable is set, its value is used.
* The #GstCurlHttpSrc:proxy property can be used to override the default.
*
* <refsect2>
* <title>Example launch line</title>
* ## Example launch line
*
* |[
* gst-launch-1.0 curlhttpsrc location=http://127.0.1.1/index.html ! fakesink dump=1
* ]| The above pipeline reads a web page from the local machine using HTTP and
......@@ -70,7 +70,6 @@
* ]| The above pipeline will start up a DASH streaming session from the given
* MPD file. This requires GStreamer to have been built with dashdemux from
* gst-plugins-bad.
* </refsect2>
*/
/*
......
......@@ -31,12 +31,11 @@
* Modplug uses the <ulink url="http://modplug-xmms.sourceforge.net/">modplug</ulink>
* library to decode tracked music in the MOD/S3M/XM/IT and related formats.
*
* <refsect2>
* <title>Example pipeline</title>
* ## Example pipeline
*
* |[
* gst-launch-1.0 -v filesrc location=1990s-nostalgia.xm ! modplug ! audioconvert ! alsasink
* ]| Play a FastTracker xm file.
* </refsect2>
*/
#ifdef HAVE_CONFIG_H
......
......@@ -31,13 +31,13 @@
* and on the various available parameters in the documentation
* of the mpeg2enc tool in particular, which shares options with this element.
*
* <refsect2>
* <title>Example pipeline</title>
* ## Example pipeline
*
* |[
* gst-launch-1.0 videotestsrc num-buffers=1000 ! mpeg2enc ! filesink location=videotestsrc.m1v
* ]| This example pipeline will encode a test video source to an MPEG1
* elementary stream (with Generic MPEG1 profile).
* <para>
*
* Likely, the #GstMpeg2enc:format property
* is most important, as it selects the type of MPEG stream that is produced.
* In particular, default property values are dependent on the format,
......@@ -45,12 +45,11 @@
* Note that the (S)VCD profiles also restrict the image size, so some scaling
* may be needed to accommodate this. The so-called generic profiles (as used
* in the example above) allow most parameters to be adjusted.
* </para>
*
* |[
* gst-launch-1.0 videotestsrc num-buffers=1000 ! videoscale ! mpeg2enc format=1 norm=p ! filesink location=videotestsrc.m1v
* ]| This will produce an MPEG1 profile stream according to VCD2.0 specifications
* for PAL #GstMpeg2enc:norm (as the image height is dependent on video norm).
* </refsect2>
*/
#ifdef HAVE_CONFIG_H
......
......@@ -32,19 +32,17 @@
* and the man-page of the mplex tool documents the properties of this element,
* which are shared with the mplex tool.
*
* <refsect2>
* <title>Example pipeline</title>
* ## Example pipeline
*
* |[
* gst-launch-1.0 -v videotestsrc num-buffers=1000 ! mpeg2enc ! mplex ! filesink location=videotestsrc.mpg
* ]| This example pipeline will encode a test video source to an
* MPEG1 elementary stream and multiplexes this to an MPEG system stream.
* <para>
*
* If several streams are being multiplexed, there should (as usual) be
* a queue in each stream, and due to mplex' buffering the capacities of these
* may have to be set to a few times the default settings to prevent the
* pipeline stalling.
* </para>
* </refsect2>
*/
#ifdef HAVE_CONFIG_H
......
......@@ -57,12 +57,11 @@
*
* Based on this tutorial: https://docs.opencv.org/2.4/doc/tutorials/calib3d/camera_calibration/camera_calibration.html
*
* <refsect2>
* <title>Example pipelines</title>
* ## Example pipelines
*
* |[
* gst-launch-1.0 -v v4l2src ! videoconvert ! cameraundistort ! cameracalibrate ! autovideosink
* ]| will correct camera distortion once camera calibration is done.
* </refsect2>
*/
/*
......
......@@ -55,15 +55,14 @@
*
* Based on this tutorial: https://docs.opencv.org/2.4/doc/tutorials/calib3d/camera_calibration/camera_calibration.html
*
* <refsect2>
* <title>Example pipelines</title>
* ## Example pipelines
*
* |[
* gst-launch-1.0 -v v4l2src ! videoconvert ! cameraundistort settings="???" ! autovideosink
* ]| will correct camera distortion based on provided settings.
* |[
* gst-launch-1.0 -v v4l2src ! videoconvert ! cameraundistort ! cameracalibrate ! autovideosink
* ]| will correct camera distortion once camera calibration is done.
* </refsect2>
*/
#ifdef HAVE_CONFIG_H
......
......@@ -46,12 +46,11 @@
*
* Dilates the image with the cvDilate OpenCV function.
*
* <refsect2>
* <title>Example launch line</title>
* ## Example launch line
*
* |[
* gst-launch-1.0 videotestsrc ! cvdilate ! videoconvert ! autovideosink
* ]|
* </refsect2>
*/
#ifdef HAVE_CONFIG_H
......
......@@ -47,12 +47,11 @@
* Equalizes the histogram of a grayscale image with the cvEqualizeHist OpenCV
* function.
*
* <refsect2>
* <title>Example launch line</title>
* ## Example launch line
*
* |[
* gst-launch-1.0 videotestsrc pattern=23 ! cvequalizehist ! videoconvert ! autovideosink
* ]|
* </refsect2>
*/
......
......@@ -46,12 +46,11 @@
*
* Erodes the image with the cvErode OpenCV function.
*
* <refsect2>
* <title>Example launch line</title>
* ## Example launch line
*
* |[
* gst-launch-1.0 videotestsrc ! cverode ! videoconvert ! autovideosink
* ]|
* </refsect2>
*/
#ifdef HAVE_CONFIG_H
......
......@@ -46,12 +46,11 @@
*
* Applies cvLaplace OpenCV function to the image.
*
* <refsect2>
* <title>Example launch line</title>
* ## Example launch line
*
* |[
* gst-launch-1.0 videotestsrc ! cvlaplace ! videoconvert ! autovideosink
* ]|
* </refsect2>
*/
#ifdef HAVE_CONFIG_H
......
......@@ -46,12 +46,11 @@
*
* Smooths the image using the cvSmooth OpenCV function.
*
* <refsect2>
* <title>Example launch line</title>
* ## Example launch line
*
* |[
* gst-launch-1.0 videotestsrc ! cvsmooth ! videoconvert ! autovideosink
* ]|
* </refsect2>
*/
#ifdef HAVE_CONFIG_H
......
......@@ -46,12 +46,11 @@
*
* Applies the cvSobel OpenCV function to the image.
*
* <refsect2>
* <title>Example launch line</title>
* ## Example launch line
*
* |[
* gst-launch-1.0 videotestsrc ! cvsobel ! videoconvert ! autovideosink
* ]|
* </refsect2>
*/
#ifdef HAVE_CONFIG_H
......
......@@ -47,12 +47,11 @@
*
* Dewarp fisheye images
*
* <refsect2>
* <title>Example launch line</title>
* ## Example launch line
*
* |[
* gst-launch-1.0 videotestsrc ! videoconvert ! circle radius=0.1 height=80 ! dewarp outer-radius=0.35 inner-radius=0.1 ! videoconvert ! xvimagesink
* ]|
* </refsect2>
*/
......
......@@ -97,8 +97,8 @@
* [D] Scharstein, D. & Szeliski, R. (2001). A taxonomy and evaluation of dense two-frame stereo
* correspondence algorithms, International Journal of Computer Vision 47: 7–42.
*
* <refsect2>
* <title>Example launch line</title>
* ## Example launch line
*
* |[
* gst-launch-1.0 videotestsrc ! video/x-raw,width=320,height=240 ! videoconvert ! disp0.sink_right videotestsrc ! video/x-raw,width=320,height=240 ! videoconvert ! disp0.sink_left disparity name=disp0 ! videoconvert ! ximagesink
* ]|
......@@ -112,7 +112,6 @@ gst-launch-1.0 multifilesrc location=~/im3.png ! pngdec ! videoconvert ! di
* |[
gst-launch-1.0 v4l2src device=/dev/video1 ! video/x-raw,width=320,height=240 ! videoconvert ! disp0.sink_right v4l2src device=/dev/video0 ! video/x-raw,width=320,height=240 ! videoconvert ! disp0.sink_left disparity name=disp0 method=sgbm disp0.src ! videoconvert ! ximagesink
* ]|
* </refsect2>
*/
#ifdef HAVE_CONFIG_H
......
......@@ -48,12 +48,11 @@
*
* Performs canny edge detection on videos and images
*
* <refsect2>
* <title>Example launch line</title>
* ## Example launch line
*
* |[
* gst-launch-1.0 videotestsrc ! videoconvert ! edgedetect ! videoconvert ! xvimagesink
* ]|
* </refsect2>
*/
#ifdef HAVE_CONFIG_H
......
......@@ -49,12 +49,11 @@
*
* Blurs faces in images and videos.
*
* <refsect2>
* <title>Example launch line</title>
* ## Example launch line
*
* |[
* gst-launch-1.0 autovideosrc ! videoconvert ! faceblur ! videoconvert ! autovideosink
* ]|
* </refsect2>
*/
#ifdef HAVE_CONFIG_H
......
......@@ -56,8 +56,8 @@
* until the size is <= GstFaceDetect::min-size-width or
* GstFaceDetect::min-size-height.
*
* <refsect2>
* <title>Example launch line</title>
* ## Example launch line
*
* |[
* gst-launch-1.0 autovideosrc ! decodebin ! colorspace ! facedetect ! videoconvert ! xvimagesink
* ]| Detect and show faces
......@@ -65,7 +65,6 @@
* gst-launch-1.0 autovideosrc ! video/x-raw,width=320,height=240 ! videoconvert ! facedetect min-size-width=60 min-size-height=60 ! colorspace ! xvimagesink
* ]| Detect large faces on a smaller image
*
* </refsect2>
*/
/* FIXME: development version of OpenCV has CV_HAAR_FIND_BIGGEST_OBJECT which
......
......@@ -68,8 +68,8 @@
* extraction using iterated graph cuts, ACM Trans. Graph., vol. 23, pp. 309–314,
* 2004.
*
* <refsect2>
* <title>Example launch line</title>
* ## Example launch line
*
* |[
* gst-launch-1.0 --gst-debug=grabcut=4 v4l2src device=/dev/video0 ! videoconvert ! grabcut ! videoconvert ! video/x-raw,width=320,height=240 ! ximagesink
* ]|
......@@ -77,7 +77,6 @@
* |[
* gst-launch-1.0 --gst-debug=grabcut=4 v4l2src device=/dev/video0 ! videoconvert ! facedetect display=0 ! videoconvert ! grabcut test-mode=true ! videoconvert ! video/x-raw,width=320,height=240 ! ximagesink
* ]|
* </refsect2>
*/
#ifdef HAVE_CONFIG_H
......
......@@ -47,13 +47,12 @@
* FIXME: operates hand gesture detection in video streams and images,
* and enables media operations, e.g. play/stop/fast forward/rewind.
*
* <refsect2>
* <title>Example launch line</title>
* ## Example launch line
*
* |[
* gst-launch-1.0 autovideosrc ! videoconvert ! "video/x-raw, format=RGB, width=320, height=240" ! \
* videoscale ! handdetect ! videoconvert ! xvimagesink
* ]|
* </refsect2>
*/
#ifdef HAVE_CONFIG_H
......
......@@ -47,12 +47,11 @@
*
* Performs motion detection on videos.
*
* <refsect2>
* <title>Example launch line</title>
* ## Example launch line
*
* |[
* gst-launch-1.0 videotestsrc pattern=18 ! videorate ! videoscale ! video/x-raw,width=320,height=240,framerate=5/1 ! videoconvert ! motioncells ! videoconvert ! xvimagesink
* ]|
* </refsect2>
*/
#ifdef HAVE_CONFIG_H
......
......@@ -50,12 +50,11 @@
* color image enhancement." Image Processing, 1996. Proceedings., International
* Conference on. Vol. 3. IEEE, 1996.
*
* <refsect2>
* <title>Example launch line</title>
* ## Example launch line
*
* |[
* gst-launch-1.0 videotestsrc ! videoconvert ! retinex ! videoconvert ! xvimagesink
* ]|
* </refsect2>
*/
#ifdef HAVE_CONFIG_H
......
......@@ -78,12 +78,11 @@
* per Image Pixel for the Task of Background Subtraction", Pattern Recognition
* Letters, vol. 27, no. 7, pages 773-780, 2006.
*
* <refsect2>
* <title>Example launch line</title>
* ## Example launch line
*
* |[
* gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert ! segmentation test-mode=true method=2 ! videoconvert ! ximagesink
* ]|
* </refsect2>
*/
#ifdef HAVE_CONFIG_H
......
......@@ -46,12 +46,11 @@
*
* Human skin detection on videos and images
*
* <refsect2>
* <title>Example launch line</title>
* ## Example launch line
*
* |[
* gst-launch-1.0 videotestsrc ! decodebin ! videoconvert ! skindetect ! videoconvert ! xvimagesink
* ]|
* </refsect2>
*/
#ifdef HAVE_CONFIG_H
......
......@@ -49,12 +49,11 @@
*
* Performs template matching on videos and images, providing detected positions via bus messages.
*
* <refsect2>
* <title>Example launch line</title>
* ## Example launch line
*
* |[
* gst-launch-1.0 videotestsrc ! videoconvert ! templatematch template=/path/to/file.jpg ! videoconvert ! xvimagesink
* ]|
* </refsect2>
*/
#ifdef HAVE_CONFIG_H
......
......@@ -48,12 +48,11 @@
*
* opencvtextoverlay renders the text on top of the video frames
*
* <refsect2>
* <title>Example launch line</title>
* ## Example launch line
*
* |[
* gst-launch-1.0 videotestsrc ! videoconvert ! opencvtextoverlay text="Opencv Text Overlay " ! videoconvert ! xvimagesink
* ]|
* </refsect2>
*/
#ifdef HAVE_CONFIG_H
......
......@@ -26,12 +26,11 @@
* It uses the <ulink url="https://lib.openmpt.org">OpenMPT library</ulink>
* for this purpose. It can be autoplugged and therefore works with decodebin.
*
* <refsect2>
* <title>Example launch line</title>
* ## Example launch line
*
* |[
* gst-launch-1.0 filesrc location=media/example.it ! openmptdec ! audioconvert ! audioresample ! autoaudiosink
* ]|
* </refsect2>
*/
......
......@@ -33,15 +33,9 @@
* More concretely on the "libopenni2-dev" and "libopenni2" packages - that can
* be downloaded in http://goo.gl/2H6SZ6.
*
* <refsect2>
* <title>Examples</title>
* <para>
* Some recorded .oni files are available at:
* <programlisting>
* http://people.cs.pitt.edu/~chang/1635/proj11/kinectRecord
* </programlisting>
* </para>
* </refsect2>
* ## Examples
*
* Some recorded .oni files are available at <http://people.cs.pitt.edu/~chang/1635/proj11/kinectRecord>
*/
......
......@@ -18,22 +18,17 @@
/**
* SECTION:element-openni2src
*
* <refsect2>
* <title>Examples</title>
* <para>
* Some recorded .oni files are available at:
* <programlisting>
* http://people.cs.pitt.edu/~chang/1635/proj11/kinectRecord
* </programlisting>
* ## Examples
*
* <programlisting>
LD_LIBRARY_PATH=/usr/lib/OpenNI2/Drivers/ gst-launch-1.0 --gst-debug=openni2src:5 openni2src location='Downloads/mr.oni' sourcetype=depth ! videoconvert ! ximagesink
* </programlisting>
* <programlisting>
LD_LIBRARY_PATH=/usr/lib/OpenNI2/Drivers/ gst-launch-1.0 --gst-debug=openni2src:5 openni2src location='Downloads/mr.oni' sourcetype=color ! videoconvert ! ximagesink
* </programlisting>
* </para>
* </refsect2>
* Some recorded .oni files are available at <http://people.cs.pitt.edu/~chang/1635/proj11/kinectRecord>
*
* ``` shell
* LD_LIBRARY_PATH=/usr/lib/OpenNI2/Drivers/ gst-launch-1.0 --gst-debug=openni2src:5 openni2src location='Downloads/mr.oni' sourcetype=depth ! videoconvert ! ximagesink
* ```
*
* ``` shell
* LD_LIBRARY_PATH=/usr/lib/OpenNI2/Drivers/ gst-launch-1.0 --gst-debug=openni2src:5 openni2src location='Downloads/mr.oni' sourcetype=color ! videoconvert ! ximagesink
* ```
*/
#ifdef HAVE_CONFIG_H
......
......@@ -26,8 +26,8 @@
* srtsink is a network sink that sends <ulink url="http://www.srtalliance.org/">SRT</ulink>
* packets to the network.
*
* <refsect2>
* <title>Examples</title>
* ## Examples
*
* |[
* gst-launch-1.0 -v audiotestsrc ! srtsink uri=srt://host
* ]| This pipeline shows how to serve SRT packets through the default port.
......@@ -35,7 +35,6 @@
* |[
* gst-launch-1.0 -v audiotestsrc ! srtsink uri=srt://:port
* ]| This pipeline shows how to wait for SRT callers.
* </refsect2>
*
*/
......
......@@ -26,8 +26,7 @@
* srtsrc is a network source that reads <ulink url="http://www.srtalliance.org/">SRT</ulink>
* packets from the network.
*
* <refsect2>
* <title>Examples</title>
* ## Examples
* |[
* gst-launch-1.0 -v srtsrc uri="srt://127.0.0.1:7001" ! fakesink
* ]| This pipeline shows how to connect to an SRT server by setting the #GstSRTSrc:uri property.
......@@ -39,7 +38,6 @@
* |[
* gst-launch-1.0 -v srtclientsrc uri="srt://192.168.1.10:7001?mode=rendez-vous" ! fakesink
* ]| This pipeline shows how to connect to an SRT server by setting the #GstSRTSrc:uri property and using the rendez-vous mode.
* </refsect2>
*
*/
......
......@@ -26,12 +26,11 @@
* It uses <ulink url="https://www.mindwerks.net/projects/wildmidi/">WildMidi</ulink>
* for this purpose. It can be autoplugged and therefore works with decodebin.
*
* <refsect2>
* <title>Example launch line</title>
* ## Example launch line
*
* |[
* gst-launch-1.0 filesrc location=media/example.mid ! wildmididec ! audioconvert ! audioresample ! autoaudiosink
* ]|
* </refsect2>
*/
......
......@@ -293,11 +293,11 @@ struct _GstNonstreamAudioDecoder
*
* All functions are called with a locked decoder mutex.
*
* <note> If GST_ELEMENT_ERROR, GST_ELEMENT_WARNING, or GST_ELEMENT_INFO are called from
* inside one of these functions, it is strongly recommended to unlock the decoder mutex
* before and re-lock it after these macros to prevent potential deadlocks in case the
* application does something with the element when it receives an ERROR/WARNING/INFO
* message. Same goes for gst_element_post_message() calls and non-serialized events. </note>
* > If GST_ELEMENT_ERROR, GST_ELEMENT_WARNING, or GST_ELEMENT_INFO are called from
* > inside one of these functions, it is strongly recommended to unlock the decoder mutex
* > before and re-lock it after these macros to prevent potential deadlocks in case the
* > application does something with the element when it receives an ERROR/WARNING/INFO
* > message. Same goes for gst_element_post_message() calls and non-serialized events.
*
* By default, this class works by reading media data from the sinkpad, and then commencing
* playback. Some decoders cannot be given data from a memory block, so the usual way of
......
......@@ -29,13 +29,8 @@
*
* The design mandates that the subclasses implement the following features and
* behaviour:
* <itemizedlist>
* <listitem><para>
* 3 pads: viewfinder, image capture, video capture
* </para></listitem>
* <listitem><para>
* </para></listitem>
* </itemizedlist>
*
* * 3 pads: viewfinder, image capture, video capture
*
* During construct_pipeline() vmethod a subclass can add several elements into
* the bin and expose 3 srcs pads as ghostpads implementing the 3 pad templates.
......
......@@ -34,10 +34,8 @@
*
* The interface allows access to some common digital image capture parameters.
*
* <note>
* The GstPhotography interface is unstable API and may change in future.
* One can define GST_USE_UNSTABLE_API to acknowledge and avoid this warning.
* </note>
* > The GstPhotography interface is unstable API and may change in future.
* > One can define GST_USE_UNSTABLE_API to acknowledge and avoid this warning.
*/
static void gst_photography_iface_base_init (GstPhotographyInterface * iface);
......
......@@ -50,38 +50,17 @@ G_BEGIN_DECLS
* Name of custom GstMessage that will be posted to #GstBus when autofocusing
* is complete.
* This message contains following fields:
* <itemizedlist>
* <listitem>
* <para>
* #GstPhotographyFocusStatus
* <classname>&quot;status&quot;</classname>:
* Tells if focusing succeeded or failed.
* </para>
* </listitem>
* <listitem>
* <para>
* #G_TYPE_INT
* <classname>&quot;focus-window-rows&quot;</classname>:
* Tells number of focus matrix rows.
* </para>
* </listitem>
* <listitem>
* <para>
* #G_TYPE_INT
* <classname>&quot;focus-window-columns&quot;</classname>:
* Tells number of focus matrix columns.
* </para>
* </listitem>
* <listitem>
* <para>
* #G_TYPE_INT
* <classname>&quot;focus-window-mask&quot;</classname>:
* Bitmask containing rows x columns bits which mark the focus points in the
* focus matrix. Lowest bit (LSB) always represents the top-left corner of the
* focus matrix. This field is only valid when focusing status is SUCCESS.
* </para>
* </listitem>
* </itemizedlist>
*
* * `status` (#GstPhotographyFocusStatus): Tells if focusing succeeded or failed.
*
* * `focus-window-rows` (#G_TYPE_INT): Tells number of focus matrix rows.
*
* * `focus-window-columns` (#G_TYPE_INT): Tells number of focus matrix columns.
*
* * `focus-window-mask` (#G_TYPE_INT): Bitmask containing rows x columns bits
* which mark the focus points in the focus matrix. Lowest bit (LSB) always
* represents the top-left corner of the focus matrix. This field is only valid
* when focusing status is SUCCESS.
*/
#define GST_PHOTOGRAPHY_AUTOFOCUS_DONE "autofocus-done"
......@@ -93,15 +72,8 @@ G_BEGIN_DECLS
* becoming "shaken" due to camera movement and too long exposure time.
*
* This message contains following fields:
* <itemizedlist>
* <listitem>
* <para>
* #GstPhotographyShakeRisk
* <classname>&quot;status&quot;</classname>:
* Tells risk level of capturing shaken image.
* </para>
* </listitem>
* </itemizedlist>
*
* * `status` (#GstPhotographyShakeRisk): Tells risk level of capturing shaken image.
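Every hunk in this commit applies the same mechanical DocBook-to-markdown mapping: `<refsect2>`/`</refsect2>` wrappers are dropped, `<title>` becomes a `##` heading, and `<pre>`/`<programlisting>` become fenced code blocks. A rough sketch of that mapping in Python — a hypothetical helper for illustration, not the script the author actually used:

```python
import re

def docbook_to_markdown(comment: str) -> str:
    """Apply the DocBook-to-markdown mapping used throughout this commit.

    Simplified sketch for illustration only; <note>, <para>, and link
    tags are handled analogously in the real diff.
    """
    # <title>Example launch line</title>  ->  ## Example launch line
    comment = re.sub(r"<title>(.*?)</title>", r"## \1", comment)
    # <refsect2> wrapper lines are dropped entirely
    comment = re.sub(r" \* </?refsect2>\n", "", comment)
    # <pre> / <programlisting> become fenced code blocks
    comment = re.sub(r"</?(?:pre|programlisting)>", "```", comment)
    # <para> paragraph markers become plain blank comment lines
    comment = re.sub(r" \* </?para>\n", " *\n", comment)
    return comment
```

For example, `docbook_to_markdown(" * <title>Example launch line</title>\n")` yields `" * ## Example launch line\n"`, matching the converted hunks above.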