Commit 272e7dc8 authored by Stefan Kost

docs/design/part-live-source.txt: describe how to handle latency

Original commit message from CVS:
* docs/design/part-live-source.txt:
describe how to handle latency
* docs/random/ensonic/profiling.txt:
more ideas
* tools/gst-plot-timeline.py:
fix log parsing for solaris, remove unused function
parent 42d34f86
2006-10-16 Stefan Kost <ensonic@users.sf.net>
* docs/design/part-live-source.txt:
describe how to handle latency
* docs/random/ensonic/profiling.txt:
more ideas
* tools/gst-plot-timeline.py:
fix log parsing for solaris, remove unused function
2006-10-16 Wim Taymans <wim@fluendo.com>
* docs/design/part-trickmodes.txt:
@@ -38,7 +38,10 @@ time the data was captured. Normally it will take some time to capture
the first sample of data and the last sample. This means that when the
buffer arrives at the sink, it will already be late and will be dropped.
The latency is the time it takes to construct one buffer of data. This latency
could be exposed by latency queries.
These latency queries need to be done by the managing pipeline for all sinks.
They can only be done after the measurements have been taken (all sinks are
prerolled). Thus in pipeline:state_changed:PAUSED_TO_PLAYING we need to
get the max-latency and set it as a sync-offset in all sinks.
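A sketch of that last step (function and variable names are hypothetical, not
GStreamer API): the managing pipeline collects the answers to the latency
queries from all sinks and uses the maximum as the sync-offset:

```python
def compute_sync_offset(sink_latencies_ns):
    """Pipeline-wide sync-offset: the largest latency (in ns)
    reported by any of the prerolled sinks."""
    return max(sink_latencies_ns)

# e.g. an audio sink reporting 20ms and a video sink reporting 40ms
offset = compute_sync_offset([20000000, 40000000])
```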
$Id$
= core profiling =
* scheduler keeps a list of usecs the process function of each element was
running
* process functions are: loop, chain, get
* scheduler keeps a sum of all times
* each gst-element has a profile_percentage field
= profiling =
* when going to play
* scheduler sets sum and all usecs in the list to 0
* when handling an element
* remember old usecs t_old
* take time t1
* call the element's processing function
* take time t2
* t_new=t2-t1
* sum+=(t_new-t_old)
* profile_percentage=t_new/sum;
* should the percentage be averaged?
* profile_percentage=(profile_percentage+(t_new/sum))/2.0;
== what information is interesting? ==
* pipeline throughput
if we know the cpu-load for a given datastream, we could extrapolate what the
system can handle
-> qos profiling
* load distribution
which element causes which cpu load/memory usage
* the profile_percentage shows how much CPU time the element uses in relation
to the whole pipeline
= qos profiling =
@@ -45,15 +32,46 @@ $Id$
* idea2: query data (via gst-launch)
* add -r, --report option to gst-launch
* send duration to get total number of frames (GST_FORMAT_DEFAULT for video is frames)
* during playing we need to capture QOS-events to record 'streamtime,proportion' pairs
gst_pad_add_event_probe(video_sink->sink_pad,handler,data)
* during playback we'd like to know when an element drops frames
what about elements sending a qos_action message?
* after EOS, send qos-queries to each element in the pipeline
* qos-query will return:
number of frames rendered
number of frames dropped
* print a nice table with the results
* QOS stats first
* list of 'streamtime,proportion' pairs
+ robust
+ also available to application
- changes in core
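A minimal sketch of the "print a nice table" step, assuming the qos-queries
returned per-element (rendered, dropped) counts; all names below are
hypothetical:

```python
def format_qos_table(stats):
    """Render per-element qos-query results as a plain-text table.

    stats maps element name -> (frames rendered, frames dropped)."""
    lines = ["%-16s %8s %8s" % ("element", "rendered", "dropped")]
    for name in sorted(stats):
        rendered, dropped = stats[name]
        lines.append("%-16s %8d %8d" % (name, rendered, dropped))
    return "\n".join(lines)

print(format_qos_table({"videosink": (1500, 12), "audiosink": (3000, 0)}))
```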
= core profiling =
* scheduler keeps a list of usecs the process function of each element was
running
* process functions are: loop, chain, get; they are driven by gst_pad_push() and
gst_pad_pull_range()
* scheduler keeps a sum of all times
* each gst-element has a profile_percentage field
* when going to play
* scheduler sets sum and all usecs in the list to 0
* when handling an element
* remember old usecs t_old
* take time t1
* call the element's processing function
* take time t2
* t_new=t2-t1
* sum+=(t_new-t_old)
* profile_percentage=t_new/sum;
* should the percentage be averaged?
* profile_percentage=(profile_percentage+(t_new/sum))/2.0;
* the profile_percentage shows how much CPU time the element uses in relation
to the whole pipeline
* check get_rusage() based cpu usage detection in buzztard
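The bookkeeping above can be sketched as follows; this deterministic version
takes the measured usecs as an argument instead of timing the processing
function itself, and all names are hypothetical:

```python
def update_profile(state, element, t_new):
    """Record one measurement t_new (usecs spent in the element's
    process function), following the steps listed above."""
    t_old = state['usecs'].get(element, 0.0)   # remember old usecs t_old
    state['usecs'][element] = t_new            # t_new = t2 - t1
    state['sum'] += t_new - t_old              # sum += (t_new - t_old)
    share = t_new / state['sum']
    prev = state['pct'].get(element)
    # averaged variant of profile_percentage, as suggested above
    state['pct'][element] = share if prev is None else (prev + share) / 2.0
    return state['pct'][element]

# when going to play: sum and all usecs are reset to 0
state = {'usecs': {}, 'sum': 0.0, 'pct': {}}
update_profile(state, 'src', 60.0)
update_profile(state, 'sink', 40.0)
```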
@@ -14,8 +14,8 @@ import sys
import cairo
FONT_NAME = "Bitstream Vera Sans"
FONT_SIZE = 8
PIXELS_PER_SECOND = 1700
PIXELS_PER_LINE = 12
PLOT_WIDTH = 1400
TIME_SCALE_WIDTH = 20
@@ -25,7 +25,7 @@ LOG_MARKER_WIDTH = 20
BACKGROUND_COLOR = (0, 0, 0)
# assumes GST_DEBUG_LOG_COLOR=1
mark_regex = re.compile (r'^(\d:\d\d:\d\d\.\d+) +\d+ 0?x?[0-9a-f]+ [A-Z]+ +([a-zA-Z_]+ )(.*)')
mark_timestamp_group = 1
mark_program_group = 2
mark_log_group = 3
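The relaxed expression (extra spaces allowed after the timestamp, "0x" prefix
optional, which appears to be what the solaris logs needed) can be exercised
against a made-up log line; the sample line below is illustrative only, not a
real log excerpt:

```python
import re

mark_regex = re.compile(r'^(\d:\d\d:\d\d\.\d+) +\d+ 0?x?[0-9a-f]+ [A-Z]+ +([a-zA-Z_]+ )(.*)')

# illustrative GST_DEBUG-style line: timestamp, pid, object pointer,
# level, category, message
line = '0:00:01.123456789  2912 0x12345 DEBUG gst_init initializing GStreamer'
m = mark_regex.search(line)
```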
@@ -74,18 +74,8 @@ palette = [
class SyscallParser:
def __init__ (self):
self.pending_execs = []
self.syscalls = []
def search_pending_execs (self, search_pid):
n = len (self.pending_execs)
for i in range (n):
(pid, timestamp, command) = self.pending_execs[i]
if pid == search_pid:
return (i, timestamp, command)
return (None, None, None)
def add_line (self, str):
m = mark_regex.search (str)
if m:
@@ -102,7 +92,8 @@ class SyscallParser:
program_hash = program.__hash__ ()
s.colors = palette[program_hash % len (palette)]
self.syscalls.append (s)
else:
print 'No log in %s' % str
return
def parse_strace(filename):