
Ian Romanick authored
A previous method that I tried was treating the helped and hurt data as samples from separate populations and comparing them using a t-test. Since we're applying a common change to "both" sample sets, I don't think this is a valid analysis. Instead, I think it is more valid to treat the entire change set as a sample of a single population and compare the mean of that sample to zero. Only the changed samples are examined because the vast majority of the sample is unaffected; if the mean of the entire sample were used, the mean confidence interval would always include zero. It would be more valid, I believe, to also include shaders that were affected but had no change in instruction or cycle count, but I don't know of a way to determine this using the existing shaderdb infrastructure.

These two methods communicate two different things. The first tries to determine whether the shaders hurt are affected more or less than the shaders helped. This doesn't capture any information about the number of shaders affected: there might be 1,000 shaders helped and 3 hurt, and the conclusion could still be negative. The second method tries to determine whether the sample set is helped or hurt overall. This allows the magnitude of the hurt (or help) to be overwhelmed by the number of helped (or hurt) shaders: there could be 1,000 shaders helped by 1 instruction and 3 shaders hurt by 50 instructions, and the conclusion would be positive.

Comparing the declared result with the mean and median, I feel that the second method matches my intuitive interpretation of the data.
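The two analyses above can be sketched as follows. This is a minimal illustration using only the Python standard library, not the actual shaderdb report code; the function names and the 1.96 normal-approximation z value for a 95% interval are my own choices.

```python
import math
import statistics

def welch_t(helped, hurt):
    # First method: treat the helped and hurt magnitudes as samples
    # from two separate populations and compute Welch's t statistic.
    # A small |t| means the helped and hurt magnitudes are
    # indistinguishable, regardless of how many shaders fall in each
    # group.
    m1, m2 = statistics.mean(helped), statistics.mean(hurt)
    v1, v2 = statistics.variance(helped), statistics.variance(hurt)
    n1, n2 = len(helped), len(hurt)
    return (m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)

def mean_confidence_interval(deltas, z=1.96):
    # Second method: treat the signed per-shader deltas (helped
    # negative, hurt positive) as one sample and build a confidence
    # interval for its mean. Entirely below zero -> helped; entirely
    # above zero -> hurt; spanning zero -> inconclusive.
    mean = statistics.mean(deltas)
    sem = statistics.stdev(deltas) / math.sqrt(len(deltas))
    return mean - z * sem, mean + z * sem

# 1,000 shaders helped by 1 instruction and 3 hurt by 50: the second
# method calls this "helped" because the count of helped shaders
# overwhelms the magnitude of the hurt ones.
deltas = [-1] * 1000 + [50] * 3
lo, hi = mean_confidence_interval(deltas)
assert hi < 0  # whole interval below zero: helped
```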
Here is a result of the t-test:

    total cycles in shared programs: 559379982 -> 559342256 (<.01%)
    cycles in affected programs: 10791218 -> 10753492 (0.35%)
    helped: 1952
    HURT: 908
    helped stats (abs) min: 1 max: 5762 x̄: 37.71 x̃: 16
    helped stats (rel) min: <.01% max: 28.57% x̄: 3.54% x̃: 2.09%
    HURT stats (abs)   min: 1 max: 573 x̄: 39.51 x̃: 10
    HURT stats (rel)   min: <.01% max: 27.78% x̄: 1.93% x̃: 0.66%
    abs t: 0.34, p: 73.70%
    rel t: 9.88, p: <.01%
    Inconclusive result (cannot disprove both null hypotheses).

And here is the result of the mean confidence interval tests on the same data:

    total cycles in shared programs: 559378112 -> 559340386 (<.01%)
    cycles in affected programs: 10791218 -> 10753492 (0.35%)
    helped: 1952
    HURT: 908
    helped stats (abs) min: 1 max: 5762 x̄: 37.71 x̃: 16
    helped stats (rel) min: <.01% max: 28.57% x̄: 3.54% x̃: 2.09%
    HURT stats (abs)   min: 1 max: 573 x̄: 39.51 x̃: 10
    HURT stats (rel)   min: <.01% max: 27.78% x̄: 1.93% x̃: 0.66%
    95% mean confidence interval for cycles value: -18.27 -8.11
    95% mean confidence interval for cycles %change: -1.98% -1.63%
    Cycles are helped.

Since the confidence interval is calculated from the sample mean and the sample standard deviation, it can include values outside the sample minimum and maximum. This can lead to unexpected conclusions. In this case all of the affected shaders were helped, but the result is inconclusive:

    total instructions in shared programs: 7886959 -> 7886925 (<.01%)
    instructions in affected programs: 1340 -> 1306 (2.54%)
    helped: 4
    HURT: 0
    helped stats (abs) min: 2 max: 15 x̄: 8.50 x̃: 8
    helped stats (rel) min: 0.63% max: 4.30% x̄: 2.45% x̃: 2.43%
    95% mean confidence interval for instructions value: -20.44 3.44
    95% mean confidence interval for instructions %change: -5.78% 0.89%
    Inconclusive result (value mean confidence interval includes 0).

v2: Don't log statistics for spills or fills. Simplify t-test logging.

v3: Use confidence interval instead.

Acked-by: Jason Ekstrand <jason@jlekstrand.net>
1cba7ba9
licenses  
shaders  
.gitignore  
COPYING  
Makefile  
README.md  
check_dependencies.pl  
fdreport.py  
intel_run  
intel_stub.c  
nvreport.py  
report.py  
run.c  
run.py  
sireport.py 