diff --git a/thesis/chapters/03.01-benchmark.tex b/thesis/chapters/03.01-benchmark.tex
index dca4a89..5770ebe 100644
--- a/thesis/chapters/03.01-benchmark.tex
+++ b/thesis/chapters/03.01-benchmark.tex
@@ -157,11 +157,11 @@ For running multiple benchmarks the class \texttt{BenchmarkSuite} was created. I
 The user interface has three regions. One for configuring input parameters. One for controlling the benchmark process and at last a diagram of the results.
-\paragraph{Settings} At first the input parameters of the algorithm have to be specified. For that there are some polylines prepared to choose from. They are introduced in chapter \ref{ch:benchmark-data}. Instead of testing a single tolerance value the user can specify a range. This way the behavior of the algorithms can be observed in one chart. The high Quality mode got its name from Simplify.js. If it is enabled there will be no radial-distance preprocessing step before applying the Douglas-Peucker routine. The next option determines which benchmarks will be run. The options are mentioned in chapter \ref{ch:benchmark-framework}. One of the three benchmark methods implemented can be selected. Depending on the method chosen additional options will show to further specify the benchmark parameters. The last option deals with chart rendering. Debouncing limits the rate at which functions fire. In this case the chart will delay rendering when datapoints come in at a fast rate.
+\paragraph{Settings} First, the input parameters of the algorithm have to be specified. For this, several prepared polylines are available to choose from; they are introduced in chapter \ref{ch:benchmark-data}. Instead of testing a single tolerance value, the user can specify a range. This way the behavior of the algorithms can be observed in a single chart. The high quality mode got its name from Simplify.js. If it is enabled, no radial-distance preprocessing step is performed before the Douglas-Peucker routine is applied. The next option determines which benchmarks will be run; the available options are described in chapter \ref{ch:benchmark-framework}. One of the three implemented benchmark methods can be selected. Depending on the chosen method, additional options appear to further specify the benchmark parameters. The last option concerns chart rendering. Debouncing limits the rate at which a function fires. In this case the chart delays rendering when datapoints come in at a fast rate.
 \paragraph{Run Benchmark} This is the control that displays the status of the benchmark suite. Here benchmarks can be started, stopped, paused and resumed. It also shows the progress of the benchmarks completed in percentage and absolute numbers.
-\paragraph{Chart}
+\paragraph{Chart} The chart shows a live diagram of the results. The title represents the selected chart. The legend lists the benchmark cases that will run. The algorithm parameters (dataset and high quality mode) and a description of the current platform can also be found here. The tolerance range maps onto the x-axis. On the y-axis two scales can be seen. The left-hand scale shows the unit in which performance is displayed; it corresponds to the colored lines. Every chart additionally shows the number of positions in the result as a grey line, whose scale is displayed on the right. This information is important for selecting a proper tolerance range, as it shows whether an appropriate order of magnitude has been chosen. Below the chart, additional controls are placed to adjust the visualization. The first selection lets the user choose between a linear and a logarithmic y-axis. The second one changes the unit of measure for performance. The two options are the mean time per operation in milliseconds (ms) and the number of operations that can be run in one second (hz). These options are only available for the chart ``Simplify.wasm vs. Simplify.js'', as the other two charts are stacked bar charts for which changing the default options would not make sense.
 Finally the result can be saved via a download button. A separate page can be fed with this file to display the diagram only.
 \subsubsection{The test data}
diff --git a/thesis/images/benchmark-ui.png b/thesis/images/benchmark-ui.png
index f07cc7b..ac6abdc 100644
Binary files a/thesis/images/benchmark-ui.png and b/thesis/images/benchmark-ui.png differ
diff --git a/thesis/main.lof b/thesis/main.lof
index 7864941..f923d77 100644
--- a/thesis/main.lof
+++ b/thesis/main.lof
@@ -1,4 +1,4 @@
 \contentsline {figure}{\numberline {1}{\ignorespaces UML diagram of the benchmarking application}}{17}{figure.1}%
 \contentsline {figure}{\numberline {2}{\ignorespaces The user interface for benchmarking application. (not final)}}{19}{figure.2}%
-\contentsline {figure}{\numberline {3}{\ignorespaces The state model of the application}}{23}{figure.3}%
+\contentsline {figure}{\numberline {3}{\ignorespaces The state model of the application}}{24}{figure.3}%
 \contentsline {figure}{\numberline {4}{\ignorespaces The user interface for the algorithm comparison. (not final)}}{25}{figure.4}%
diff --git a/thesis/main.pdf b/thesis/main.pdf
index 5c89699..617676c 100644
Binary files a/thesis/main.pdf and b/thesis/main.pdf differ
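The debouncing described in the Settings paragraph above can be sketched as follows. This is a minimal illustration only; the names (`debounce`, `renderChart`) and the 100 ms delay are assumptions for the sketch, not taken from the benchmark application's source.

```javascript
// Minimal debounce sketch. Each call cancels any pending invocation and
// restarts the timer, so the wrapped function only fires once calls have
// stopped arriving for `wait` milliseconds.
function debounce(fn, wait) {
  let timer = null;
  return function (...args) {
    clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), wait);
  };
}

// Re-rendering the chart for every incoming datapoint would be wasteful;
// the debounced wrapper renders once after the stream of updates pauses.
const renderChart = debounce(() => console.log('render'), 100);
renderChart();
renderChart();
renderChart(); // only this last call leads to a render, 100 ms later
```

A design note: debouncing differs from throttling in that a throttled function still fires at a fixed maximum rate during a burst, whereas a debounced one fires only after the burst ends, which matches the chart's behavior of delaying rendering while datapoints come in quickly.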