\section[Results]{Benchmark results}
In this chapter the benchmark results are presented. Multiple devices were used to run several benchmarks on different browsers and under various parameters. To decide which benchmarks to run, the problem dimensions were first clarified. Devices are categorized into desktop and mobile devices. The browsers under test come from the four major browser vendors involved in WebAssembly development: Firefox from Mozilla, Chrome from Google, Edge from Microsoft and Safari from Apple. For each of the two data sets a fixed range of tolerances is set to maintain consistency across the diagrams. The remaining parameter, the ``high quality'' mode, can either be switched on or off. The three chart types are explained in chapter \ref{ch:benchmark-cases}.
All benchmark results shown here can be explored interactively on the web page provided together with this thesis. The static files reside in the \path{build} folder. The results can be found by following the ``show prepared results'' link on the home page.
Each section in this chapter describes a set of benchmarks run on the same system. A table at the beginning indicates the problem dimensions chosen for inspection. After a description of the system and a short summary of the case, the results are presented in the form of graphs produced by the application described in chapter \ref{ch:benchmark-app}. Here the results are only briefly characterized; a deeper analysis follows in the next chapter.
\subsection{Case 1 - WebAssembly vs JavaScript in different browsers}
\label{ch:case1}
\begin{table}[htb]
\centering
\includegraphics[width=.75\linewidth]{./images/dimensions-1.png}
\caption{Problem dimensions of Case 1}
\label{tbl:dimensions-1}
\end{table}
First it is observed how the algorithms perform under different browsers. The chart used for this is the ``Simplify.js vs Simplify.wasm'' chart. A Windows system was chosen for this, as it allows running benchmarks under three of the four browsers in question. The dataset is the Simplify.js example, which will be simplified both with and without the high quality mode.
\\ % to prevent footnote split
The device is an HP Pavilion x360 - 14-ba101ng\footnote{\url{https://support.hp.com/us-en/product/hp-pavilion-14-ba100-x360-convertible-pc/16851098/model/18280360/document/c05691748}} convertible. It contains an Intel® Core™ i5-8250U processor with 4 cores and 6\,MB cache. The operating system is Windows 10 and the browsers are at their newest versions: Chrome 75, Firefox 68 and Edge 44.18362.1.0.
Table \ref{tbl:dimensions-1} summarizes the setting. For each problem dimension the chosen characteristics are highlighted in green. The number of benchmark diagrams in a chapter is determined by the number of characteristics selected. In this case three browsers are tested, each with two quality options, resulting in six diagrams.
\input{./results-benchmark/win_ffox_simplify_vs_false.tex}
\input{./results-benchmark/win_ffox_simplify_vs_true.tex}
The first two graphs (figures \ref{fig:win_ffox_simplify_vs_false} and \ref{fig:win_ffox_simplify_vs_true}) show the results for the Firefox browser. Here, and in all subsequent charts of this chapter, the red line indicates the performance of Simplify.wasm, the blue line represents Simplify.js and the green line its alternative, which operates on coordinates as nested arrays. The gray line represents the number of positions that remain in the simplified polyline.
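The difference between the two JavaScript variants lies solely in the coordinate format they consume. A minimal sketch of the two representations, with made-up sample coordinates used purely for illustration:

```javascript
// The original Simplify.js expects an array of point objects, while
// the alternative operates on GeoJSON-style nested arrays.
const objectFormat = [
  { x: 11.0, y: 48.0 },
  { x: 11.5, y: 48.2 },
  { x: 12.0, y: 48.1 },
];

// The same polyline in the nested-array format:
const nestedFormat = objectFormat.map(p => [p.x, p.y]);
// → [[11.0, 48.0], [11.5, 48.2], [12.0, 48.1]]
```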
By default, Simplify.js runs without the high quality mode. At the smallest tolerance chosen, the WebAssembly solution is the fastest method, but it is overtaken almost immediately by the original JavaScript implementation, which remains the fastest of the three methods from there on. The alternative is the slowest in every case.
With the high quality mode enabled, however, the original and the WebAssembly solution switch places. Here Simplify.wasm is always faster. The Simplify.js alternative clearly separates itself by being much slower than the other two. It does, however, have a steeper curve, while the original and the WebAssembly solution show fairly consistent performance across the whole tolerance range.
\input{./results-benchmark/win_chro_simplify_vs_false.tex}
\input{./results-benchmark/win_chro_simplify_vs_true.tex}
Figures \ref{fig:win_chro_simplify_vs_false} and \ref{fig:win_chro_simplify_vs_true} show the results under Chrome for the same setting. Here the performance ranking seems to be switched around, with the original being the slowest method in both cases. Its results are, however, very inconsistent. There is no clear curvature, which indicates some outside influence on the results: either there is a flaw in the implementation or a special case of engine optimization was hit.
Without the high quality mode, Simplify.wasm is overtaken by the Simplify.js alternative at a tolerance of 0.4. From there on the WebAssembly solution stagnates while the JavaScript one continues to get faster. With high quality enabled, the performance gain of WebAssembly is clearer than in Firefox. Here the Simplify.js alternative is the second fastest, followed by the original.
\input{./results-benchmark/win_edge_simplify_vs_false.tex}
\input{./results-benchmark/win_edge_simplify_vs_true.tex}
Interestingly, in the Edge browser the two JavaScript algorithms perform more alike when high quality is disabled. As can be seen in figure \ref{fig:win_edge_simplify_vs_false}, the turning point after which WebAssembly is no longer the fastest lies at a tolerance of around 0.45 to 0.6. When turning high quality on, the graph in figure \ref{fig:win_edge_simplify_vs_true} resembles the chart from Chrome, only with more consistent results for the original implementation.
\FloatBarrier
\subsection{Case 2 - Simplify.wasm runtime analysis}
\label{ch:case2}
\begin{table}[!htb]
\centering
\includegraphics[width=.75\linewidth]{./images/dimensions-2.png}
\caption{Problem dimensions of Case 2}
\label{tbl:dimensions-2}
\end{table}
This case uses the same device as the former one and, to keep the results of the two cases comparable, the same dataset. The Simplify.wasm runtime analysis was measured under the Edge browser. Table \ref{tbl:dimensions-2} summarizes this setting.
\input{./results-benchmark/win_edge_simplify_stack_false.tex}
\input{./results-benchmark/win_edge_simplify_stack_true.tex}
The bar charts visualize where the time is spent in the Simplify.wasm implementation. Each data point contains a stacked column representing the proportion of time spent on each task. The blue section represents the time spent initializing the memory, the red one the execution of the compiled WebAssembly code, and the green part the time spent on getting the coordinates back into the right format.
Inspecting figures \ref{fig:win_edge_simplify_stack_false} and \ref{fig:win_edge_simplify_stack_true}, one immediately notices that the time spent on memory preparation hardly varies in either of the two cases. Also, very little time is needed to load the result back from memory, especially as the tolerance gets higher. As mentioned, a further analysis of this follows in chapter \ref{ch:discussion}.
With high quality disabled, the results show a very steep curve for the execution time, so that the time span for preparing the memory quickly dominates the process. In the second graph it can be seen that this fraction is significantly lower due to the execution time being consistently higher.
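The three measured phases can be sketched as follows. This is an illustrative sketch only: a plain \texttt{Float64Array} stands in for the WebAssembly linear memory, and the simplification itself is stubbed out, since only the data flow between the phases matters here.

```javascript
// Sketch of the three phases measured in the stacked charts.
function simplifyWasmSketch(coords) {
  // Blue segment: initialize the memory with the flattened coordinates.
  const memory = new Float64Array(coords.length * 2);
  coords.forEach(([x, y], i) => {
    memory[2 * i] = x;
    memory[2 * i + 1] = y;
  });

  // Red segment: the compiled WebAssembly code would run here and
  // return the number of remaining points. Stubbed: keep all points.
  const resultLength = coords.length;

  // Green segment: read the result back into a JavaScript array.
  const simplified = [];
  for (let i = 0; i < resultLength; i++) {
    simplified.push([memory[2 * i], memory[2 * i + 1]]);
  }
  return simplified;
}
```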
\FloatBarrier
\subsection{Case 3 - Benchmarking Safari on MacOS}
\label{ch:case3}
\begin{table}[!htb]
\centering
\includegraphics[width=.75\linewidth]{./images/dimensions-3.png}
\caption{Problem dimensions of Case 3}
\label{tbl:dimensions-3}
\end{table}
A 2018 MacBook Pro 15'' is used to test the Safari browser. For comparison, the benchmarks are also run under Firefox on macOS. This time the Bavarian boundary is simplified with the preprocessing both enabled and disabled.
\input{./results-benchmark/mac_ffox_bavaria_vs_false.tex}
\input{./results-benchmark/mac_ffox_bavaria_vs_true.tex}
First, figures \ref{fig:mac_ffox_bavaria_vs_false} and \ref{fig:mac_ffox_bavaria_vs_true} show the setting under Firefox, and indeed the results are comparable to those from chapter \ref{ch:case1}. With high quality disabled, WebAssembly is the fastest for lower tolerances. After a certain point the original is faster, while the alternative comes close to the WebAssembly performance without intersecting it. When the high quality mode is enabled, the original comes closer to Simplify.wasm without being faster. The JavaScript alternative is still trailing behind.
\input{./results-benchmark/mac_safa_bavaria_vs_false.tex}
\input{./results-benchmark/mac_safa_bavaria_vs_true.tex}
The results of the Safari browser with high quality disabled (figure \ref{fig:mac_safa_bavaria_vs_false}) resemble figure \ref{fig:win_edge_simplify_vs_false}, where the Edge browser was tested. Both JavaScript versions, with similar performance, surpass the WebAssembly version at one point. Unlike in the Edge results, the original implementation is slightly ahead.
When the high quality mode is turned on, the JavaScript implementations still perform alike. However, Simplify.wasm is clearly faster, as seen in figure \ref{fig:mac_safa_bavaria_vs_true}. Here Simplify.wasm performs about twice as fast as the algorithms implemented in JavaScript. Those, however, decrease more steeply as the tolerance rises.
\FloatBarrier
\subsection{Case 4 - Measuring the Turf.js method}
\label{ch:case4}
\begin{table}[!htb]
\centering
\includegraphics[width=.75\linewidth]{./images/dimensions-4.png}
\caption{Problem dimensions of Case 4}
\label{tbl:dimensions-4}
\end{table}
In this case the system is a Lenovo Miix 510 convertible with Ubuntu 19.04 as the operating system. Again the Bavarian outline is used for simplification with both quality settings. The goal is to observe whether the Turf.js implementation is reasonable. The third kind of chart is used here; similar to the Simplify.wasm insights, stacked bar charts visualize the time spans of the subtasks. The results will be compared to the graphs of the ``Simplify.js vs Simplify.wasm'' chart. As the Turf.js method only makes sense when the original version is faster than the alternative, the benchmarks are performed in the Firefox browser.
\input{./results-benchmark/ubu_ffox_bavaria_vs_true.tex}
\input{./results-benchmark/ubu_ffox_bavaria_jsstack_true.tex}
Figure \ref{fig:ubu_ffox_bavaria_vs_true} shows how the JavaScript versions perform with high quality enabled. Here it is clear that the original version is preferable. Figure \ref{fig:ubu_ffox_bavaria_jsstack_true} shows the runtime of the Turf.js method. The red bar stands for the runtime of the Simplify.js function call; the blue and green bars represent the time taken for the format transformations before and after the algorithm, respectively. Again, the preparation of the original data takes significantly longer than the modification of the simplified line. Since the alternative implementation is so much slower than the original, it is actually more performant to transform the data format. As mentioned, further analysis follows in the next chapter.
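The Turf.js approach measured here can be sketched as a wrapper around the original Simplify.js: convert the nested-array coordinates into the point objects Simplify.js expects, simplify, and convert back. In this sketch the simplification call is stubbed with an identity function, as only the two conversion steps are of interest.

```javascript
// Stub standing in for the actual Simplify.js call.
const simplifyStub = points => points;

function turfStyleSimplify(nestedCoords, tolerance, highQuality) {
  // Transformation before the algorithm (blue bar).
  const points = nestedCoords.map(([x, y]) => ({ x, y }));

  // The Simplify.js function call (red bar), stubbed here.
  const simplified = simplifyStub(points, tolerance, highQuality);

  // Transformation after the algorithm (green bar).
  return simplified.map(p => [p.x, p.y]);
}
```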
\input{./results-benchmark/ubu_ffox_bavaria_vs_false.tex}
\input{./results-benchmark/ubu_ffox_bavaria_jsstack_false.tex}
The next two figures show the case where high quality is disabled. In figure \ref{fig:ubu_ffox_bavaria_vs_false} the two algorithms seem to converge. Looking at figure \ref{fig:ubu_ffox_bavaria_jsstack_false}, one can see that the data preparation gets more costly as the tolerance rises. From a tolerance of 0.0014 on, the alternative Simplify.js implementation is faster than the Turf.js method.
\FloatBarrier
\subsection{Case 5 - Mobile benchmarking}
\label{ch:case5}
\begin{table}[!htb]
\centering
\includegraphics[width=.75\linewidth]{./images/dimensions-5.png}
\caption{Problem dimensions of Case 5}
\label{tbl:dimensions-5}
\end{table}
Finally, the results from a mobile device are shown. The device is an iPad Air with iOS version 12.4. The Simplify.js example is simplified using the Safari and Firefox browsers. Again, both quality settings are used for the benchmarks.
\input{./results-benchmark/ipad_safa_simplify_vs_false.tex}
\input{./results-benchmark/ipad_safa_simplify_vs_true.tex}
When the high quality parameter is left in its default state, the WebAssembly solution is the fastest at low tolerance values (figure \ref{fig:ipad_safa_simplify_vs_false}). As seen before, the JavaScript versions get faster as the tolerance increases. The original Simplify.js version surpasses the WebAssembly performance while the alternative closely approaches it. As was the case on the desktop systems, the algorithms perform similarly when high quality is set to \texttt{true}. Figure \ref{fig:ipad_safa_simplify_vs_true} shows that Simplify.wasm is also the faster method here.
\input{./results-benchmark/ipad_ffox_simplify_vs_false.tex}
\input{./results-benchmark/ipad_ffox_simplify_vs_true.tex}
Interestingly, the results in figures \ref{fig:ipad_ffox_simplify_vs_false} and \ref{fig:ipad_ffox_simplify_vs_true} are exactly the same as the Safari results. In chapter \ref{ch:discussion} this will be examined further.