writing
This commit is contained in: parent 65e63dee92, commit 0ce3d849cd
@@ -6,6 +6,6 @@
\input{./chapters/02.04-webruntime.tex}
\input{./chapters/03.00-methodology.tex}
\input{./chapters/04.00-results.tex}
\input{./chapters/05-conclusion.tex}
\input{./chapters/05-discussion.tex}
\input{./chapters/06-conclusion.tex}
\input{./chapters/07-application.tex}
@@ -154,10 +154,10 @@ For running multiple benchmarks the class \texttt{BenchmarkSuite} was created. I
\end{figure}

Figure \ref{fig:benchmarking-statemachine} shows the state machine of the suite. Based on this diagram the UI component shows action buttons so the user can interact with the state. While running, the suite checks whether a state change was requested and acts accordingly, by pausing the benchmarks or by resetting all gathered statistics when stopping. A minimal sketch of this logic is given below.
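
A minimal sketch of how such a requested-state check could look is given in the following listing. It is illustrative only; the names \texttt{State}, \texttt{requestedState} and \texttt{applyRequestedState} are assumptions, not the actual implementation.

\begin{lstlisting}[caption={Hypothetical sketch of the suite's state handling}]
// Illustrative sketch: names and structure are assumptions.
const State = Object.freeze({
  IDLE: 'idle',
  RUNNING: 'running',
  PAUSED: 'paused'
});

class BenchmarkSuite {
  constructor(benchmarks) {
    this.benchmarks = benchmarks;
    this.state = State.IDLE;
    this.requestedState = null;
  }

  // The action buttons of the UI request state changes.
  start() { this.requestedState = State.RUNNING; }
  pause() { this.requestedState = State.PAUSED; }
  stop()  { this.requestedState = State.IDLE; }

  // Checked between benchmark runs to apply pending requests.
  applyRequestedState() {
    if (this.requestedState === null) return;
    if (this.requestedState === State.IDLE) {
      // Stopping resets all statistics gathered so far.
      this.benchmarks.forEach(b => { b.stats = []; });
    }
    this.state = this.requestedState;
    this.requestedState = null;
  }
}
\end{lstlisting}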
\clearpage

\subsubsection{The user interface}

The user interface has three regions: one for configuring input parameters, one for controlling the benchmark process and lastly a diagram of the results.
The user interface has three regions: one for configuring input parameters, one for controlling the benchmark process and lastly a diagram of the results. Figure \ref{fig:benchmark-ui} shows the user interface.

\begin{figure}[!htb]
\centering
@@ -166,8 +166,6 @@ The user interface has three regions. One for configuring input parameters. One
\label{fig:benchmark-ui}
\end{figure}

\paragraph{Settings} First the input parameters of the algorithm have to be specified. For that, several prepared polylines can be chosen from. They are introduced in chapter \ref{ch:benchmark-data}. Instead of testing a single tolerance value the user can specify a range. This way the behavior of the algorithms can be observed in one chart. The high quality mode got its name from Simplify.js. If it is enabled, there is no radial-distance preprocessing step before the Douglas-Peucker routine is applied. The next option determines which benchmarks will be run. The options are mentioned in chapter \ref{ch:benchmark-cases}. One of the three implemented benchmark methods can be selected. Depending on the method chosen, additional options appear to further specify the benchmark parameters. The last option deals with chart rendering. Debouncing limits the rate at which functions fire; in this case the chart delays rendering when data points come in at a fast rate, as sketched below.
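
Such a debouncing helper could look as follows. This is an illustrative sketch, not the actual code of the benchmark application; \texttt{chart.render} stands in for the real rendering call.

\begin{lstlisting}[caption={A debounce helper (illustrative sketch)}]
// Returns a wrapped function that only fires after `wait`
// milliseconds have passed without further calls.
function debounce(fn, wait) {
  let timer = null;
  return function (...args) {
    clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), wait);
  };
}

// Example: delay chart rendering while data points stream in.
const renderDebounced = debounce(() => chart.render(), 100);
\end{lstlisting}
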
\paragraph{Run Benchmark} This control displays the status of the benchmark suite. Here benchmarks can be started, stopped, paused and resumed. It also shows the progress of the completed benchmarks as a percentage and in absolute numbers.
@@ -197,7 +195,7 @@ The second polyline used for benchmarking contains 116.829 positions. It represe

\begin{figure}[htb]
\centering
\includegraphics[width=.9\linewidth]{images/dataset-bavaria.png}
\includegraphics[width=.7\linewidth]{images/dataset-bavaria.png}
\caption{The Bavaria test data visualized}
\label{fig:dataset-bavaria}
\end{figure}

@@ -2,10 +2,13 @@

In this chapter the results are presented. A multitude of tests had to be made: multiple devices were used to run several benchmarks on different browsers and under various parameters. To organize which benchmarks had to run, all the problem dimensions were clarified first. Devices are categorized into desktop and mobile devices. The browsers to test come from the four major browser vendors involved in WebAssembly development: Firefox from Mozilla, Chrome from Google, Edge from Microsoft and Safari from Apple. For each of the two data sets a fixed range of tolerances is set to maintain consistency across the diagrams. The values are explained in chapter \ref{ch:benchmark-data}. The other parameter, "high quality", can be either switched on or off. The three chart types are explained in chapter \ref{ch:benchmark-cases}.

\todo[inline]{describe website for results}

Each section in this chapter describes a set of benchmarks run on the same system. A table at the beginning indicates the problem dimensions chosen for inspection. After a description of the system and a short summary of the case, the results are presented in the form of graphs. Those are the graphs produced by the application described in chapter \ref{ch:benchmark-app}.

\subsection{Case 1 - Windows - wasm vs js}
\label{ch:case1}
\marginpar{hp pavilion}
\marginpar{6 charts}
\marginpar{questions: 1, 3, 5}
@@ -27,9 +30,9 @@ The device is a \textsf{HP Pavilion x360 - 14-ba101ng}\footnote{\path{https://su
\input{./results-benchmark/win_chro_simplify_vs_true.tex}
\input{./results-benchmark/win_ffox_simplify_vs_true.tex}
\input{./results-benchmark/win_edge_simplify_vs_true.tex}
\clearpage

\subsection{Case 2 - Windows - wasm runtime analysis}
\label{ch:case2}
\marginpar{hp pavilion}
\marginpar{2 charts}
\marginpar{questions: 2, 3, 5}
@@ -47,9 +50,9 @@ For this case the same device as in the former case is used. To compare the resu

\input{./results-benchmark/win_edge_simplify_stack_false.tex}
\input{./results-benchmark/win_edge_simplify_stack_true.tex}
\clearpage

\subsection{Case 3 - MacBook Pro - wasm vs js}
\label{ch:case3}
\marginpar{MacBook Pro 15}
\marginpar{4 charts}
\marginpar{Chrome and FF comparable to results above}
@@ -63,14 +66,13 @@ A 2018 MacBook Pro 15" will be used to test the safari browser. For comparison t
\caption{Problem dimensions of Case 3}
\end{table}

\input{./results-benchmark/mac_safa_bavaria_vs_false.tex}
\input{./results-benchmark/mac_ffox_bavaria_vs_false.tex}
\input{./results-benchmark/mac_safa_bavaria_vs_true.tex}
\input{./results-benchmark/mac_ffox_bavaria_vs_true.tex}
\clearpage

\subsection{Case 4 - Ubuntu - turf.js analysis}
\label{ch:case4}
\marginpar{Lenovo Miix 510}
\marginpar{4 charts}
\marginpar{Firefox because orig simplify is faster}
@@ -86,10 +88,9 @@ A 2018 MacBook Pro 15" will be used to test the safari browser. For comparison t
\input{./results-benchmark/ubu_ffox_bavaria_jsstack_false.tex}
\input{./results-benchmark/ubu_ffox_bavaria_vs_true.tex}
\input{./results-benchmark/ubu_ffox_bavaria_jsstack_true.tex}
\clearpage

\subsection{Case 5 - iPhone - mobile testing}

\label{ch:case5}
\marginpar{iPhone??}
\marginpar{6 charts}

@@ -1,10 +0,0 @@
\section{Discussion}
Questions to answer:
\begin{itemize}
\item Differences between browsers?
\item How much influence does the environment change have (storeCoords)?
\item What is faster, js or wasm? In which cases?
\item Is turf making sense?
\item How much influence does the high quality mode have?
\item Difference between devices?
\end{itemize}
thesis/chapters/05-discussion.tex (new file, 42 lines)
@@ -0,0 +1,42 @@
\section{Discussion}

In this section the results are interpreted. The section is structured into the different questions to answer. First it will be analyzed what the browser differences are. One section will deal with the performance of the pure JavaScript implementations while the next will inspect how Simplify.wasm performs. Then further insights into the performance of the WebAssembly implementation will be given. It will be investigated how long it takes to set up the WebAssembly call and how much time is spent actually executing the simplification routines. Next the case of Turf.js will be addressed and whether its format conversions are reasonable under specific circumstances. Finally the performance of mobile devices will be evaluated.


\subsection{Browser differences for the JavaScript implementations}

The first thing to see from the results of chapters \ref{ch:case1} and \ref{ch:case3} is that there is actually a considerable performance difference between the two versions of Simplify.js. So here we take a closer look at the JavaScript performance of the browsers. Interestingly, a clear winner between the similar algorithms cannot be determined, as the performance is inconsistent across browsers. While the original version is faster in Firefox and Safari, the altered version is superior in Chrome and Edge. This is regardless of whether the high quality mode is switched on or not. The difference is however more significant when the preprocessing step is disabled.

In figures \ref{fig:win_chro_simplify_vs_true} and \ref{fig:win_edge_simplify_vs_true} one can see how similarly Chrome and Edge perform with high quality mode enabled. When it is disabled however, the two algorithms perform similarly in Edge (figure \ref{fig:win_edge_simplify_vs_false}) while in Chrome the alternative version still improves upon the original.

In Firefox the result is very different. Without the high quality mode the original version performs about 2.5 times better than the alternative, as figure \ref{fig:win_ffox_simplify_vs_false} shows. When the preprocessing is disabled the performance gain is even higher: the original performs consistently three times faster, as seen in figure \ref{fig:win_ffox_simplify_vs_true}.

The same results can be reproduced under Firefox on macOS with the "Bavarian outline" dataset (figures \ref{fig:mac_ffox_bavaria_vs_false} and \ref{fig:mac_ffox_bavaria_vs_true}). Interestingly, under Safari the algorithms perform similarly, with a small preference for the original version. This applies to either case tested (figures \ref{fig:mac_safa_bavaria_vs_false} and \ref{fig:mac_safa_bavaria_vs_true}).

With so much variance it is hard to determine the best performing browser regarding the JavaScript implementations. Under the right circumstances Chrome can produce the fastest results with the alternative implementation. Safari is consistently very fast, even though it falls short of Firefox's results with the original algorithm when high quality is turned on. The greatest discrepancy was produced by Firefox with high quality requested: there the alternative version produced the slowest results, while the results with the original Simplify.js can compete with Chrome's results with the Simplify.js alternative. Edge lies in between, with results that are neither the slowest nor the fastest.

\subsection{Browser differences for Simplify.wasm}

As diverse as the results of the last chapter were, as uniform they are here. The performance of the Simplify.wasm function is consistent across all browsers tested. This is a major benefit brought by WebAssembly, often described as predictable performance.

The variance is very low when the preprocessing is turned off through the high quality mode. The browsers produce about the same runtimes under the same conditions. When high quality is off, Chrome gets its nose ahead with a mean runtime of 0.66ms. Edge follows with 1.02ms and Firefox takes an average of 1.10ms. The results of chapter \ref{ch:case3} show that Safari is a bit faster than Firefox in the high quality mode but slower without it.

\subsection{Insights into Simplify.wasm}

So far, when the performance of Simplify.wasm was addressed, it meant the time spent for the whole process: preparing the memory, running the algorithm in the wasm context and loading the result back into JavaScript. This makes sense when comparing it to the JavaScript library with the motive of replacing it one for one. It does however not produce meaningful comparisons of WebAssembly performance in contrast to the native JavaScript runtime.

This is because until Simplify.wasm actually gets to the WebAssembly computation, some time is spent preparing the data, and loading the result back takes time afterwards. To see how long that is, the additional chart type "Simplify.wasm runtime insights" was created. In figures \ref{fig:win_edge_simplify_stack_false} and \ref{fig:win_edge_simplify_stack_true} the results can be seen. Each data point contains a stacked column to represent the proportion of time spent on each task. The blue section represents the time spent to initialize the memory, the red one the execution of the compiled WebAssembly code. Lastly, the green part shows the time spent getting the coordinates back into the right format. A sketch of how these phases could be measured is given below.

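The three phases could be measured roughly as follows. The function names \texttt{initMemory}, \texttt{runWasmSimplify} and \texttt{loadResult} follow the terminology above, but their signatures are assumptions, not the actual library code.

\begin{lstlisting}[caption={Timing the phases of a Simplify.wasm call (sketch)}]
// Illustrative sketch: signatures are assumptions.
function timedSimplifyWasm(points, tolerance, highQuality) {
  const t0 = performance.now();
  const ptr = initMemory(points); // blue: flatten and copy into wasm memory
  const t1 = performance.now();
  const resultLength =
    runWasmSimplify(ptr, points.length, tolerance, highQuality); // red
  const t2 = performance.now();
  const result = loadResult(ptr, resultLength); // green: convert back
  const t3 = performance.now();
  return {
    result,
    timings: { init: t1 - t0, wasm: t2 - t1, load: t3 - t2 }
  };
}
\end{lstlisting}
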
First the parts where JavaScript is run will be examined. There is as good as no variance in the memory initialization. This is due to the fact that this step does not depend on any parameter other than the polyline length. Initial versions of the library produced in this thesis were not as efficient in flattening the coordinate array as the final version. By replacing the built-in \texttt{Array.prototype.flat}-method with a simple for loop, a good amount of optimization was achieved on the JavaScript side of the Simplify.wasm process (sketched below). The flat method is a rather new feature of ECMAScript and its performance might be enhanced in future browser versions. This example shows however that when writing JavaScript code one can quickly deviate from the "fast path" even when dealing with simple problems.

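The difference can be illustrated as follows; the loop variant is a sketch of the kind of replacement described, not the verbatim library code.

\begin{lstlisting}[caption={Flattening coordinate pairs: built-in flat vs. a plain loop (sketch)}]
// Concise built-in variant, measurably slower here:
const flatCoords = points.flat();

// Loop variant producing the same flat [x0, y0, x1, y1, ...] array:
function flattenCoords(points) {
  const out = new Array(points.length * 2);
  for (let i = 0; i < points.length; i++) {
    out[2 * i] = points[i][0];     // x
    out[2 * i + 1] = points[i][1]; // y
  }
  return out;
}
\end{lstlisting}
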
On the other side of the process lies the function \texttt{loadResult}. It depends on the size of the resulting polyline. Since this is often very low in the examples used, the green bar can rarely be seen. Only at low tolerance values, as in figure \ref{fig:win_edge_simplify_stack_false}, is the influence visible. The maximum fraction there is at tolerance value 0.05, where the operation takes 4.26\% of the total execution time. A sketch of this step follows below.

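Assuming the simplified coordinates lie as consecutive 64-bit floats in the linear memory, such a \texttt{loadResult} step could look like the following sketch; the parameter names and the memory layout are assumptions.

\begin{lstlisting}[caption={Reading the simplified polyline back from wasm memory (sketch)}]
// Illustrative sketch: assumes [x0, y0, x1, y1, ...] as
// float64 values starting at byte offset `ptr`.
function loadResult(memory, ptr, pointCount) {
  const flat = new Float64Array(memory.buffer, ptr, pointCount * 2);
  const points = new Array(pointCount);
  for (let i = 0; i < pointCount; i++) {
    points[i] = [flat[2 * i], flat[2 * i + 1]];
  }
  return points;
}
\end{lstlisting}
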
Now when comparing the two graphs one can clearly see that the influence of the JavaScript portions is much greater when the high quality mode is turned off. The time taken for preparing the memory is about 0.67ms in both cases. In the first case the execution time of the algorithm is so low that it makes up only 24.47\% of the total when taking the median values. In the case where high quality is enabled the results do not look as drastic: the median execution time is 4.31ms and with that much greater than the preparation time. Whether JavaScript is at an advantage in the first case, and whether the high execution time justifies the switch of runtimes in the latter, will be examined in the next chapter.

\subsection{Comparison Simplify.wasm vs Simplify.js}
\todo[inline]{when is what faster}

\subsection{Analysis of Turf.js implementation}
\todo[inline]{When is turf.js faster}

\subsection{Mobile device analysis}
BIN thesis/main.pdf
Binary file not shown.
@@ -5,7 +5,7 @@
\usepackage{geometry}
\geometry{
a4paper,
width=150mm,top=15mm,bottom=25mm,
margin=25mm,
headheight=15pt,
includehead,
marginparwidth=2cm
@@ -24,6 +24,8 @@
% \rhead{\parbox[t]{0.5\textwidth}{\RaggedLeft\leftmark\strut}}
% \setlength{\headheight}{5\baselineskip}
\pagestyle{fancy}
%\fancyhead[L]{\leftmark}
%\fancyhead[R]{\small \rightmark }

% configure listings
\usepackage{listings} % for code highlighting