\section{Discussion}
\label{ch:discussion}

In this section the results are interpreted. It is structured as a series of questions. First, the browser differences will be analyzed: one part deals with the performance of the pure JavaScript implementations, while the next inspects how Simplify.wasm performs. Then, further insights into the performance of the WebAssembly implementation will be given by investigating how long it takes to set up the WebAssembly call and how much time is spent actually executing the simplification routines. Next, the case of Turf.js will be addressed, and whether its format conversions are reasonable under specific circumstances. Finally, the performance of mobile devices will be evaluated.
\subsection{Browser differences for the JavaScript implementations}

The first thing to see from the results of chapters \ref{ch:case1} and \ref{ch:case3} is that there is a considerable performance difference between the two versions of Simplify.js. So here we take a closer look at the JavaScript performance of the browsers. Interestingly, a clear winner between the similar algorithms cannot be determined, as the performance is inconsistent across browsers. While the original version is faster in Firefox and Safari, the altered version is superior in Chrome and Edge. This holds regardless of whether the high quality mode is switched on or not. The difference is, however, more significant when the preprocessing step is disabled.

In figures \ref{fig:win_chro_simplify_vs_true} and \ref{fig:win_edge_simplify_vs_true} one can see how similarly Chrome and Edge perform with high quality mode enabled. When it is disabled, however, the algorithms perform similarly in Edge, while in Chrome the alternative version still improves upon the original.

In Firefox the result is very different. Without the high quality mode the original version performs about 2.5 times faster than the alternative. When disabling the preprocessing, the performance gain is even higher: the original performs consistently three times faster.

The same results can be reproduced under Firefox on macOS with the "Bavarian outline" dataset. Interestingly, under Safari the algorithms perform similarly, with a small preference for the original version. This applies to both cases tested.

With so much variance it is hard to determine the best performing browser regarding the JavaScript implementations. Under the right circumstances Chrome can produce the fastest results with the alternative implementation. Safari is consistently very fast, even though it falls short of Firefox's results with the original algorithm when high quality is turned on. The greatest discrepancy was produced by Firefox with high quality enabled: there the alternative version produced the slowest results, while the results with Simplify.js can compete with Chrome's results with the Simplify.js alternative. Edge lies between these two browsers, with decent but not the fastest results.
\subsection{Browser differences for Simplify.wasm}

The variance is very low when the preprocessing is turned off through the high quality mode.
\subsection{Insights into Simplify.wasm}
\label{ch:discussion-wasm-insights}

So far, when the performance of Simplify.wasm was addressed, it meant the time spent on the whole process: from preparing the memory, through running the algorithm as WebAssembly bytecode, to loading the result back into JavaScript. This makes sense when comparing it to the JavaScript library with the motive of replacing it directly. It does, however, not produce meaningful comparisons of WebAssembly performance in contrast to the native JavaScript runtime. Further insights into the Simplify.wasm call are provided here.

First the parts where JavaScript is run will be examined. Chapter \ref{ch:case2} shows that there is as good as no variance in the memory initialization. This is to be expected, as this step depends on no parameter other than the polyline length. Initial versions of the library produced in this thesis were not as efficient in flattening the coordinate array as the final version. By replacing the built-in \texttt{Array.prototype.flat} method with a simple \texttt{for} loop, a good optimization was achieved on the JavaScript side of the Simplify.wasm process. The \texttt{flat} method is a rather new feature of ECMAScript and its performance might be enhanced in future browser versions. This example shows that when writing JavaScript code one can quickly deviate from the "fast path" even when dealing with simple problems.
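The optimization mentioned above can be sketched as follows. The function names are illustrative, and the measured difference between the two variants depends on the browser's engine:

```javascript
// Flatten [[x, y], ...] coordinate pairs into one flat number array,
// as needed before copying them into WebAssembly linear memory.

// Concise, but comparatively slow in current engines:
function flattenWithFlat(coords) {
  return coords.flat();
}

// The simple for loop that replaced it:
function flattenWithLoop(coords) {
  const flat = new Array(coords.length * 2);
  for (let i = 0; i < coords.length; i++) {
    flat[2 * i] = coords[i][0];
    flat[2 * i + 1] = coords[i][1];
  }
  return flat;
}

const line = [[11.1, 47.5], [11.2, 47.6], [11.3, 47.55]];
console.log(flattenWithLoop(line)); // [11.1, 47.5, 11.2, 47.6, 11.3, 47.55]
```

Both functions produce the same flat array; only the code path inside the engine differs.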

On the other side of the process is the function \texttt{loadResult}. It is dependent on the size of the resulting polyline. Since the result is often very small, the green bar can rarely be seen in figures \ref{fig:win_edge_simplify_stack_false} and \ref{fig:win_edge_simplify_stack_true}. Only at low tolerance values is its influence visible. The maximum fraction occurs at the tolerance value 0.05, where the operation takes 4.26\% of the total execution time.
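The memory preparation and \texttt{loadResult} steps essentially copy coordinates into and out of WebAssembly linear memory. A minimal sketch of what such helpers do, using only the standard \texttt{WebAssembly.Memory} API; the offsets and function names are illustrative, not the actual Simplify.wasm internals:

```javascript
// Sketch of moving coordinates across the JavaScript/WebAssembly boundary.
// Offsets and names are illustrative; the real module manages its own heap.
const memory = new WebAssembly.Memory({ initial: 1 }); // one 64 KiB page

// "Prepare memory": copy the flattened coordinates into linear memory.
function storeCoords(flat, byteOffset) {
  const view = new Float64Array(memory.buffer, byteOffset, flat.length);
  view.set(flat);
}

// "loadResult": read back the simplified points left in memory.
function loadResult(byteOffset, pointCount) {
  const view = new Float64Array(memory.buffer, byteOffset, pointCount * 2);
  const coords = [];
  for (let i = 0; i < pointCount; i++) {
    coords.push([view[2 * i], view[2 * i + 1]]);
  }
  return coords;
}

storeCoords([11.1, 47.5, 11.2, 47.6], 0);
console.log(loadResult(0, 2)); // [[11.1, 47.5], [11.2, 47.6]]
```

Because `loadResult` only touches the surviving points, its cost scales with the output size, which explains why it is barely visible at high tolerance values.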

Now when comparing the two graphs, one can clearly see that the influence of the JavaScript portions is much greater when the high quality mode is turned off. The time taken for preparing the memory is about 0.67ms in both cases. The execution time of the algorithm is so low in the first case that it makes up only 24.47\% of the total when taking the median values. In the case where high quality is enabled, the results do not look as drastic. The median execution time is 4.31ms and thus much greater than the preparation time. Whether JavaScript is at an advantage in the first case, and whether the high execution time justifies the switch of runtimes in the latter, will be examined in the next chapter.
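These numbers allow a rough break-even estimate. The sketch below treats the JavaScript-side work as a fixed per-call overhead, which is a simplifying assumption for illustration only:

```javascript
// Rough break-even model for the wasm call, using the medians above.
// Treat the JavaScript-side work as a fixed overhead per call; wasm is
// only worthwhile when its algorithmic speedup exceeds that overhead.
const overheadMs = 0.67; // memory preparation (JavaScript side)

function totalWasmTime(execMs) {
  return overheadMs + execMs;
}

// High quality off: execution is only ~24.47% of the total call time,
// so the overhead dominates and a fast JS version may win overall.
const execLow = (overheadMs * 0.2447) / (1 - 0.2447); // ~0.22 ms
// High quality on: execution dominates, the overhead is comparatively small.
const execHigh = 4.31;

console.log(totalWasmTime(execLow).toFixed(2)); // "0.89"
console.log(totalWasmTime(execHigh).toFixed(2)); // "4.98"
```

In the first case roughly three quarters of the call is overhead; in the second the overhead is about an eighth of the total.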
\subsection{Comparison Simplify.wasm vs Simplify.js}

The results from chapters \ref{ch:case1} and \ref{ch:case3} have shown that Simplify.wasm is always faster when the high quality mode is enabled. The Firefox and Chrome browsers are able to optimize at least one version of the JavaScript routines in a way that comes close to the performance of the WebAssembly based library. In Edge, the original version is three times slower, the alternative two times. In Safari, both take about twice as long as Simplify.wasm. On the other hand, with preprocessing enabled, there is often one JavaScript version that surpasses the Simplify.wasm performance. In Edge and Safari it is even both JavaScript versions that perform faster at higher tolerance values.

This shows that it is not always ideal to replace a library with a WebAssembly based approach. The cost of the overhead might exceed the performance gain when the execution time is low. In section \ref{ch:discussion-wasm-insights} it is pointed out that the pure execution time of the simplification algorithm is fastest with WebAssembly. When preparing the geodata beforehand, for example by serializing it in a binary representation, one could immediately call the bytecode. This, however, demands additional memory management effort from the web developer. One has to weigh the complexity overhead against the performance benefit when considering such approaches.
\subsection{Analysis of Turf.js implementation}

In this section the method used by Turf.js is evaluated. As seen when using the Chrome or Edge browser, the original library is the slower JavaScript method for simplification. There the data transformation is definitely unfavorable. In Safari, where the JavaScript versions perform equally, the overhead still leads to worse run times. Lastly the Firefox browser is examined. The results from chapter \ref{ch:case4} show that there are indeed cases where the method from Turf.js achieves better performance than running the alternative Simplify.js library directly. These are the cases where the execution time is large enough to justify the overhead, namely when high quality is enabled, or at low tolerance values when it is disabled.
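The transformation in question converts GeoJSON positions into the object format Simplify.js expects and back again on every call. A simplified sketch of that round trip follows; the names are illustrative and this is not Turf's actual code:

```javascript
// Sketch of the per-call format conversion performed by Turf.js.
// GeoJSON stores positions as [x, y] arrays; Simplify.js expects
// objects of the shape { x, y }.
function toXY(coords) {
  return coords.map(([x, y]) => ({ x, y }));
}

function fromXY(points) {
  return points.map((p) => [p.x, p.y]);
}

// Stand-in for the Simplify.js call between the two conversions.
function simplifyStub(points) {
  return points; // the real library would drop points here
}

const geojsonCoords = [[0, 0], [1, 1], [2, 0]];
const result = fromXY(simplifyStub(toXY(geojsonCoords)));
console.log(result); // [[0, 0], [1, 1], [2, 0]]
```

Both conversions allocate a new object or array per point, which is why the overhead grows with the input size and only pays off when the simplification itself is expensive.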

Those conclusions are similar to the Simplify.wasm results, where overhead costs also played a role. Here, however, one has to bear in mind that only one browser benefits from the method, and only in certain circumstances. In the three other browsers the method is clearly disadvantageous.
\subsection{Mobile device analysis}

Here the results from the iPad benchmarks in chapter \ref{ch:case5} are discussed. It stands out that the two browsers show identical results. This is due to Apple Inc.'s guidelines concerning the iOS App Store, which specifically prohibit web browsers from using any engine other than WebKit \parencite[see][section 2.5.6]{apple2019appstore}. Because of this, Firefox for iOS cannot use the Gecko engine developed by Mozilla, which explains why the two browsers perform equally.

The results from the two browsers lead to similar conclusions as the benchmarks of Safari under macOS did. Simplify.wasm is always fastest in high quality mode. With preprocessing, the JavaScript implementations outperform the WebAssembly based approach at higher tolerance ranges. Both JavaScript versions perform equally.

The mobile device has lower hardware capabilities than the MacBook Pro tested in chapter \ref{ch:case3}. It is therefore not surprising that the run times are higher on the iPad. The concrete results of chapters \ref{ch:case3} and \ref{ch:case5} are not directly comparable, as different data sets were used, so the focus lies on the behavior of the algorithms. With high quality enabled, the algorithms take about twice as long on the mobile device. This affects both JavaScript and WebAssembly equally. With high quality disabled, one can see that the JavaScript performance of the desktop device gets gradually better, while on the mobile device the performance stagnates.
%In this section a conclusion is drawn. First the results will be shortly summarized. The work done will be reflected and possible improvements are suggested. At last there will be an prospect about future work.

In this thesis, the performance of simplification algorithms in the context of web applications was analyzed. The dominant library for this task in the JavaScript ecosystem is Simplify.js. It implements the Douglas-Peucker algorithm with optional radial distance preprocessing. By using a technology called WebAssembly, this library was recreated with the goal of achieving better performance. This recreation was called Simplify.wasm. A JavaScript alternative to Simplify.js that operates on a different representation of polylines was also tested. To perform several benchmarks on different devices a website was built. The results were gathered using the library Benchmark.js, which produces statistically relevant benchmarks.

The WebAssembly based library showed more stable results across different web browsers. The performance of the JavaScript based ones varied greatly: not only did the absolute run times vary, there were also differences in which JavaScript variant was the faster one. Generally it can be said that the complexity of the operation determines whether Simplify.wasm is preferable to Simplify.js. This comes from the overhead of calling Simplify.wasm: to call the WebAssembly code, the coordinates first have to be stored in a linear memory object. With short run times this overhead can exceed the performance gain through WebAssembly. The pure algorithm run time was always shorter with WebAssembly.

The alternative Simplify.js version was created because another major library, Turf.js, implemented an odd routine for simplification: to call Simplify.js, the data format of the polyline was transformed back and forth. It could be shown that this process has a negative impact on performance in most browsers. Merely one browser showed faster run times with this method, and only when the run time of the algorithm was high.

The integration of a WebAssembly module requires more effort than a JavaScript one.
\subsection{Improvements and future work}
% file size
The library created in this thesis can be improved in a few aspects. First, there is the excessive file size produced by the Emscripten compiler. Section \ref{ch:file-sizes} already mentions this issue. A solution is proposed there to reduce the size of the byte code to about 500 bytes gzipped. This optimization is achieved by not using standard library functions. Only then will the library be competitive with the JavaScript original in this regard.
% memory management
Another improvement can be made by changing the abstractions implemented in JavaScript. These were constructed with the goal to achieve a similar experience to Simplify.js. The whole memory management is encapsulated in these abstractions. Each call leads to allocating and freeing the memory for the polyline. One could provide a better interface to the memory management where the user of the library can preload a polyline and execute the algorithm on the prepared memory. Another approach could be to make use of serialized geodata. Whole feature sets could be represented in a binary encoding and simplified in one WebAssembly call.
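Such a preloading interface could look like the following sketch. Everything here is hypothetical: the function names, the handle object, and the module interface are illustrative, and a plain JavaScript object stands in for the wasm exports, so this shows only the shape of the proposed API, not the actual Simplify.wasm implementation:

```javascript
// Hypothetical preload interface: allocate and fill memory once,
// then run the algorithm repeatedly on the prepared buffer.
function createPreloadedPolyline(module, flatCoords) {
  const ptr = module.alloc(flatCoords.length);
  module.write(ptr, flatCoords);
  return {
    simplify(tolerance, highQuality) {
      return module.simplify(ptr, flatCoords.length, tolerance, highQuality);
    },
    free() { module.free(ptr); },
  };
}

// Minimal plain-JavaScript stand-in so the sketch is self-contained.
const fakeModule = {
  heap: new Map(),
  next: 0,
  alloc(n) { const p = this.next; this.next += n; return p; },
  write(p, data) { this.heap.set(p, Float64Array.from(data)); },
  simplify(p, n, tol) { return this.heap.get(p).length / 2; }, // point count
  free(p) { this.heap.delete(p); },
};

const line = createPreloadedPolyline(fakeModule, [0, 0, 1, 1, 2, 0]);
console.log(line.simplify(0.5, false)); // 3
line.simplify(1.0, false); // second call: no re-upload of coordinates
line.free();
```

The key point is that the coordinate upload happens once in `createPreloadedPolyline`, so sweeping over several tolerance values pays the memory cost only a single time.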
% three dimensions
The geodata formats mentioned in this thesis, namely GeoJSON and TopoJSON, allow for three dimensional coordinates. The third value often represents altitude. The library Simplify.js provides alternate source code to operate on those types of coordinates. The library created here does not implement a solution for them. If provided, Simplify.wasm will ignore the third coordinate value and run the algorithm on the two dimensional polyline. The functionality could be extended to support calculations on three dimensional positions.
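The core of such an extension would be the distance metric driving the algorithms: the squared distances gain a third term. A sketch of the point-to-point variants used by the radial distance preprocessing (the point-to-segment distance used by Douglas-Peucker would be extended analogously; the helper names are illustrative):

```javascript
// Squared point-to-point distances, as used by the radial preprocessing.
// 2D variant, matching the current implementation:
function sqDist2D(ax, ay, bx, by) {
  const dx = ax - bx, dy = ay - by;
  return dx * dx + dy * dy;
}

// 3D variant, taking altitude into account:
function sqDist3D(ax, ay, az, bx, by, bz) {
  const dx = ax - bx, dy = ay - by, dz = az - bz;
  return dx * dx + dy * dy + dz * dz;
}

// Two points that differ only in altitude are identical in 2D but not in 3D:
console.log(sqDist2D(0, 0, 0, 0)); // 0
console.log(sqDist3D(0, 0, 0, 0, 0, 3)); // 9
```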
% psimpl.h
As mentioned, WebAssembly gives the ability to bring code from other programming languages to the web. A library was found that implements several different simplification algorithms in C++. This library can be compiled to WebAssembly; a successful build was produced in the early stages of this thesis. The outcome was not as appropriate for a performance analysis as the direct port of the JavaScript library. In future work, however, this ported library could be used for a quality analysis of the different algorithms.
% compare with native
The main goal of projects like WebAssembly is to bring the web platform up to speed with native applications. Especially in the beginning of JavaScript, the code that could run in web browsers was slow compared to native programs. Since then, JavaScript engines have evolved and brought huge performance gains, for example through just-in-time compilation. WebAssembly could reduce the gap to native execution even further. It will be interesting to see how high the cost of running a virtual machine in the browser really is. The code from Simplify.wasm can easily be compiled by general C compilers, so a comparison of native execution with the results from this thesis would be interesting.
\input{titlepage.tex}
\section*{Abstract}
In this thesis the performance of polyline simplification in web browsers is evaluated. Based on the JavaScript library Simplify.js, a WebAssembly solution is built to increase performance. The solutions implement the Douglas-Peucker polyline simplification algorithm with optional radial distance preprocessing. The format for polylines that Simplify.js expects differs from the representation used in major geodata formats. This discrepancy is obvious in another JavaScript library, Turf.js, where it is overcome by format transformations on each call. A slight variant of Simplify.js is proposed in this thesis that can operate directly on the format used in GeoJSON and TopoJSON. The three approaches, Simplify.js, the Simplify.js variant and Simplify.wasm, are compared across different browsers by creating a web page that gathers various benchmarking metrics. It is concluded that WebAssembly performance alone surpasses JavaScript performance. A drop-in replacement that includes memory management, however, bears overhead that can outweigh the performance gain. To fully utilize WebAssembly performance, more effort regarding memory management is required in web development. It is shown that the method used by Turf.js is unfavorable in most cases: merely one browser shows a performance gain under special circumstances. In the other browsers the use of the Simplify.js variant is preferable.
\newpage
\tableofcontents
\newpage
\listoffigures
\newpage
\listoftables
\newpage
\lstlistoflistings
\newpage
\setcounter{page}{1}
\pagenumbering{arabic}
\clearpage
\pagenumbering{gobble}
\printbibliography
\newpage