Alfred Melch 2019-08-25 11:33:36 +02:00
parent 8da2339aaa
commit f77616002a
4 changed files with 9 additions and 8 deletions


@@ -24,7 +24,7 @@ The variance is very low when the preprocessing is turned off through the high q
\subsection{Insights into Simplify.wasm}
\label{ch:discussion-wasm-insights}
So far, when the performance of Simplify.wasm was addressed, it meant the time spent on the whole process: from preparing the memory, through running the algorithm in the wasm context, to loading the result back into JavaScript. This makes sense when comparing it to the JavaScript library with the motive of replacing it one-to-one. It does, however, not produce meaningful comparisons of WebAssembly performance against the native JavaScript runtime. Further insights into the Simplify.wasm call will be provided here.
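
To make these stages explicit, the following minimal sketch shows the general shape of such a call. It assumes an Emscripten-style module object with exported \texttt{\_malloc} and \texttt{\_free} functions and the \texttt{HEAPF64} view; the exported \texttt{\_simplify} function, assumed here to simplify the polyline in place and return the number of remaining points, is purely illustrative.

\begin{verbatim}
// Sketch of a Simplify.wasm style call (function names illustrative).
function simplifyWasm(module, coords, tolerance, highQuality) {
  // 1. Prepare memory: flatten the coordinates and copy them
  //    into the module's linear memory.
  const flat = new Float64Array(coords.length * 2);
  for (let i = 0; i < coords.length; i++) {
    flat[2 * i] = coords[i][0];
    flat[2 * i + 1] = coords[i][1];
  }
  const ptr = module._malloc(flat.byteLength);
  new Float64Array(module.HEAPF64.buffer, ptr, flat.length).set(flat);

  // 2. Run the algorithm in the wasm context. Assumed to simplify
  //    in place and return the new point count.
  const n = module._simplify(ptr, coords.length, tolerance, highQuality);

  // 3. Load the result back to JavaScript and free the memory.
  const out = new Float64Array(module.HEAPF64.buffer, ptr, n * 2);
  const result = [];
  for (let i = 0; i < n; i++) {
    result.push([out[2 * i], out[2 * i + 1]]);
  }
  module._free(ptr);
  return result;
}
\end{verbatim}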
First, the parts where JavaScript is run will be examined. Chapter \ref{ch:case2} shows that there is virtually no variance in the memory initialization. This is due to the fact that this step depends on no parameter other than the polyline length. Initial versions of the library produced in this thesis were not as efficient in flattening the coordinate array as the final version. By replacing the built-in \texttt{Array.prototype.flat}-method with a simple \texttt{for} loop, a good optimization was achieved on the JavaScript side of the Simplify.wasm process. The \texttt{flat} method is a rather new feature of ECMAScript and its performance might be enhanced in future browser versions. This example shows that when writing JavaScript code one can quickly deviate from the ``fast path'' even when dealing with simple problems.
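
The replacement in question looked roughly like the following sketch. The built-in method is generic and handles arbitrary nesting, while the hand-written loop is specialized to coordinate pairs:

\begin{verbatim}
const coords = [[13.41, 52.52], [13.37, 52.51], [13.38, 52.50]];

// Built-in, generic: flattens one nesting level of any array.
const flat1 = coords.flat();

// Hand-written: a simple for loop specialized to coordinate
// pairs, which turned out considerably faster here.
const flat2 = new Array(coords.length * 2);
for (let i = 0; i < coords.length; i++) {
  flat2[2 * i] = coords[i][0];
  flat2[2 * i + 1] = coords[i][1];
}
\end{verbatim}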
@@ -35,19 +35,19 @@ Now when comparing the two graphs one can clearly see that the influence of the
\subsection{Comparison Simplify.wasm vs Simplify.js}
In the results, Simplify.wasm is always faster when the high quality mode is enabled. The Firefox and Chrome browsers are able to optimize at least one version of the JavaScript routines in a way that it comes close to the performance of the WebAssembly based library. In Edge, the original version is three times slower, the alternative version two times. In Safari, both take about twice as long as Simplify.wasm. On the other hand, with preprocessing enabled, there is often one JavaScript version that surpasses the Simplify.wasm performance. In Edge and Safari it's even both JavaScript versions that perform faster at higher tolerance values.
This shows that it is not always ideal to replace a library with a WebAssembly based approach. The cost of the overhead might exceed the performance gain when the execution time is low. In section \ref{ch:discussion-wasm-insights} it is pointed out that the pure execution time of the simplification algorithm is fastest with WebAssembly. When preparing the geodata beforehand, for example by serializing it into a binary representation, one could call the bytecode immediately. This shifts further effort regarding memory management onto the web developer. One has to weigh the added complexity against the performance benefit when considering such approaches.
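
A minimal sketch of this idea, assuming the same Emscripten-style module as above, an asynchronous context, and a polyline that was serialized as raw 64-bit floats, could look like this:

\begin{verbatim}
// Hypothetical: the polyline was serialized as raw 64-bit floats
// and is fetched as a binary file instead of GeoJSON.
const response = await fetch('polyline.f64');
const flat = new Float64Array(await response.arrayBuffer());

// No flattening is needed anymore; a single copy into the linear
// memory remains before the wasm function can be called directly.
const ptr = module._malloc(flat.byteLength);
module.HEAPF64.set(flat, ptr / Float64Array.BYTES_PER_ELEMENT);
\end{verbatim}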
%\subsection{Analysis of Turf.js implementation}
In this section, the method used by Turf.js is evaluated. As seen when using the Chrome or Edge browser, the original library is the slower JavaScript method for simplification. There, the data transformation is definitely unfavorable. In Safari, where the JavaScript versions perform equally, the overhead still leads to worse run times. Lastly, the Firefox browser is examined. The results from chapter \ref{ch:case4} show that there are indeed cases where the method prevails: those where the execution time is large enough to justify the overhead, namely when high quality is enabled, or at low tolerance values when high quality is disabled.
Those conclusions are similar to the Simplify.wasm results, where overhead costs also played a role. Here, however, one has to bear in mind that only a single browser shows a performance gain, and only under certain circumstances. In the three other browsers the method is definitely disadvantageous.
\subsection{Mobile device analysis}
Here, the results from the mobile device are discussed. It stands out that the two browsers show identical results. This is due to Apple Inc.'s guidelines concerning the iOS App Store, which specifically restrict web browsers from using any engine other than WebKit \parencite[see][section 2.5.6]{apple2019appstore}. Because of this, Firefox for iOS cannot use the Gecko engine developed by Mozilla, which explains why the two browsers perform equally.
The results from the two browsers lead to conclusions similar to those from the Safari benchmarks under macOS. Simplify.wasm is always fastest in high quality mode. With preprocessing, the JavaScript implementations outperform the WebAssembly based approach at higher tolerance ranges. Both JavaScript versions perform equally.


@@ -2,7 +2,7 @@
%In this section a conclusion is drawn. First, the results are briefly summarized. Then the work done is reflected upon and possible improvements are suggested. Finally, an outlook on future work is given.
In this thesis, the performance of simplification algorithms in the context of web applications was analyzed. The dominant library for this task in the JavaScript ecosystem is Simplify.js. It implements the Douglas-Peucker algorithm with optional radial-distance preprocessing. Using a technology called WebAssembly, this library was recreated with the goal of achieving better performance. This recreation was called Simplify.wasm. Additionally, a JavaScript alternative to Simplify.js was tested that operates on a different representation of polylines. To perform several benchmarks on different devices, a website was built. The results were obtained using the library Benchmark.js, which produces statistically sound benchmarks.
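
Conceptually, a single benchmark case on that website corresponds to a small Benchmark.js suite like the following sketch, where \texttt{points}, \texttt{tolerance}, \texttt{highQuality} and the two simplification functions are placeholders:

\begin{verbatim}
const suite = new Benchmark.Suite();

suite
  .add('simplify.js', () => simplify(points, tolerance, highQuality))
  .add('simplify.wasm', () => simplifyWasm(points, tolerance, highQuality))
  .on('cycle', (event) => console.log(String(event.target)))
  .on('complete', function () {
    console.log('Fastest is ' + this.filter('fastest').map('name'));
  })
  .run();
\end{verbatim}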
It was shown that the WebAssembly based library produced more stable results across different web browsers. The performance of the JavaScript based ones varied greatly: not only did the absolute run times differ, there were also differences in which variant was the faster one. Generally, it can be said that the complexity of the operation determines whether Simplify.wasm is preferable to Simplify.js. This comes from the overhead of calling Simplify.wasm: to call the WebAssembly code, the coordinates first have to be stored in a linear memory object. With short run times this overhead can exceed the performance gain through WebAssembly. The pure algorithm run time was always shorter with WebAssembly.
@@ -19,11 +19,11 @@ The library created in this thesis can be improved in a few aspects. First, ther
Another improvement can be made by changing the abstractions implemented in JavaScript. These were constructed with the goal of providing an experience similar to Simplify.js. The whole memory management is encapsulated in these abstractions: each call leads to allocating and freeing the memory for the polyline. One could provide a better interface to the memory management, where the user of the library can preload a polyline and execute the algorithm on the prepared memory, as sketched below. Another approach could be to make use of serialized geodata: whole feature sets could be represented in a binary encoding and simplified in one WebAssembly call.
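
Such an interface could, for example, hand out a handle to the prepared memory. The following sketch is purely hypothetical; \texttt{wasmLib}, \texttt{load}, \texttt{simplify} and \texttt{free} are illustrative names, not part of the actual library:

\begin{verbatim}
// Hypothetical interface: load once, simplify many times.
const polyline = wasmLib.load(coords); // allocate + fill memory once

const coarse = polyline.simplify(5.0, false);
const fine = polyline.simplify(0.5, false);

polyline.free(); // memory management is explicit for the user
\end{verbatim}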
% three dimensions
The geodata formats mentioned in this thesis, namely GeoJSON and TopoJSON, allow for three-dimensional coordinates. This third value often represents altitude. The library Simplify.js provides alternate source code to operate on those types of coordinates. The library created here did not implement a solution for them. If provided, Simplify.wasm will ignore the third coordinate value and run the algorithm on the two-dimensional polyline. The functionality could be extended to support calculations on three-dimensional positions.
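
One way to do this would be to make the coordinate dimension a parameter of the flattening step, and likewise of the distance functions inside the wasm module. A sketch of the JavaScript side, with a hypothetical \texttt{dim} parameter, could look as follows:

\begin{verbatim}
// Hypothetical: flatten with a configurable dimension
// instead of hard-coded coordinate pairs.
function flatten(coords, dim) {
  const flat = new Float64Array(coords.length * dim);
  for (let i = 0; i < coords.length; i++) {
    for (let d = 0; d < dim; d++) {
      flat[i * dim + d] = coords[i][d];
    }
  }
  return flat;
}

const flat3d = flatten([[13.4, 52.5, 34.0], [13.3, 52.5, 36.5]], 3);
\end{verbatim}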
% psimpl.h
As mentioned, WebAssembly gives the ability to bring code from other programming languages to the web. A library was found that implements several different simplification algorithms in C++. This library can be compiled to WebAssembly. A successful build was developed in the early stages of this thesis. The outcome was not as suitable for a performance analysis as the direct port of the JavaScript library. In future work, however, this ported library can be used for a quality analysis of the different algorithms.
% compare with native
The main goal of projects like WebAssembly is to bring the web platform up to speed with native applications. Especially in the early days of JavaScript, the code that could run in web browsers was slow compared to native programs. Since then, JavaScript engines have evolved and brought huge performance gains, for example through just-in-time compilation. WebAssembly could reduce the gap to native execution even further. It will be interesting to see how large the cost of running a virtual machine in the browser really is. The code from Simplify.wasm can easily be compiled with general C compilers. A comparison of native execution with the results from this thesis would be interesting.

Binary file not shown.


@@ -5,6 +5,7 @@
\usepackage{geometry}
\geometry{
a4paper,
% top=25mm,left=30mm,right=40mm,bottom=20mm,
margin=25mm,
headheight=15pt,
includehead,