writing
This commit is contained in:
parent
a2b3885ec5
commit
460dd7983b
@@ -1,24 +1,11 @@
-\input{chapters/01-introduction.tex}
-\newpage
-
-\input{chapters/02-principles.tex}
-\newpage
-
-\input{chapters/03-algorithms.tex}
-\newpage
-
-\input{chapters/04-webassembly.tex}
-\newpage
-
-\input{chapters/05-benchmarking.tex}
-\newpage
-
-\input{chapters/06-integration.tex}
-\newpage
-
-\input{chapters/07-results.tex}
-\newpage
-
-\input{chapters/08-conclusion.tex}
-\newpage
+\input{chapters/02.00-theory.tex}
+\input{chapters/02.01-Generalization.tex}
+\input{chapters/02.02-Dataformats.tex}
+\input{chapters/02.03-Algorithms.tex}
+\input{chapters/02.04-webruntime.tex}
+\input{chapters/03.00-methodology.tex}
+\input{chapters/03.01-benchmark.tex}
+\input{chapters/03.02-integration.tex}
+\input{chapters/04-results.tex}
+\input{chapters/05-conclusion.tex}
@@ -1,36 +0,0 @@
-\section{Principles}
-% Principles
-
-\subsection{Polygon basics}
-
-\subsubsection{Topological aspects}
-
-
-\subsection{LineString simplification}
-
-\subsubsection{Positional errors}
-
-\subsubsection{Length errors}
-
-\subsubsection{Area Errors}
-
-
-\subsection{Runtimes on the Web}
-
-\subsubsection{Webassembly}
-
-
-\subsection{Coordinate representation}
-
-\paragraph{in Javascript}
-
-\paragraph{in C}
-
-\paragraph{in C++}
-
-
-\subsection{Data Formats}
-
-\subsection{GeoJSON}
-
-\subsection{TopoJSON}
4
thesis/chapters/02.00-theory.tex
Normal file
@@ -0,0 +1,4 @@
+\section{Theory}
+% Principles
+
+In this chapter the theory behind polygon simplification will be explained. The simplification process is part of generalization in cartography, so a few words will first be said about generalization to give a broad overview of the topic. It will be clarified which goals drive the reduction of data quantity, especially in the context of web applications. Then the data formats that the data comes in will be explained. From there a closer look can be taken at how the simplification can be accomplished.
7
thesis/chapters/02.01-Generalization.tex
Normal file
@@ -0,0 +1,7 @@
+
+\subsection{Generalization in cartography}
+
+\subsubsection{Goals of reducing data}
+
+
+\subsubsection{Automated generalization}
52
thesis/chapters/02.02-Dataformats.tex
Normal file
@@ -0,0 +1,52 @@
+
+\subsection{Geodata formats on the Web}
+
+Here the data formats used throughout this thesis will be explained.
+
+
+\paragraph{The JavaScript Object Notation (JSON) Data Interchange Format} was derived from the ECMAScript Programming Language Standard\footnote{\path{https://tools.ietf.org/html/rfc8259}}. It is a text format for the serialization of structured data. As a text format it is well suited for data exchange between server and client, and it can easily be consumed by JavaScript. These characteristics are ideal for web-based applications. It does, however, only support a limited number of data types: four primitive ones (string, number, boolean and null) and two structured ones (objects and arrays). Objects are unordered collections of name-value pairs, while arrays are ordered lists of values. JSON was meant as a replacement for XML, as it provides a more human-readable format. Complex data structures can be created through nesting.
+
+\paragraph{The GeoJSON Format} is a geospatial data interchange format\footnote{\path{https://tools.ietf.org/html/rfc7946}}. As the name suggests it is based on JSON and deals with data representing geographic features. Several geometry types are defined for compatibility with the types in the OpenGIS Simple Features Implementation Specification for SQL\footnote{\path{https://portal.opengeospatial.org/files/?artifact_id=829}}. These are Point, MultiPoint, LineString, MultiLineString, Polygon, MultiPolygon and the heterogeneous GeometryCollection. Listing \ref{lst:geojson-example} shows a simple example of a GeoJSON object with one point feature. A more complete example can be viewed in the file \path{./data/example-7946.geojson}.
+
+\lstinputlisting[
+float=!htb,
+language=javascript,
+caption=An example for a GeoJSON object,
+label=lst:geojson-example
+]{../data/example-simple.geojson}
+
+
+\todo[inline]{Explain the nested array coordinates of polygons}
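The nested coordinate arrays of polygons can be illustrated with a short sketch. The polygon below is a hypothetical minimal example, not one of the thesis data sets; it only shows the ring structure GeoJSON prescribes:

```javascript
// A GeoJSON Polygon: an array of linear rings, each ring an array of
// [longitude, latitude] positions whose first and last position coincide.
const polygon = {
  type: "Polygon",
  coordinates: [
    // exterior ring
    [[100.0, 0.0], [101.0, 0.0], [101.0, 1.0], [100.0, 1.0], [100.0, 0.0]],
    // interior ring: a hole in the polygon
    [[100.2, 0.2], [100.8, 0.2], [100.8, 0.8], [100.2, 0.8], [100.2, 0.2]]
  ]
};

const [exterior, ...holes] = polygon.coordinates;
console.log(exterior.length); // 5 positions, the ring is closed
console.log(holes.length);    // 1 hole
```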
+
+GeoJSON is mainly used for web-based mapping. Since it is based on JSON, it inherits its strengths. For one, there is the enhanced readability through reduced markup overhead compared to XML-based formats like GML. Interoperability with web applications comes for free, since the parsing of JSON objects is integrated in JavaScript. Unlike the Esri Shapefile format, a single file is sufficient to store and transmit all relevant data, including feature properties.
+
+Among its downsides is that a text-based format cannot store the geometries as efficiently as a binary format could. Also, only vector-based data types can be represented. Another disadvantage can be the strictly non-topological approach. Every feature is completely described by one entry. When features share common components, however, like boundaries of neighboring polygons, these data points will be encoded twice in the GeoJSON object. On the one hand this further raises concerns about data size. On the other hand it makes it more difficult to execute topological analysis on the data set. Luckily there is a related data structure that tackles this problem.
+
+\todo[inline]{Extract more info about topology: https://www.esri.com/news/arcuser/0401/topo.html}
+
+
+\paragraph{TopoJSON} is an extension of GeoJSON that aims to encode data structures into a shared topology\footnote{\path{https://github.com/topojson/topojson-specification}}. It supports the same geometry types as GeoJSON but differs in some additional properties and new object types like "Topology" and "GeometryCollection". Its main feature is that LineStrings, Polygons and their multi-part equivalents must define their line segments in a common property called "arcs". The geometries themselves then reference the arcs from which they are made up. This reduces the redundancy of data points. Another feature is the quantization of positions. To use it, one defines a "transform" object which specifies a scale and a translation with which all coordinates are encoded. Together with delta-encoding of position arrays, one obtains small integer values that are better suited for efficient serialization and reduce the file size.
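The quantization and delta-encoding can be sketched as follows. The topology object is a made-up minimal example and `decodeArc` an illustrative helper, not code from the TopoJSON reference implementation; it only demonstrates the decoding rule: sum the deltas, then apply scale and translation.

```javascript
// One delta-encoded, quantized TopoJSON arc and its transform.
const topology = {
  transform: { scale: [0.5, 0.5], translate: [100.0, 0.0] },
  arcs: [
    // integer deltas: each position is the running sum of the previous ones
    [[4, 0], [2, 2], [2, -2]]
  ]
};

// Decode an arc back into absolute coordinates.
function decodeArc(topology, arc) {
  const [sx, sy] = topology.transform.scale;
  const [tx, ty] = topology.transform.translate;
  let x = 0, y = 0;
  return arc.map(([dx, dy]) => {
    x += dx;
    y += dy;
    return [x * sx + tx, y * sy + ty];
  });
}

console.log(decodeArc(topology, topology.arcs[0]));
// → [[102, 0], [103, 1], [104, 0]]
```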
+
+\todo[inline]{Explain why topology-preserving shape simplification is important}
+
+
+\paragraph{Coordinate representation} Both GeoJSON and TopoJSON represent positions as an array of numbers. The elements denote longitude, latitude and optionally altitude, in that order. For simplicity this thesis will deal with two-dimensional positions only. A polyline is described by an array of these positions, as seen in listing \ref{lst:coordinates-array}.
+
+\begin{lstlisting}[
+float=htb,
+label=lst:coordinates-array,
+caption=Polyline coordinates in nested-array form
+]
+[[102.0, 0.0], [103.0, 1.0], [104.0, 0.0], [105.0, 1.0]]
+\end{lstlisting}
+
+There will, however, be one library in this thesis that expects coordinates in a different format. Listing \ref{lst:coordinates-object} shows a polyline in the format this library expects. Here each location is represented by an object with x and y properties.
+
+\begin{lstlisting}[
+float=htb,
+label=lst:coordinates-object,
+caption=Polyline in array-of-objects form
+]
+[{x: 102.0, y: 0.0}, {x: 103.0, y: 1.0}, {x: 104.0, y: 0.0}, {x: 105.0, y: 1.0}]
+\end{lstlisting}
+
+To distinguish these formats in future references, the first format will be called nested-array format, while the latter will be called array-of-objects format.
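Converting between the two formats is a one-liner each way. The helper names `toObjectFormat` and `toArrayFormat` are hypothetical, used only for illustration:

```javascript
// nested-array form → array-of-objects form
function toObjectFormat(coords) {
  return coords.map(([x, y]) => ({ x, y }));
}

// array-of-objects form → nested-array form
function toArrayFormat(points) {
  return points.map(({ x, y }) => [x, y]);
}

const nested = [[102.0, 0.0], [103.0, 1.0], [104.0, 0.0], [105.0, 1.0]];
const objects = toObjectFormat(nested);
console.log(objects[0]); // { x: 102, y: 0 }
console.log(JSON.stringify(toArrayFormat(objects)) === JSON.stringify(nested)); // true
```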
39
thesis/chapters/02.03-Algorithms.tex
Normal file
@@ -0,0 +1,39 @@
+
+\subsection{Polyline simplification}
+
+\subsubsection{Heuristics}
+
+\paragraph{Positional errors}
+
+\paragraph{Length errors}
+
+\paragraph{Area Errors}
+
+\subsubsection{Algorithms}
+Compression algorithms.
+
+\paragraph{n-th point algorithm}
+
+\paragraph{Random-point routine}
+
+\paragraph{Radial distance algorithm}
+
+\paragraph{Perpendicular distance algorithm}
+
+\paragraph{Reumann-Witkam simplification}
+
+\paragraph{Opheim simplification}
+
+\paragraph{Lang simplification}
+
+\paragraph{Douglas-Peucker simplification}
+
+\paragraph{with reduction parameter}
+
+\paragraph{Jenks simplification}
+
+\paragraph{Visvalingam-Whyatt simplification}
+
+\paragraph{Zhao-Saalfeld simplification}
+
+\subsubsection{Summary}
@@ -1,26 +1,26 @@
-\section[WebAssembly]{Running the algorithms on the web platform}
 % The Web Platform
+\subsection[Web runtimes]{Running the algorithms on the web platform}
 
 JavaScript has long been the only native programming language of web browsers. With the development of WebAssembly there seems to be an alternative on its way that comes with high promises. This technology, with its benefits and drawbacks, will be explained in this chapter. It will be used to execute the algorithms under inspection in this thesis.
 
-\subsection{Introduction to Webassembly}
+\subsubsection{Introduction to Webassembly}
 
 \todo[inline]{Present WebAssembly}
 
-\subsubsection{Existing compilers}
+\paragraph{Existing compilers}
 
 \todo[inline]{Languages from which to compile}
 \todo[inline]{emscripten}
 \todo[inline]{assemblyscript}
 \todo[inline]{rust}
 
-\subsubsection{Technical hurdles}
+\paragraph{Technical hurdles}
 
 \todo[inline]{Managing memory}
 \todo[inline]{passing arrays}
 
-\subsubsection{Benefits of WebAssembly}
+\paragraph{Benefits of WebAssembly}
 
 Why are people going through the hassle of bringing machine code to a platform with a working scripting engine? Is JavaScript really that awful? It is often stated that WebAssembly can bring performance benefits. It makes sense that statically typed machine code beats scripting languages performance-wise. It has to be observed, however, whether the overhead of switching contexts negates this performance gain. JavaScript has seen a lot of performance improvements over the past years. Not least Google's development of the V8 engine has brought JavaScript to an acceptable speed for extensive calculations. The engine observes the execution of running JavaScript code and performs optimizations comparable to those of compilers.
 
@@ -34,7 +34,7 @@ The javascript ecosystem has rapidly evolved the past years. Thanks to package m
 To summarize, the two main benefits expected from WebAssembly are performance and integration. In this thesis these two benefits will be tested.
 
-\subsection{Two test cases - performance and integration}
+\paragraph{Two test cases - performance and integration}
 
 The benefits that WebAssembly promises shall be tested in two separate web pages: one for the performance measurements and one to test the integration of existing libraries.
@@ -1,28 +0,0 @@
-\section{Algorithms}
-Compression algorithms.
-
-\subsection{n-th point algorithm}
-
-\subsection{Random-point routine}
-
-\subsection{Radial distance algorithm}
-
-\subsection{Perpendicular distance algorithm}
-
-\subsection{Reumann-Witkam simplification}
-
-\subsection{Opheim simplification}
-
-\subsection{Lang simplification}
-
-\subsection{Douglas-Peucker simplification}
-
-\subsubsection{with reduction parameter}
-
-\subsection{Jenks simplification}
-
-\subsection{Visvalingam-Whyatt simplification}
-
-\subsection{Zhao-Saalfeld simplification}
-
-\subsection{Summary}
1
thesis/chapters/03.00-methodology.tex
Normal file
@@ -0,0 +1 @@
+\section{Methodology}
@@ -1,10 +1,10 @@
-\section[Benchmark]{Implementation of a performance benchmark}
+\subsection{Implementation of a performance benchmark}
 
 % Performance benchmark
 
 In this chapter I will explain the approach taken to improve the performance of a simplification algorithm in a web browser via WebAssembly. The go-to library for this kind of operation is Simplify.js. It is the JavaScript implementation of the Douglas-Peucker algorithm with optional radial distance preprocessing. The library will be rebuilt in the C programming language and compiled to WebAssembly with Emscripten. A web page is built to produce benchmarking insights for comparing the two approaches performance-wise.
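The radial distance preprocessing mentioned here can be sketched as follows. This is a simplified re-implementation for illustration, not the actual Simplify.js source; it drops every point that lies within the tolerance of the last kept point, which already removes dense clusters cheaply before Douglas-Peucker runs.

```javascript
// Radial distance pass: keep a point only if it is farther than `tolerance`
// from the previously kept point; always keep the endpoints.
function radialDistance(points, tolerance) {
  const sqTolerance = tolerance * tolerance;
  let prev = points[0];
  const kept = [prev];
  for (const point of points.slice(1)) {
    const dx = point[0] - prev[0];
    const dy = point[1] - prev[1];
    if (dx * dx + dy * dy > sqTolerance) {
      kept.push(point);
      prev = point;
    }
  }
  // make sure the line keeps its last point
  const last = points[points.length - 1];
  if (prev !== last) kept.push(last);
  return kept;
}

const line = [[0, 0], [0.1, 0], [0.2, 0], [3, 0], [3.05, 0], [6, 0]];
console.log(radialDistance(line, 1)); // → [[0, 0], [3, 0], [6, 0]]
```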
 
-\subsection{State of the art: Simplify.js}
+\subsubsection{State of the art: Simplify.js}
 \label{sec:simplify.js}
 % Simplify.JS + turf
 
@@ -33,7 +33,7 @@ Since it is not clear which case is faster, and given how simple the required ch
 label=lst:diff-simplify.js
 ]{../lib/simplify-js-alternative/simplify.diff}
 
-\subsection{The webassembly solution}
+\subsubsection{The webassembly solution}
 \label{sec:benchmark-webassembly}
 
 In the scope of this thesis a library will be created that implements the same procedure as Simplify.js in C code. It will be made available on the web platform through WebAssembly. In the style of the model library it will be called Simplify.wasm. The compiler used will be Emscripten, as it is the standard for porting C code to WebAssembly.
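One practical detail of such a port is that nested JavaScript coordinates must cross the JavaScript/WebAssembly boundary as flat numeric memory. A hedged sketch of that flattening step (the helper name is made up; the actual Simplify.wasm glue code may differ):

```javascript
// Flatten nested-array coordinates into a Float64Array, the layout a C
// function can read as a plain double array: [x0, y0, x1, y1, ...].
function flattenCoords(coords) {
  const flat = new Float64Array(coords.length * 2);
  coords.forEach(([x, y], i) => {
    flat[2 * i] = x;
    flat[2 * i + 1] = y;
  });
  return flat;
}

const flat = flattenCoords([[102.0, 0.0], [103.0, 1.0]]);
console.log(Array.from(flat)); // → [102, 0, 103, 1]
// In an Emscripten build this buffer would then be copied into the module's
// linear memory before calling the exported simplification function.
```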
@@ -108,7 +108,7 @@ label=lst:wasm-util-load-result
 
-\subsection{The implementation of a web framework}
+\subsubsection{The implementation of a web framework}
 
 The performance comparison of the two methods will be realized in a web page. It will be built as a front-end web application that allows the user to specify the input parameters of the benchmark. These parameters are: the polyline to simplify, a range of tolerances to use for simplification and whether the so-called high-quality mode shall be used. By building this application it will be possible to test a variety of use cases on multiple devices. Also, the behavior of the algorithms can be researched under different preconditions. In the scope of this thesis a few cases will be investigated. The application structure will now be introduced.
 
@@ -139,11 +139,8 @@ In the upper right corner the different Use-Cases are listed. These cases implem
 
 On the bottom the different types of benchmarks implemented can be seen. They all implement the abstract \texttt{measure} function to return the mean time it takes to run a function specified in the given BenchmarkCase. The \texttt{IterationsBenchmark} runs the function a specified number of times, while the \texttt{OpsPerTimeBenchmark} always runs for a certain number of milliseconds to fit as many iterations as possible. Both methods have their benefits and drawbacks. Using the iterations approach, one cannot determine beforehand how long the benchmark will run. With fast devices and a small number of iterations one can even fall into the trap of the duration falling under the accuracy of the timer used. Those results would of course be unusable. It is however a very fast way of determining the speed of a function, and it is valuable for getting a first approximation of how the algorithms perform over the span of data points. The second type, the operations per time benchmark, seems to overcome this problem. It is however prone to garbage collection, engine optimizations and other background processes.\footnote{\path{https://calendar.perfplanet.com/2010/bulletproof-javascript-benchmarks/}}
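The two benchmark types can be sketched as follows. These are simplified stand-ins for the thesis classes, using `Date.now` instead of a high-resolution timer:

```javascript
// Iterations approach: run fn a fixed number of times, report the mean.
function iterationsBenchmark(fn, iterations) {
  const start = Date.now();
  for (let i = 0; i < iterations; i++) fn();
  return (Date.now() - start) / iterations; // mean milliseconds per call
}

// Ops-per-time approach: run as many iterations as fit into a time budget.
function opsPerTimeBenchmark(fn, budgetMs) {
  let ops = 0;
  const start = Date.now();
  while (Date.now() - start < budgetMs) {
    fn();
    ops++;
  }
  return ops / budgetMs; // operations per millisecond
}

const meanMs = iterationsBenchmark(() => Math.sqrt(123456), 1000);
console.log(meanMs >= 0); // true
```

The iterations variant illustrates the timer-accuracy trap described above: with a fast function and few iterations the elapsed time can round to zero.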
 
-Benchmark.js combines these approaches. In a first step it approximates the runtime in a few cycles. From this value it calculates the number of iterations to reach an uncertainty of at most 1\%. Then the samples are gathered. \todo{more}
+Benchmark.js combines these approaches. In a first step it approximates the runtime in a few cycles. From this value it calculates the number of iterations needed to reach an uncertainty of at most 1\%. Then the samples are gathered. \todo{more about Benchmark.js}\footnote{\path{http://monsur.hossa.in/2012/12/11/benchmarkjs.html}}
 
-\footnote{\path{http://monsur.hossa.in/2012/12/11/benchmarkjs.html}}
-
-\todo[inline]{BenchmarkType}
-\todo[inline]{BenchmarkSuite}
 For running multiple benchmarks the class \texttt{BenchmarkSuite} was created. It takes a list of BenchmarkCases and runs them through a BenchmarkType. The suite manages the starting, pausing and stopping of going through the list. It updates the statistics gathered on each cycle. By injecting an onCycle method, the \texttt{App} component can give live feedback about the progress.
 
 \subsubsection{The user interface}
@@ -1,15 +1,15 @@
-\section[Algorithm comparison]{Compiling an existing C++ library for use on the web}
+\subsection[Algorithm comparison]{Compiling an existing C++ library for use on the web}
 
 In this chapter I will explain how an existing C++ library was utilized to compare different simplification algorithms in a web browser. The library is named \textsl{psimpl} and was written in 2011 by Elmar de Koning. It implements various algorithms used for polyline simplification. This library will be compiled to WebAssembly using the Emscripten compiler. Furthermore, a web application will be created for interactively exploring the algorithms. The main case of application is simplifying polygons, but polylines will also be supported. The data format used to read in the data will be GeoJSON. To maintain topological correctness, an intermediate conversion to TopoJSON will be applied if requested.
 
-\subsection{State of the art: psimpl}
+\subsubsection{State of the art: psimpl}
 
 \textsl{psimpl} is a generic C++ library for various polyline simplification algorithms. It consists of a single header file \texttt{psimpl.h}. The algorithms implemented are \textsl{Nth point}, \textsl{distance between points}, \textsl{perpendicular distance}, \textsl{Reumann-Witkam}, \textsl{Opheim}, \textsl{Lang}, \textsl{Douglas-Peucker} and \textsl{Douglas-Peucker variation}. It has to be noted that the \textsl{Douglas-Peucker} implementation uses the \textsl{distance between points} routine, also named the \textsl{radial distance} routine, as a preprocessing step, just like Simplify.js (Section \ref{sec:simplify.js}). All these algorithms have a similar templated interface. The goal now is to prepare the library for a compiler.
 
 \todo[inline]{Describe the error statistics function of psimpl}
 
-\subsection{Compiling to WebAssembly}
+\subsubsection{Compiling to WebAssembly}
 
 As in the previous chapter, the compiler created by the Emscripten project will be used. This time the code is not directly meant to be consumed by a web application; it is a generic library. There are no entry points defined that Emscripten could export to WebAssembly. So the entry points will be defined in a new package named psimpl-js. It will contain a C++ file that uses the library, the compiled code and the JavaScript files needed for consumption in a JavaScript project. \textsl{psimpl} makes heavy use of C++ template functions, which cannot be handled by JavaScript. So entry points will be written for each exported algorithm. These entry points are the point of intersection between JavaScript and the library. Listing \ref{lst:psimpl-js-entrypoint} shows one example. They all follow the same procedure. First the pointer given by JavaScript is interpreted as a double pointer in line 2. This is the beginning of the coordinates array. \textsl{psimpl} expects iterators to the first and last point, so the pointer to the last point is calculated (line 3). The appropriate function template from psimpl is instantiated and called with the other given parameters (line 5). The result is stored in an intermediate vector.
 
@@ -38,7 +38,7 @@ The library code on JavaScript side is similar to the one in chapter \ref{sec:be
 \todo[inline]{More about javascript glue code with listing callSimplification.}
 
-\subsection{The implementation}
+\subsubsection{The implementation}
 
 The implementation is, just as in the last chapter, a web page, and thus JavaScript is used for the interaction. The source code is bundled with Webpack. React is the UI component library, and Babel is used to transform JSX to JavaScript. MobX\footnote{\path{https://mobx.js.org/}} is introduced as a state management library. It applies functional reactive programming by providing the utility to declare observable variables and triggering the update of derived state and other observers intelligently. To do that, MobX observes the usage of observable variables, so that only dependent observers react to updates. In contrast to other state libraries, MobX does not require the state to be serializable. Many existing data structures can be observed, like objects, arrays and class instances. It also does not constrain the state to a single centralized store like Redux\footnote{\path{https://redux.js.org/}} does. The final state diagram can be seen in figure \ref{fig:integration-state}. It represents the application state in an object model. Since this has drawbacks in showing the information flow, the observable variables are marked in red and computed ones in blue.
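The dependency tracking described here can be illustrated with a toy sketch. This is in no way MobX's actual implementation, only the core idea: reads performed inside a running reaction register that reaction, so later writes re-trigger it.

```javascript
// Minimal observable/autorun pair demonstrating automatic dependency tracking.
let currentReaction = null;

function observable(target) {
  const subscribers = new Map(); // property -> Set of reactions
  return new Proxy(target, {
    get(obj, key) {
      if (currentReaction) {
        if (!subscribers.has(key)) subscribers.set(key, new Set());
        subscribers.get(key).add(currentReaction); // record the read
      }
      return obj[key];
    },
    set(obj, key, value) {
      obj[key] = value;
      (subscribers.get(key) || []).forEach(reaction => reaction()); // re-run
      return true;
    }
  });
}

function autorun(fn) {
  const reaction = () => {
    currentReaction = reaction;
    try { fn(); } finally { currentReaction = null; }
  };
  reaction(); // run once immediately, tracking dependencies
}

const state = observable({ tolerance: 1 });
const seen = [];
autorun(() => seen.push(state.tolerance));
state.tolerance = 2;
console.log(seen); // → [1, 2]
```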
 
@@ -58,7 +58,7 @@ On the bottom the three main state objects can be seen. They are implemented as
 \paragraph{FeatureState} encapsulates the state of the vector features. Each layer is represented in both the text form and the object format of the GeoJSON standard. The text form is needed as a serializable form for detecting whether the map display needs to update on an action. As the original features come from a file or the server, the text representation is the source of truth and the object format derives from it. The simplified features are calculated asynchronously. This process is outsourced to a debounced reaction that updates the state upon finishing.
 
-\subsection{The user interface}
+\subsubsection{The user interface}
 
 After explaining the state model, the user interface (UI) shall be explained. The interface is implemented in components which are modeled in a shallow hierarchy. They represent and update the application state. In figure \ref{fig:integration-ui} the resulting web page is shown. The labeled regions correspond to the components. Their behavior will be explained in the following.
5
thesis/chapters/04-results.tex
Normal file
@@ -0,0 +1,5 @@
+\section{Results}
+
+\subsection{Benchmark results}
+
+\subsection{Comparing the results of different algorithms}
@@ -1,3 +0,0 @@
-\subsection{Benchmark results}
-
-\subsection{Comparing the results of different algorithms}
@@ -1 +1,2 @@
-for i in $(ls chapters); do printf '\\input{chapters/%s}\n\\newpage\n\n' $i; done > chapters.tex
+# for i in $(ls chapters); do printf '\\input{chapters/%s}\n\\newpage\n\n' $i; done > chapters.tex
+for i in $(ls chapters); do printf '\\input{chapters/%s}\n' $i; done > chapters.tex
3
thesis/main.lof
Normal file
@@ -0,0 +1,3 @@
+\contentsline {figure}{\numberline {1}{\ignorespaces UML diagram of the benchmarking application}}{15}{figure.1}%
+\contentsline {figure}{\numberline {2}{\ignorespaces The state model of the application}}{20}{figure.2}%
+\contentsline {figure}{\numberline {3}{\ignorespaces The user interface for the algorithm comparison.}}{21}{figure.3}%
BIN
thesis/main.pdf
Binary file not shown.
@@ -29,7 +29,11 @@
 \usepackage{color} % for defining colors in jslistings
 \input{custom-listing.tex}
 
-\setlength\parindent{0pt} % disable indentation for paragraphs
+% \setlength\parindent{0pt} % disable indentation for paragraphs
 
+% Start sections on new pages
+\let\stdsection\section
+\renewcommand\section{\newpage\stdsection}
+
 \title{Performance comparison of simplification algorithms for polygons in the context of web applications}
 \author{Alfred Melch}
@@ -51,6 +55,8 @@ Abstract goes here
 
 \input{chapters.tex}
 
+\listoffigures
+\newpage
 \lstlistoflistings
 \newpage
 