In this chapter several algorithms for polyline simplification will be explained. For each algorithm a short summary of the routine will be given. At the end a comparison will be drawn to determine the method to be used for benchmarking.
\paragraph{n-th point algorithm} This fairly simple algorithm was described by Tobler in 1966. The routine retains every n-th coordinate of the polyline. The larger the value of n, the greater the simplification. \parencite{clayton1985cartographic}
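To make the routines more tangible, short TypeScript sketches accompany the descriptions in this chapter. The \textsf{Point} type and all function names are purely illustrative and not taken from any particular library. For the n-th point algorithm such a sketch could look as follows:
\begin{verbatim}
type Point = { x: number; y: number };

// Keep every n-th point; the first and last point are always retained.
function nthPoint(points: Point[], n: number): Point[] {
  const result: Point[] = [];
  for (let i = 0; i < points.length; i += n) {
    result.push(points[i]);
  }
  const last = points[points.length - 1];
  if (result[result.length - 1] !== last) {
    result.push(last);
  }
  return result;
}
\end{verbatim}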
\paragraph{Random-point routine} This routine is derived from the n-th point algorithm. It sections the line into parts containing n consecutive positions. From each section a random point is chosen to construct the simplified line. \parencite{shi2006performance}
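A sketch of this variant, reusing the \textsf{Point} type from above, could pick one random position per section of \textsf{n} points:
\begin{verbatim}
// From each section of n consecutive points keep one random point.
function randomPoint(points: Point[], n: number): Point[] {
  const result: Point[] = [points[0]];
  for (let start = 1; start < points.length - 1; start += n) {
    const end = Math.min(start + n, points.length - 1);
    const pick = start + Math.floor(Math.random() * (end - start));
    result.push(points[pick]);
  }
  result.push(points[points.length - 1]);
  return result;
}
\end{verbatim}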
\paragraph{Radial distance algorithm} This is another simple algorithm, aimed at reducing points that are clustered too closely together. The algorithm sequentially goes through the line and eliminates all points whose distance to the current key is shorter than a given tolerance limit. As soon as a point with a greater distance is found, it becomes the new key. \parencite{koning2011polyline}
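A sketch of this routine, again with illustrative names, is shown below. Point clusters collapse onto their first member, while the endpoints are always kept:
\begin{verbatim}
function euclideanDistance(a: Point, b: Point): number {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

// Drop points that are closer to the current key than the tolerance.
function radialDistance(points: Point[], tolerance: number): Point[] {
  const result: Point[] = [points[0]];
  let key = points[0];
  for (let i = 1; i < points.length - 1; i++) {
    if (euclideanDistance(key, points[i]) >= tolerance) {
      result.push(points[i]);
      key = points[i];
    }
  }
  result.push(points[points.length - 1]);
  return result;
}
\end{verbatim}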
\paragraph{Perpendicular distance algorithm} Again a tolerance limit is specified. The measure checked against it is the perpendicular distance of a point to the line connecting its two neighbors. Only points whose distance exceeds this limit are retained. \parencite{koning2011polyline}
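The following sketch shows this check together with a small helper for the perpendicular distance, which the later sketches reuse as well:
\begin{verbatim}
// Perpendicular distance of p to the infinite line through a and b.
function perpendicularDistance(p: Point, a: Point, b: Point): number {
  const dx = b.x - a.x;
  const dy = b.y - a.y;
  const length = Math.hypot(dx, dy);
  if (length === 0) return Math.hypot(p.x - a.x, p.y - a.y);
  return Math.abs(dx * (p.y - a.y) - dy * (p.x - a.x)) / length;
}

// Retain only points whose distance to the line through their
// neighbors exceeds the tolerance.
function perpendicularSimplify(points: Point[],
                               tolerance: number): Point[] {
  const result: Point[] = [points[0]];
  for (let i = 1; i < points.length - 1; i++) {
    const d = perpendicularDistance(points[i], points[i - 1],
                                    points[i + 1]);
    if (d >= tolerance) {
      result.push(points[i]);
    }
  }
  result.push(points[points.length - 1]);
  return result;
}
\end{verbatim}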
\paragraph{Reumann-Witkam simplification} As the name implies, this algorithm was developed by Reumann and Witkam. In 1974 they described a routine that constructs a corridor, or search area, by placing two parallel lines in the direction of the line's initial tangent. The width of this corridor is user specified. Successive points are then checked until a point outside of this area is found. Its predecessor becomes a key, and the two points mark the new tangent for the search area. This procedure is repeated until the last point is reached. \parencite{reumann1974optimizing}
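Interpreting the user-specified width as the maximum perpendicular distance from the tangent line, a hedged sketch of the routine, reusing \textsf{perpendicularDistance} from above, could look like this:
\begin{verbatim}
// Slide a corridor of the given (half-)width along the line.
function reumannWitkam(points: Point[], tolerance: number): Point[] {
  const result: Point[] = [points[0]];
  let key = 0;      // start of the current corridor
  let tangent = 1;  // second point defining the corridor direction
  for (let i = 2; i < points.length; i++) {
    const d = perpendicularDistance(points[i], points[key],
                                    points[tangent]);
    if (d > tolerance) {
      // points[i] leaves the corridor: its predecessor becomes a key
      // and the two points define the new tangent.
      result.push(points[i - 1]);
      key = i - 1;
      tangent = i;
    }
  }
  result.push(points[points.length - 1]);
  return result;
}
\end{verbatim}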
\paragraph{Zhao-Saalfeld simplification} This routine, developed in 1997 and also called sleeve-fitting polyline simplification, is similar to the Reumann-Witkam algorithm. Its goal is to fit as many consecutive points as possible into the search area. The corridor is, however, not aligned to the initial tangent but to the last point of the current sequence. Starting from the key point, successors are added as long as all in-between points fit in the sleeve. If this constraint fails, a new sleeve is started from the last point of the previous section. \parencite{zhao1997linear}
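One possible reading of this routine in code, again only a sketch, grows the sleeve from the current key as long as all intermediate points stay within the tolerance of the line from the key to the candidate endpoint:
\begin{verbatim}
function sleeveFitting(points: Point[], tolerance: number): Point[] {
  const result: Point[] = [points[0]];
  let key = 0;
  let candidate = 1;
  while (candidate + 1 < points.length) {
    const next = candidate + 1;
    // Do all points between key and next fit into the sleeve?
    let fits = true;
    for (let j = key + 1; j < next; j++) {
      const d = perpendicularDistance(points[j], points[key],
                                      points[next]);
      if (d > tolerance) { fits = false; break; }
    }
    if (fits) {
      candidate = next;               // grow the sleeve
    } else {
      result.push(points[candidate]); // last fitting point is the key
      key = candidate;
      candidate = key + 1;
    }
  }
  result.push(points[points.length - 1]);
  return result;
}
\end{verbatim}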
\paragraph{Opheim simplification} In 1982 Opheim extended the Reumann-Witkam algorithm by constraining the search area. To do so, two parameters \textsf{dmin} and \textsf{dmax} are given. Starting from the key point, the last point inside a radial search region defined by \textsf{dmin} is taken to form the direction of the search corridor. If there is no point inside this region, the subsequent point is taken. Then the process of the Reumann-Witkam algorithm is applied, with the corridor constrained to a maximum distance of \textsf{dmax} from the key. \parencite{opheim1982fast}
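A sketch, reusing the distance helpers from the earlier sketches and under the assumption that \textsf{dmin} also serves as the perpendicular corridor tolerance (the exact parameterization varies between descriptions), might look as follows:
\begin{verbatim}
function opheim(points: Point[], dmin: number,
                dmax: number): Point[] {
  const result: Point[] = [points[0]];
  let key = 0;
  while (key < points.length - 1) {
    // Last point within dmin of the key defines the ray direction;
    // if no point is that close, the next point is used.
    let direction = key + 1;
    for (let i = key + 1; i < points.length &&
         euclideanDistance(points[key], points[i]) < dmin; i++) {
      direction = i;
    }
    // Reumann-Witkam style corridor, bounded by dmax from the key.
    let candidate = direction;
    let next = candidate + 1;
    while (next < points.length &&
           euclideanDistance(points[key], points[next]) < dmax &&
           perpendicularDistance(points[next], points[key],
                                 points[direction]) < dmin) {
      candidate = next;
      next++;
    }
    result.push(points[candidate]);
    key = candidate;
  }
  return result;
}
\end{verbatim}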
\paragraph{Lang simplification} Lang described this algorithm in 1969. The search area is defined by a specified number of points to look ahead of the key point. A line is constructed from the key point to the last point in the search area. If the perpendicular distance of all intermediate points to this line is below a tolerance limit, they are removed and the last point becomes the new key. Otherwise the search area is shrunk by excluding its last point, until the requirement is met or there are no more intermediate points. All the algorithms described so far operate on the line sequentially and have a linear time complexity. This one also operates sequentially, but one of the criticisms of the Lang algorithm is that it requires too much computer time \parencite{douglas1973algorithms}. The complexity of this algorithm is $\mathcal{O}(m^n)$ with \textsf{m} being the number of positions to look ahead. \parencite{lang1969rules}
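A sketch of the look-ahead procedure, with the same illustrative names as before, could be:
\begin{verbatim}
function lang(points: Point[], tolerance: number,
              lookAhead: number): Point[] {
  const result: Point[] = [points[0]];
  let key = 0;
  while (key < points.length - 1) {
    let end = Math.min(key + lookAhead, points.length - 1);
    // Shrink the segment key -> end until all intermediate
    // points lie within the tolerance.
    while (end > key + 1) {
      const fits = points.slice(key + 1, end).every(p =>
        perpendicularDistance(p, points[key], points[end])
          <= tolerance);
      if (fits) break;
      end--;
    }
    result.push(points[end]);
    key = end;
  }
  return result;
}
\end{verbatim}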
\paragraph{Douglas-Peucker simplification} David H. Douglas and Thomas K. Peucker developed this algorithm in 1973 as an improvement on the then predominant Lang algorithm. It is the first global routine described here. A global routine considers the entire line for the simplification process and comes closest to imitating manual simplification techniques \parencite{clayton1985cartographic}. The algorithm starts by constructing a line between the first point (anchor) and the last point (floating point) of the feature. The perpendicular distance of all points between those two is calculated. The intermediate point furthest away from the line becomes the new floating point, on the condition that its perpendicular distance is greater than the specified tolerance. Otherwise the line segment is deemed suitable to represent the whole line. In this case the floating point is considered the new anchor and the last point serves as the floating point again. The worst case complexity of this algorithm is $\mathcal{O}(nm)$ with $\mathcal{O}(n\log{}m)$ being the average complexity \parencite{koning2011polyline}. Here \textsf{m} is the number of points in the resulting line, which is not known beforehand. \parencite{douglas1973algorithms}
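The routine is commonly written recursively: the furthest intermediate point splits the line and both halves are simplified in the same way. A sketch of this formulation, reusing the helpers from above, could be:
\begin{verbatim}
function douglasPeucker(points: Point[],
                        tolerance: number): Point[] {
  if (points.length <= 2) return points.slice();
  const anchor = points[0];
  const floating = points[points.length - 1];
  let maxDistance = 0;
  let index = 0;
  for (let i = 1; i < points.length - 1; i++) {
    const d = perpendicularDistance(points[i], anchor, floating);
    if (d > maxDistance) { maxDistance = d; index = i; }
  }
  if (maxDistance <= tolerance) {
    // The segment anchor -> floating represents this part well.
    return [anchor, floating];
  }
  // Split at the furthest point and merge the simplified halves.
  const left = douglasPeucker(points.slice(0, index + 1), tolerance);
  const right = douglasPeucker(points.slice(index), tolerance);
  return left.slice(0, -1).concat(right);
}
\end{verbatim}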
\paragraph{Visvalingam-Whyatt simplification} This is another global point routine. It was developed in 1993. Visvalingam and Whyatt use an area-based method to rank the points by their significance. To do that, the ``effective area'' of each point has to be calculated. This is the area of the triangle the point spans with its two adjoining points \parencite{shi2006performance}. Then the point with the least effective area is iteratively eliminated, and the effective areas of its neighbors recalculated, until only two points are left. At each elimination the point is stored in a list alongside its associated area. This is the effective area of that point, or the associated area of the previously eliminated point in case the latter is higher. This way the algorithm can be used for scale-dependent and scale-independent generalizations. \parencite{visvalingam1993line}
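A naive sketch of this ranking, without the priority queue a production implementation would typically use, could look like this:
\begin{verbatim}
// Area of the triangle spanned by a point and its two neighbors.
function effectiveArea(prev: Point, p: Point, next: Point): number {
  return Math.abs((p.x - prev.x) * (next.y - prev.y) -
                  (p.y - prev.y) * (next.x - prev.x)) / 2;
}

// Rank every interior point by the area at which it is eliminated;
// filtering by an area threshold then yields the simplified line.
function visvalingamWhyatt(points: Point[]):
    { point: Point; area: number }[] {
  const remaining = points.slice();
  const ranked: { point: Point; area: number }[] = [];
  let lastArea = 0;
  while (remaining.length > 2) {
    let minIndex = 1;
    let minArea = Infinity;
    for (let i = 1; i < remaining.length - 1; i++) {
      const a = effectiveArea(remaining[i - 1], remaining[i],
                              remaining[i + 1]);
      if (a < minArea) { minArea = a; minIndex = i; }
    }
    // Associated area: own effective area, or that of the
    // previously removed point if it was higher.
    lastArea = Math.max(minArea, lastArea);
    ranked.push({ point: remaining[minIndex], area: lastArea });
    remaining.splice(minIndex, 1); // neighbors are re-evaluated
  }
  return ranked;
}
\end{verbatim}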
The algorithms shown here are the most commonly used simplification algorithms in cartography and geographic information systems. One of them stands out in its usage, however: the Douglas-Peucker algorithm. In ``Performance Evaluation of Line Simplification Algorithms for Vector Generalization'' Shi and Cheung conclude that ``the Douglas-Peucker algorithm was the most effective to preserve the shape of the line and the most accurate in terms of position'' \parencite{shi2006performance}. Its complexity, however, is not ideal for web-based applications. The solution is to preprocess the line with the linear-time radial distance algorithm to reduce point clusters, as sketched below. This solution will be further discussed in section \ref{ch:simplify.js}.
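Based on the sketches above, this combination could be expressed as follows (the names are the illustrative ones introduced earlier, not the actual simplify.js interface):
\begin{verbatim}
// Thin out dense clusters in linear time, then apply Douglas-Peucker.
function simplify(points: Point[], tolerance: number): Point[] {
  const preprocessed = radialDistance(points, tolerance);
  return douglasPeucker(preprocessed, tolerance);
}
\end{verbatim}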