Path Optimisations Maths

One of the sub-problems I found with the whiteboard plotter was that it didn't like lots of small movements, and my SVG-to-path process would often generate lots of tiny steps which could be consolidated into single steps without any 'visible' change.

Firstly, let's look at why I want to 'optimise' the path, and what that means. In a previous post, I looked at optimising the pen-up movements to reduce time. That was a lossless optimisation, because the pen-up movements do not affect the final drawing. However, when we look at optimising the path itself, we are going to be removing points, and the fundamental question is why.

When using the plotter, each point (or waypoint) on a path takes time to draw to - in theory, with a perfect plotter, you would only be affected by the total distance travelled, but with my plotter each waypoint adds a delay. If you have a straight line, but that line consists of many waypoints along the path, it can be optimised by removing the internal waypoints to give a faster plot. There is also the problem of 'judder' from starting and stopping the pen many times - a problem that something like grbl has overcome with look-ahead and motion control, allowing it to accelerate and decelerate around waypoints without actually stopping at each one.

The next question is, given that my plotter has inbuilt inaccuracy (the gondola hangs off the surface to stop it rubbing the drawing off the whiteboard), are there other points that might not be exactly on the straight lines, but a little bit off them, that could also be removed with little or no perceivable change to the final picture?

So the first question here, given an example like this, is "Why is that point removed?", followed by "Well, if that point, why not this other one?". Is there some combination of points that should be removed, or do you instead identify the points that need to remain?

I did some preliminary thinking on my own, and then I did what would be called in academia a 'literature review', and once I had found the right keywords for it, I found that the algorithms defined by academic research were very similar to my first-generation implementation. What I also found was that there is an excellent Python library https://pypi.org/project/simplification/ which implements many of these routines already.
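For reference, using that library looks something like the sketch below. The coordinates and the tolerance value are made-up placeholders, and the call shown is my understanding of the library's API (it exposes an RDP-style `simplify_coords` that takes a list of points and a tolerance):

```python
# A minimal sketch of using the 'simplification' library.
from simplification.cutil import simplify_coords

# A wobbly, nearly-straight path with some redundant waypoints (made-up data).
path = [[0.0, 0.0], [1.0, 0.1], [2.0, -0.1], [3.0, 5.0], [4.0, 6.0], [5.0, 7.0]]

# The second argument is the tolerance - roughly the 'channel' width discussed below.
simplified = simplify_coords(path, 1.0)
print(simplified)  # the near-collinear intermediate points should be gone
```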

The first thing that I realised was that there is no correct answer. There are different reasons to optimise, and different metrics by which we might consider an answer 'acceptable'. Different starting images would benefit from different algorithms. Secondly, there is no good way of knowing whether a result is 'acceptable' or not, even given those starting metrics.

My method was a slightly more primitive version of the Ramer-Douglas-Peucker algorithm, which works like this:

Firstly, pick a value for a 'channel' width. This is the smallest offset a feature needs to have in order to be kept.

For any path, begin with the start and end points, then iterate:

  1. Work out which point is 'furthest' off the straight line between the two end points.
  2. If this point is 'outside' the channel, the path is now two paths - from the start to this point, and from this point to the end - and we go back to step 1 for both of these sub-paths.
  3. If it was inside the channel (and feel free to argue whether 'on the line' counts as inside or outside), then that point, and all the other points between the ends, can be removed, as they are too minor a deviation to worry about.
  4. This segment is complete; if there are any more segments outstanding (from the splits in step 2), repeat from step 1 for each of them. (A rough sketch of this in code follows after the list.)
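Here is a rough Python sketch of that recursion. This is my own illustration for this post (not the library's implementation), and the `perpendicular_distance` helper uses the standard cross-product form of the point-to-line distance, which is equivalent to the longer derivation further down:

```python
import math

def perpendicular_distance(pt, start, end):
    """Distance from pt to the infinite line through start and end."""
    (px, py), (ax, ay), (bx, by) = pt, start, end
    dx, dy = bx - ax, by - ay
    length = math.hypot(dx, dy)
    if length == 0:
        return math.hypot(px - ax, py - ay)  # degenerate: start and end coincide
    # Standard cross-product form of the point-to-line distance.
    return abs(dy * (px - ax) - dx * (py - ay)) / length

def rdp(points, channel):
    """Recursive Ramer-Douglas-Peucker: keep only points that stick out of the channel."""
    if len(points) < 3:
        return list(points)
    start, end = points[0], points[-1]
    # Step 1: find the point furthest from the straight line start -> end.
    index, max_dist = 0, 0.0
    for i in range(1, len(points) - 1):
        dist = perpendicular_distance(points[i], start, end)
        if dist > max_dist:
            index, max_dist = i, dist
    if max_dist > channel:
        # Step 2: outside the channel, so split into two sub-paths and recurse.
        left = rdp(points[: index + 1], channel)
        right = rdp(points[index:], channel)
        return left[:-1] + right  # drop the duplicated split point
    # Step 3: everything between the ends is inside the channel, so remove it.
    return [start, end]
```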

What this is doing is identifying 'features', i.e. points which stand out from the straight line, and it doesn't care whether the path is a standalone line or the edge of a shape: it says 'these points here stand out', tries to keep them, and removes the rest.

Whether an optimisation is 'good' or 'bad' depends on what the objective is. Generally the goal is to remove points, but is it to maintain the features? Or the general shape? The line length? Or the area contained (if it's a closed path)? These may not be compatible objectives, and different starting shapes, or algorithms, may lead to different outputs.

As an example, increasing the channel width can give significant returns in the number of points removed without affecting the length too much, up to a certain point beyond which the image becomes unrecognisable.

In this picture, the bee on the right has nearly 35% fewer points than the bee on the left.
Graphing the % change in length compared to the % change in segments.

Using the bee as an example (an SVG that was loaded into our tooling), we found that a value of '15' in the library (no guarantee as to what that means in real units) allowed a large reduction in segments whilst still maintaining good-quality detail on curves. Going beyond this would begin to lose features such as the ends of the antennae and legs.

A better metric was found by also looking at the area curve:

In this, area and length (%, left axis) begin to drop off as the number of segments is reduced (%, right axis). Whilst the length drops off linearly, the area begins to drop off more sharply beyond the 20-to-25 mark, and this ties in with the visual experience.
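To give a feel for how curves like that are produced, here is a rough sketch of the sort of sweep involved: simplify at a range of tolerances and record the percentage of segments, length, and (shoelace) area kept. The helper names and the closed-polygon area assumption are mine, not the actual tooling behind the graphs:

```python
import math

def path_length(points):
    """Total drawn length of the path."""
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

def enclosed_area(points):
    """Shoelace formula, treating the path as if it were closed."""
    area = 0.0
    for (x1, y1), (x2, y2) in zip(points, points[1:] + points[:1]):
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def sweep(points, tolerances, simplify):
    """Print the % of segments, length and area kept at each tolerance."""
    base = (len(points) - 1, path_length(points), enclosed_area(points))
    for tol in tolerances:
        simp = simplify(points, tol)
        kept = (len(simp) - 1, path_length(simp), enclosed_area(simp))
        print(tol, *(100.0 * k / b for k, b in zip(kept, base)))
```

Here `simplify` could be the `rdp` sketch above, or a thin wrapper around the library's `simplify_coords`.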

Getting Nerdy

I felt this to be an interesting maths problem, and, to be honest, I'm really happy that someone else has written the library, but I did do the generic maths to work it out, and because I did, here it is.

Let's give some of these points some generic coordinates - the end points can be A (ax, ay) and B (bx, by). Then the target point can be T (tx, ty), and the point we want to find - the closest point on the line to T - can be U (ux, uy).

First, work out the formula for the line. Given two points, this is reasonably easy - it's of the form

\(y = mx + c\)

The gradient (m) for the line will be the (change in y) / (change in x) therefore

\(m = \frac{by-ay}{bx - ax}\)

Working out the c value is then done by putting the values back in for either the A or B point and a tiny bit of rearranging the following:

\(ay = ax\frac{by-ay}{bx - ax} + c\)
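Rearranged for c, that leaves:

\(c = ay - ax\frac{by-ay}{bx - ax}\)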

The generic formula for *any* line perpendicular to that is

\(y = -\frac{1}{m}x + d\)

and all those perpendicular lines are parallel to each other, so it's only the d value that changes.

\(-\frac{1}{m}\) is just the fraction turned upside down with the sign flipped, so the formula for the perpendicular lines is:

\(y = -\frac{bx-ax}{by-ay}x + d\)

This can now be stored for all the points because, as above, it's only the d value that needs to be calculated.

For each point T (tx, ty) we can now calculate the d value for the perpendicular line it's on, by rearranging and applying the known coordinates of T:

\(ty = -\frac{bx-ax}{by-ay}tx + d\)
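Rearranged for d, that leaves:

\(d = ty + \frac{bx-ax}{by-ay}tx\)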

Now we have the equations for both lines, and we can work out where they cross, i.e. where both lines have equal values of x and y:

\(-ux\frac{bx-ax}{by-ay} + d = ux\frac{by-ay}{bx - ax} + c\) rearranged...

\(-ux\frac{bx -ax}{by-ay} - ux\frac{by-ay}{bx - ax} = c - d\) rearranged...

\(-ux(\frac{bx -ax}{by-ay} + \frac{by-ay}{bx - ax}) = c - d\) rearranged leaving:

\(ux = \frac{d - c}{\frac{bx -ax}{by-ay} + \frac{by-ay}{bx - ax}}\)

At this point everything on the right is a known number, so ux can be calculated. Throwing that back into either of the line equations above (the A-B line is the simpler one), we can then calculate the value of uy:

\(uy = ux\frac{by-ay}{bx - ax} + c\)

Working out the distance of the point T from the line is now relatively simple, because we have a right-angle triangle:

\(length^2 = (ux-tx)^2 + (uy-ty)^2\)
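For completeness, here is the derivation above transcribed more or less directly into Python. It is a sketch only: it deliberately mirrors the m/c/d algebra rather than the usual vector form, so it will divide by zero for perfectly horizontal or vertical A-B lines:

```python
def distance_via_line_equations(a, b, t):
    """Perpendicular distance from T to the line through A and B, following the derivation."""
    (ax, ay), (bx, by), (tx, ty) = a, b, t
    m = (by - ay) / (bx - ax)        # gradient of AB: y = m*x + c
    c = ay - m * ax                  # substitute A back in to get c
    inv = -(bx - ax) / (by - ay)     # gradient of any perpendicular line: -1/m
    d = ty - inv * tx                # perpendicular line through T: y = inv*x + d
    ux = (d - c) / (m - inv)         # x where the two lines cross
    uy = m * ux + c                  # y of the intersection point U
    return ((ux - tx) ** 2 + (uy - ty) ** 2) ** 0.5
```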

As I say, I'm happy there's a library that does this all for you and uses numpy so it's more optimised at looping over the data.

Now there is one edge case that I worked out (and I didn't check whether the library's algorithm implements this): what if a point isn't actually 'between' the two end points? In that case the 'distance off' should be the distance to the nearest end point rather than the distance to the line.
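Handling that edge case is the textbook point-to-segment distance: project T onto A-B, clamp the projection so it stays between the end points, and measure to the clamped point. A sketch of that (again my own version, not checked against what the library does):

```python
import math

def point_to_segment_distance(a, b, t):
    """Distance from T to the finite segment A-B, falling back to the nearer end point."""
    (ax, ay), (bx, by), (tx, ty) = a, b, t
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0:
        return math.hypot(tx - ax, ty - ay)  # A and B coincide
    # Fraction of the way along A-B where the perpendicular from T lands (0 at A, 1 at B).
    s = ((tx - ax) * dx + (ty - ay) * dy) / seg_len_sq
    s = max(0.0, min(1.0, s))  # clamp so the nearest point stays on the segment
    ux, uy = ax + s * dx, ay + s * dy
    return math.hypot(tx - ux, ty - uy)
```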
