Final-year Geometric Modelling Course

Chapter 10

Constraints

If you had this geometric model:

and you changed the position of the projection, you'd probably want this

rather than this

Getting this sort of thing right is the job of geometric constraints.

In the example, if you specify the position of the hole relative to the projection, as opposed to positioning it at an absolute location, you get the right answer.

How can we go about specifying relative geometry?  Let's look at 2D.  Suppose you have a line specified by the
absolute coordinates of the ends:









Then we can position a third point and build a triangle by specifying the lengths of the two new sides

In fact, we get two possible triangles, the second shown by the dotted lines.  This means that the problem is slightly underconstrained.

We can also have a problem that's overconstrained

Here we've specified the lengths of both diagonals and got one wrong, so a corner (not necessarily the top right one) is trying to be in two places at once.  Note that if the second diagonal had also been specified to be the square-root of 2 then the problem would still be overconstrained, but the constraints would be consistent.

Here's another underconstrained problem

The square can freely turn into an infinite number of possible parallelograms.  This is a mechanism (in this case a parallel motion), and underconstrained systems can be a powerful way of specifying how mechanisms will work.

Note that it's not necessary for constraints to be lengths.  Here we fully constrain the last example by adding an extra angle.

A single problem can be both underconstrained and overconstrained at once.

Here the bottom quadrilateral has both diagonals specified and so is overconstrained, and the top one has no diagonal or angles and so is underconstrained.

The most common way of specifying constraints is to specify lengths and angles, but we can have others too.  For example we can say that three points must be collinear,

or that two lines must be parallel.

It is also very useful to be able to specify inequalities as constraints.  For example, in the first triangle we saw, we could fully constrain the problem by adding the extra constraint that the y coordinate of p3 had to be greater than that of p2, thereby getting rid of the dotted triangle as a possibility.

As usual all this works just as well in 3D as it does in 2D, so we can specify a bunch of constraints to tie down the parts of a geometric model relative to each other, as opposed to relative to the coordinate system.  Then when we move one part of the model, the other parts move to keep the thing consistent (if we've got the constraints right).

In 3D we generally need more constraints to fix something than in 2D, and there are more possible types of constraints (for example, we can require a line to be parallel to a plane, and so on).  The reason for this is that three-dimensional objects have more degrees of freedom than two-dimensional ones.  In 2D an object can slide about and rotate through an angle:

It has three degrees of freedom: x, y, and A.  But a three-dimensional object has six - it can translate in x, y, and z, and it can rotate about all three coordinate axes.
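The three 2D degrees of freedom can be seen directly in code.  Here is a small illustrative sketch (the function name place is made up for this example): any rigid placement of a 2D object is just a rotation through A followed by a translation by (x, y), so exactly three numbers fix it - and however you choose them, the object's shape is unchanged.

```python
import math

def place(points, x, y, A):
    """Apply the three 2D degrees of freedom x, y, and A to a list of
    (px, py) points: rotate by A first, then translate by (x, y)."""
    c, s = math.cos(A), math.sin(A)
    return [(x + c*px - s*py, y + s*px + c*py) for px, py in points]

# Move a unit square about: its position and orientation change,
# but all its lengths and angles are preserved.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
moved = place(square, 3.0, 2.0, math.pi / 4)
```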
 

Resolving constraints

If we have some constraints specified, how can the computer work out what geometry they represent?  There are a number of ways of solving such constraint resolution problems, and research on this is still going on.  One of the simplest and most effective methods is to use optimization.

Let's look at the triangle problem again.  What do we know?  We have the x and y coordinates of p1 and p2:  (xp1, yp1) and (xp2, yp2), and we know l13 and l23.   We want to know (xp3, yp3).  Pythagoras tells us

(xp3 - xp1)^2 + (yp3 - yp1)^2 - l13^2 = 0   --------- (1)

and

(xp3 - xp2)^2 + (yp3 - yp2)^2 - l23^2 = 0   --------- (2).

As you might expect, these are just the equations of the circles centred on p1 and p2 that intersect to give the answers (remember there are two) - this is the way you would solve the problem with ruler and compasses.  If we add together the squares of the two left-hand sides, the sum will only be 0 when both equations are satisfied:
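The ruler-and-compasses construction can also be written down directly.  This is an illustrative sketch (the helper name circle_intersections is invented for the example): drop a perpendicular from the line between the centres to the chord joining the crossing points, then use Pythagoras twice.

```python
import math

def circle_intersections(c1, r1, c2, r2):
    """The (up to) two points where the circle of radius r1 about c1
    crosses the circle of radius r2 about c2 -- i.e. the two possible
    positions of p3."""
    (x1, y1), (x2, y2) = c1, c2
    d = math.hypot(x2 - x1, y2 - y1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []                          # the circles don't cross
    a = (r1*r1 - r2*r2 + d*d) / (2*d)      # from c1 to the chord, along c1c2
    h = math.sqrt(max(r1*r1 - a*a, 0.0))   # half the chord length
    xm, ym = x1 + a*(x2 - x1)/d, y1 + a*(y2 - y1)/d
    return [(xm + h*(y2 - y1)/d, ym - h*(x2 - x1)/d),
            (xm - h*(y2 - y1)/d, ym + h*(x2 - x1)/d)]

# Two radius-5 circles whose centres are 6 apart cross at (3, 4) and (3, -4).
p3a, p3b = circle_intersections((0, 0), 5, (6, 0), 5)
```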

F = [(xp3 - xp1)^2 + (yp3 - yp1)^2 - l13^2]^2 + [(xp3 - xp2)^2 + (yp3 - yp2)^2 - l23^2]^2

and we want

F = 0.

We now have a function of two variables, xp3 and yp3.  What we need are the values of those variables that make F equal to 0.  (Why do we have to sum the squares, incidentally, not just the functions?)

Let's take a specific example where p1 = (1,2), p2 = (5,4), l13 = 4, and l23 = 2.5:

F for this problem is

F = [(xp3 - 1)^2 + (yp3 - 2)^2 - 4^2]^2 + [(xp3 - 5)^2 + (yp3 - 4)^2 - 2.5^2]^2

and it looks like this:

As you can see, the value of F just touches 0 at two points - those are the possible positions for p3.  For a constraints problem with lots of constraints we'll have more than two variables, of course - we'll have as many as the problem has degrees of freedom.  But we can still write down an equation for F.  The surface above becomes a hypersurface in a space with one more dimension than the number of degrees of freedom, so it's a bit hard to make a picture of.  But because all the parts of F are squared it keeps the same characteristics - it's always positive except at the solutions, where it's 0.

So.  Resolving constraints reduces to finding values of all the variables (the degrees of freedom of the problem) in F which make F = 0.  This is actually quite easy.  All we do is to start at some point and walk downhill.  We can take the partial derivatives of F with respect to all the variables in it - these tell us which way to walk.  We can then use Newton-Raphson or any number of other optimization algorithms to do the walking.
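Here is a minimal sketch of that downhill walk for the worked example (p1 = (1,2), p2 = (5,4), l13 = 4, l23 = 2.5), using plain gradient descent with a step size that is halved until it actually reduces F.  Which of the two answers it finds depends on where you start walking.

```python
def F(x, y):
    """Sum of squared constraint residuals for the triangle example."""
    a = (x - 1)**2 + (y - 2)**2 - 4**2       # circle about p1, equation (1)
    b = (x - 5)**2 + (y - 4)**2 - 2.5**2     # circle about p2, equation (2)
    return a*a + b*b

def grad_F(x, y):
    """Partial derivatives of F -- these tell us which way is downhill."""
    a = (x - 1)**2 + (y - 2)**2 - 4**2
    b = (x - 5)**2 + (y - 4)**2 - 2.5**2
    return 4*a*(x - 1) + 4*b*(x - 5), 4*a*(y - 2) + 4*b*(y - 4)

def walk_downhill(x, y, tol=1e-14, max_steps=10000):
    """Walk down the F surface until we hit the floor (F = 0)."""
    for _ in range(max_steps):
        f = F(x, y)
        if f < tol:
            break
        gx, gy = grad_F(x, y)
        t = 0.01
        while F(x - t*gx, y - t*gy) >= f and t > 1e-20:
            t *= 0.5                         # shrink the step until F drops
        x, y = x - t*gx, y - t*gy
    return x, y

x3, y3 = walk_downhill(2.0, 6.0)             # start somewhere and walk down
```

The point returned satisfies both distance constraints, so it is one of the two corners of the triangle.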

A problem with this approach is that it only finds one answer, and we may have many (even the triangle had 2).  You can pin things down by adding further constraints like the inequality constraint on yp3 mentioned above.  Inequalities are done with max and min functions, and F becomes:

F = [(xp3 - xp1)^2 + (yp3 - yp1)^2 - l13^2]^2 + [(xp3 - xp2)^2 + (yp3 - yp2)^2 - l23^2]^2 + [max(yp2 - yp3, 0)]^2

Note that we have to square the max function.  If we don't, F has a sharp crease in it where it can't be differentiated, and our optimization methods won't work.  For the example, F now looks like this (rotated so you can see the detail):

The addition of the inequality constraint has lifted the value of F at the position of p3 that we didn't want off the floor, so now we only have one answer - that is, one place where F = 0.
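To see the effect numerically, evaluate the penalized F at the two circle-intersection points of the example (the coordinates below are approximate, found by solving the two circle equations): the wanted answer is still on the floor, while the unwanted one has been lifted well clear of it.

```python
def F_pen(x, y):
    """F for the example with the extra squared inequality term,
    penalising solutions where yp3 < yp2 (= 4)."""
    a = (x - 1)**2 + (y - 2)**2 - 4**2
    b = (x - 5)**2 + (y - 4)**2 - 2.5**2
    return a*a + b*b + max(4 - y, 0.0)**2

# The two intersection points of the circles (to four decimal places):
wanted   = (2.9813, 5.4748)   # has yp3 > yp2, so no penalty
unwanted = (4.9687, 1.5002)   # has yp3 < yp2, so gets penalised

low  = F_pen(*wanted)         # still essentially 0
high = F_pen(*unwanted)       # roughly (4 - 1.5)^2 = 6.25, off the floor
```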

But this F surface shows a possible problem with optimization.  If we start top left and walk downhill, we may get stuck in the local minimum where the old answer that we got rid of was.  We'll know that this has happened as F won't be 0 there.  But we can't escape by walking downhill.  This is a general problem with all optimization methods - how do we find the global optimum rather than just local not-so-good ones?  It has received an enormous amount of research attention, and is really nothing to do with constraints specifically.  If you're interested in one way of solving it, look up simulated annealing.

All this presupposes that F represents a constraints problem that is neither over- nor under-constrained, and so has a unique answer.  If the problem is over-constrained this will become apparent as F will never get down to 0.  Unfortunately, it is often very hard to distinguish between this and the problem of local optima we just saw, because of the difficulties optimization methods have with finding a global optimum.

Even if the method can find the global optimum - that is, the place where F has the lowest value of all - and we find that the problem is over-constrained because that value of F is greater than 0, we can't tell what to do about it.  All we know is the value of F.  What we'd like to know, of course, is which one(s) of our constraints (and there may be scores of them) to throw away.  There has been a lot of work on constraint graphs to try to resolve this.  Here, the links in the constraint problem (and a link can be any constraint, not just a line with a length) join the entities to be constrained.  We end up with a graph of objects linked by constraints and can then analyse this to do things like splitting it into sub-graphs that are over-, under-, and fully-constrained.  With luck, these sub-graphs will be simple enough to give us information on what changes we need to make to fully constrain the whole problem.

What about under-constraints and our optimization technique?  Suppose we throw away one of the length constraints and the inequality constraint from the triangle example and F becomes

F = [(xp3 - 1)^2 + (yp3 - 2)^2 - 4^2]^2

It looks like this:

Instead of having isolated points where F is 0, we have a whole continuous valley.  In this case it represents the point p3 rotating about p1 at a radius of 4.  This is actually rather useful - if we find any point on the bottom of the valley using our optimization method, we can then move along the path where F = 0 and, as we do so, we'll be tracing out what the mechanism (which is what our under-constrained problem represents) does.  Sometimes, if the mechanism is very under-constrained, the path turns into a puddle and there are an infinite number of ways to go at any point (think what happens to the triangle problem if we release p1 so it's rotating about p2 and p3 is in turn rotating about it).
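Here is a sketch of that valley-following idea for the released example (F = [(xp3 - 1)^2 + (yp3 - 2)^2 - 4^2]^2, whose valley is the radius-4 circle about p1): take a small step along the valley floor, then pull the point back down onto F = 0, and repeat.  This step-then-project scheme is one simple choice, not the only way to follow the path.

```python
import math

def g(x, y):
    """The single remaining constraint: distance from p1 = (1, 2) is 4."""
    return (x - 1)**2 + (y - 2)**2 - 4**2

def trace_valley(x, y, step=0.1, n=50):
    """Follow the F = 0 valley: step along its tangent, then take a few
    Newton corrections back onto g = 0 (where F = g^2 is also 0)."""
    path = [(x, y)]
    for _ in range(n):
        gx, gy = 2*(x - 1), 2*(y - 2)                # gradient of g
        norm = math.hypot(gx, gy)
        x, y = x - step*gy/norm, y + step*gx/norm    # step along the tangent
        for _ in range(5):                           # project back to the valley
            gx, gy = 2*(x - 1), 2*(y - 2)
            c = g(x, y) / (gx*gx + gy*gy)
            x, y = x - c*gx, y - c*gy
        path.append((x, y))
    return path

# Start on the circle and trace out the mechanism's motion.
path = trace_valley(5.0, 2.0)
```

Every point on the traced path satisfies the constraint, so the path is exactly the motion of the one-degree-of-freedom mechanism.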

A great deal of work has been done at Bath on constraints, both by my group, and by Tony Medland and Glen Mullineux.


Back to: Final-year Geometric Modelling Course Home Page

© Adrian Bowyer 1999