

             The Mathematics of Three-Dimensional Manipulations
                        and Transformations.
----------------------------------------------------------------------------
Edition 1.2, by Trip V. Reece, June 1992.
e-mail:  tvreece@caticsuf.csufresno.edu
cis: Trip V. Reece	70620,2371
Permission is given to copy and distribute without alteration.

Table of Contents
-----------------

I.    Mapping 3d onto 2d
      A. Shrinkage of X,Y dimensions
      B. Vanishing Point
      C. Aspect Ratio

II.   Translation and Scaling
      A. The easy way
      B. Via Matrices

III.  Rotation
      A. Rotating a 2d coordinate
      B. Rotation around another point
      C. Rotating a 3d coordinate
      D. The Matrix Solution

IV.   Matrices Unveiled
      A. The Affine Transformation Generalized
      B. Choices, Choices

V.    Z-Buffering
      A. Why?
      B. How?

VI.   Techniques
      A. Bresenham's Drawing algorithm



I.  Mapping Three-Dimensional coordinates onto a Two-Dimensional Plane

     Although the title for this entry appears stultifying, indeed formidable,
it amounts to little more than identifying the relationship between distance
and perspective.  The most important concept in 3d display is the apparent
shrinkage of objects as they recede.  The other concept to keep in mind is
the idea of a vanishing point.  It would be easier to simply transcribe the
formula here, but to properly implement a 3-dimensional system, a thorough
knowledge and understanding of EACH aspect of the system is VITAL.
Otherwise, how can you track down slow operations and optimize efficiently?
     The goal of this mapping function is to be able to pass it the
three-dimensional coordinates and have it return the two dimensional
coordinates for the monitor screen.  To understand how the mapping works, try
to visualize a transparent, flat, rigid, plastic sheet with dots marked on it
in a regular pattern, like so:
   . . . . .
   . . . . .
   . . . . .
   . . . . .

     Now, imagine each of these dots to be 3cm from the dot above/below or to
the left or right of it.  Imagine now, if you will, another, perfectly
identical sheet of plastic exactly 3cm from the first, parallel to the first
and further away from you than the first.  Now, imagining how it would look,
you would notice that the X and Y dimensions are contracted with greater
intensity the further away a sheet of plastic is from you.  The marks on the
plastic appear to be "pulled in" to the point directly in front of your
eyeball, contracted >towards< that point.  The first concept, that of
diminishing size with distance, is given by the relationship:

   Size (= Original Size / Distance

     Note: The (= symbol is my ascii approximation for an alpha, the symbol
for >proportional to<.  This, however, does not account for the vanishing
point, the second concept used here.  To get another perspective on how the
vanishing point works, find a flat floor with a regular pattern of parallel
lines on it (they're easy to find) and stand vertical in front of it.  Now,
using a single eye, look towards the horizon, and with two rulers, align
their edges along with the parallel lines on the floor.  Hold the rulers in
front of your face, also vertical, and each should point at a slightly
different angle from the other.  Now, look at the rulers that you have lined
up.  If you follow their edges to the point where they would have intersected
if they were long enough, you (should)/will find that they meet at a point
directly in front of your eye.  This point is called the vanishing point, and
is used by artists to construct realistic perspective drawings.
     Typically, to achieve a more realistic rendering, a cut-off distance is
used to make points that are more than a certain distance away disappear.  To
avoid the magnifying effect that would be evident when rendering an object
that is at a distance of less than one, all objects closer than 1 unit of
distance away are also cut off.  The reason an object so close would become
magnified is that the relationship  Size (= original_size / distance
would make 1/distance larger than 1, therefore multiplying the original size
by a value greater than 1.
     We can now reveal the first mapping function, which, although incomplete,
provides a clear picture of how the mapping works.

                                                      ( Equation 1-1 )
  Precondition:  Z is greater than or equal to 1.
  Value: Shrinkage = ( 1 - scale / Z )
  New X =  Old_X - Shrinkage * Old_X
  New Y =  Old_Y - Shrinkage * Old_Y

     Now, we have declared Shrinkage to be the value of one minus the value
of scale divided by the distance.  Ignore scale for now; the value Z
is the Z coordinate of the three-dimensional point.  If you multiplied out
the expanded form of Shrinkage into the value of New X you would get:
  New X =  Old_X - Old_X * ( 1 - scale/Z )
  which is the same as:
  New X = Old_X - Old_X + Old_X * scale / Z
  which is the same as:
  New X = Old_X / Z  * scale

     Why do we calculate the value of Shrinkage at all?  Because it is used
in both the New X and New Y calculations, and also because the picture this
simplified form would produce does not look "right."  Let's first reason out
the meaning of the "scale" value.  In practice, it looks really weird for an
object to recede so far into the distance that it nearly touches the
vanishing point.  Let's simply say that scale's value is most apparent when
it equals 1.  The theory doesn't yet explain why the value of scale is
needed, so let's examine it a bit more carefully.
     For a realistic image, the vanishing point is usually from one fourth
the total height to one third of the total height down from the top of the
image, and usually in the middle of the left/right length.  I can't explain
why this looks "right," it may look completely wrong to some, but I've heard
no complaints yet.  Now, to revise the formulas:

                                                      ( Equation 1-2 )
   PreCondition:  Z is greater or equal to 1.
   Shrinkage = ( 1 - scale / Z )
   New X = Old_X - Shrinkage * Old_X
   New Y = Old_Y - Shrinkage * ( Old_Y - EyeLevel )

     A few assumptions are made here, first: that the origin is located in
the center of the screen, and that EyeLevel is a positive number.  EyeLevel
tells us how HIGH UP from the origin the vanishing point should be.
Typically, if the vertical resolution of the screen is MaxY then the maximum
positive coordinate would be MaxY / 2 and if this value were divided by 3
then you would have EyeLevel = MaxY / 6.  What the subtraction of EyeLevel
does to the incoming Y coordinates is to translate everything UP.  Yes, it
sounds backwards, but when you subtract inside the parentheses, it becomes
addition outside of the parentheses, so it really is ADDED, but added BEFORE
it is mapped onto the 2-d plane.  Now, why isn't the X coordinate similarly
translated?  Because: in a typical picture, the vanishing point is in the
center of the horizontal direction, which is where the origin is in this
implementation.  If the origin were anywhere else on the screen, it >would<
have to be translated to the center of the screen.
     This should now help to explain how the value of "scale" works.  In
practice, a large value of scale tends to make the horizon much closer.  In
fact, this is exactly what scale does for us.  It narrows the depth of field,
in a sense.  If you increase the value of scale from 1 to a large value,
objects that previously stretched back towards the vanishing point now seem
to be much thinner, and hence more realistic.  The value of scale is
dependent upon the horizontal resolution of the screen.  I have found that
setting scale equal to the horizontal resolution provides reliable results.
Fiddle around with this value, but after a while, you will want the scale to
remain constant in your program.
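     As a concrete illustration, here is a minimal C sketch of Equation 1-2.
The function name and parameter names are only placeholders of mine, not part
of any library, and the aspect-ratio correction discussed next is left out
for the moment.

  /* Map a 3-d point onto the 2-d plane using Equation 1-2.
     Assumes the origin is at the center of the screen and z >= 1. */
  void project_point(double x, double y, double z,
                     double scale, double eye_level,
                     double *screen_x, double *screen_y)
  {
      double shrinkage = 1.0 - scale / z;          /* Shrinkage = (1 - scale/Z) */

      *screen_x = x - shrinkage * x;               /* same as x * scale / z     */
      *screen_y = y - shrinkage * (y - eye_level);
  }
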
     One other nagging little problem with all of this is the dreaded aspect
ratio.  Depending on the graphics mode you are in, if the maximum X
resolution is not the same as the maximum Y resolution then you must, that is
>MUST< scale either the final X or final Y coordinate by this ratio.  You
will want to compute this ratio at the beginning of your program so as to
avoid any unnecessary divides.  This is how to do it:
  Ratio = MaxXresolution / MaxYresolution
  Now, this ratio is usually 1.333333... for modes such as 640x480, or
800x600, or 1024x768, but may change as video displays improve and older
modes, such as 320x200, fall out of use.  To convert your "final" X and Y
coordinates into something that you can plot on the monitor you must multiply
the X coordinate by this ratio before you plot the point.  Alternatively, you
can compute the reciprocal of the ratio, and multiply the "final" Y value by
this "inverse ratio" to scale it that way.  But always remember to avoid
division wherever possible; multiplication of non-integers is much faster
than division by anything, especially zero.  :-)
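     As a minimal C sketch of that correction (put_pixel and the variable
names here are stand-ins of mine for whatever plotting routine and video mode
you actually use):

  extern void put_pixel(int x, int y, int color);  /* provided elsewhere        */

  int    max_x_res = 640, max_y_res = 480;         /* the mode you are in       */
  double ratio     = 640.0 / 480.0;                /* MaxX / MaxY, computed once */

  /* Convert a projected (screen_x, screen_y) pair into a pixel position. */
  void plot_projected(double screen_x, double screen_y, int color)
  {
      int px = (int)(screen_x * ratio) + max_x_res / 2;
      int py = max_y_res / 2 - (int)screen_y;      /* screen Y counts downward  */

      put_pixel(px, py, color);
  }
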
     That's it for mapping! Feel free to use equation 1-2 in any of your
programs!


II.  Translation and Scaling in Three Dimensions

     Starting with a definition: Translating a three-dimensional coordinate
means to simply add a value to each of the X, Y, and Z coordinates.  Scaling
a three-dimensional coordinate means to simply multiply each of the X, Y, and
Z coordinates by a value.  This is irrelevant in dealing with single points,
but when this is performed on a grouping of points, such as those in a cube,
you can translate each coordinate by the same values for X, Y, and Z, to move
the cube about in three dimensions.  Scaling the coordinates definitely
provides for an interesting (i.e. entertaining) effect.  Objects will grow or
shrink depending on the value they are scaled by.  Scaling a single axis can
elongate or shrink an object along that axis.
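     Both operations are a single loop over an object's points.  A short C
sketch, with a Point3 type and function names of my own choosing:

  typedef struct { double x, y, z; } Point3;

  /* Move every point of an object by (dx, dy, dz). */
  void translate(Point3 *pts, int count, double dx, double dy, double dz)
  {
      int i;
      for (i = 0; i < count; i++) {
          pts[i].x += dx;
          pts[i].y += dy;
          pts[i].z += dz;
      }
  }

  /* Scale every point of an object by (sx, sy, sz); equal factors grow or
     shrink the object uniformly, a single axis elongates or flattens it. */
  void scale_points(Point3 *pts, int count, double sx, double sy, double sz)
  {
      int i;
      for (i = 0; i < count; i++) {
          pts[i].x *= sx;
          pts[i].y *= sy;
          pts[i].z *= sz;
      }
  }
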
     Typically, scaling occurs infrequently in a 3d modelling system, whereas
translations are the basis of animation and interactive displays.  Note:
Translation provides for movement ONLY >along< the X, Y, and Z axes.
Rotation is another matter entirely.


III.  Rotation

     Since 3d geometry is really no different from 2d geometry, except with
regards to the complexity of visualizing the math, rotations in 3d are
similarly rooted in 2d geometry.  Warning: If you haven't had trigonometry
experience, you will not understand this- read a book on it.
     Now, everything in digital computers is usually expressed in integers or
in rounded off floating-point numbers.  Coordinates of objects in either 2 or
3 dimensions are usually expressed in Cartesian, or rectangular, coordinates.
This means we have distinct values for X, Y (and/or Z) that are unique to that
point.  In dealing with rectangular coordinates, when we want to rotate a
point about the origin (0,0) in two-dimensions, we would convert the X,Y
coordinate into an (R,Theta) polar coordinate, add the angle we wish to
rotate the point to Theta, and then convert (R,New_Theta) back into
rectangular.  This is honestly one of the quickest methods to rotate a single
point about the origin.  The following identities are useful, indeed
>necessary< to compute the rotated X and Y values.
        ___________
  R = \/ X^2 + Y^2
  or:
  R = ( X^2 + Y^2 ) ^ 0.5

  X = R * Cos (Theta)
  Y = R * Sin (Theta)

  Tangent ( Theta ) = Y / X
  Theta = ArcTangent ( Y / X )

  Sine ( Theta ) = Y / R
  Theta = ArcSin ( Y / R )

  Cosine ( Theta ) = X / R
  Theta = ArcCos ( X / R )

     The value of R remains constant during the rotation, so we first compute
the value of R.

  R = sqrt ( Old_X * Old_X + Old_Y * Old_Y )

     Now, we must compute the OLD value of theta.

  Theta = ArcTan ( Old_Y / Old_X )

     Now, we must compute the new values of X and Y.

  New_X = R * Cos ( Theta + RotateAngle )
  New_Y = R * Sin ( Theta + RotateAngle )

     Ahh, but wait!  This will not work all the time!  Why not?  Because when
Old_X is negative, the value of Theta returned by ArcTangent is only the
reference angle, NOT the angle measured from the positive x axis!  A simple
fix is to correct Theta before using it:

  If Old_X < 0 then Theta = Theta + Pi
  New_X = R * Cos ( Theta + RotateAngle )
  New_Y = R * Sin ( Theta + RotateAngle )

-or- if you are working in degrees:

  If Old_X < 0 then Theta = Theta + 180

     The conversion between radians (it has pi in it) into degrees is to
multiply the radian measure by ( 180 / Pi ) and to convert degrees into
radians, you multiply the degree measure by ( Pi / 180 ).  What this means is
that 2*Pi radians equals 360 degrees, 180 degrees equals Pi radians, and 0
degrees equals 0 radians.  When you have a negative Old_X value, the value of
Theta returned by ArcTangent ( Y / X ) is the angle measured from the left-hand
side of the x axis; this angle is the angle measured DOWN from the left
side, not from the right side of the x axis.  Adding Pi radians (180 degrees)
gives us the angle from the right side-- a positive value of Y will cause the
ArcTangent function to return a negative angle.  This negative angle is the
angle UP from the left side of the x axis; adding 180 degrees gives us the
angle from the right side.  When X >and< Y are negative, the ArcTangent
function gives us a positive angle, the angle DOWN from the left side of the
x axis.  Adding 180 degrees clearly gives us the angle from the right side.
Remember now, that angles are measured CounterClockwise from the right side
of the x axis to the angle Theta.  If we did not know the original
coordinates of X and Y, we would not be able to determine the correct value
of Theta because the ArcTangent function only returns values between -90
degrees and +90 degrees (or -Pi/2 radians to +Pi/2 radians.)
     Suppose we didn't want to rotate the point about the origin, but about
another point on the X,Y plane?  Simplicity itself!  Look at the rotation as
a rotation about the origin, only the origin is displaced by the coordinates
of the point about which you are really rotating (re-read that sentence if
necessary.)  Call that second point the origin, and you're set.  If you make
the point you want to rotate relative to the origin the same as it is
relative to the second point, then translate it back, you've got it.  So, if
we call the point about which we want to rotate Rot_X, Rot_Y then we are left
with:

  Old_X2 = Old_X1 - Rot_X
  Old_Y2 = Old_Y1 - Rot_Y

     Now, perform the rotation with Old_X2 and Old_Y2 in place of Old_X and
Old_Y respectively.  Then, after you finish that, translate it back with:

  New_X2 = New_X1 + Rot_X
  New_Y2 = New_Y1 + Rot_Y

     Now, if that isn't enough obfuscation, I shall write out the complete
rotation with all variables in their proper place:

  R = (  (Old_X - Rot_X)^2 + (Old_Y - Rot_Y)^2 ) ^ 0.5
  Theta = ArcTangent ( (Old_Y - Rot_Y) / (Old_X - Rot_X) )
  If (Old_X - Rot_X) < 0 then Theta = Theta + Pi
  New_X = R * Cos (Theta + RotateAngle) + Rot_X
  New_Y = R * Sin (Theta + RotateAngle) + Rot_Y

     To re-iterate this, RotateAngle is that angle in RADIANS
COUNTERCLOCKWISE that you wish to rotate the original coordinate,
(Old_X,Old_Y), about the point (Rot_X,Rot_Y).
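     Here is the same procedure as a small C function.  I use the standard
library's atan2(), which performs the "add Pi when the X difference is
negative" correction automatically (and avoids a division by zero when that
difference is zero); everything else follows the steps above, and the
function name is my own.

  #include <math.h>

  /* Rotate (old_x, old_y) counterclockwise by angle (in radians)
     about the point (rot_x, rot_y). */
  void rotate_about(double old_x, double old_y,
                    double rot_x, double rot_y, double angle,
                    double *new_x, double *new_y)
  {
      double dx    = old_x - rot_x;
      double dy    = old_y - rot_y;
      double r     = sqrt(dx * dx + dy * dy);
      double theta = atan2(dy, dx);         /* quadrant-correct ArcTangent */

      *new_x = r * cos(theta + angle) + rot_x;
      *new_y = r * sin(theta + angle) + rot_y;
  }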

     To implement this in a 3-dimensional frame, simply choose the axis
about which you wish to rotate a point (or group of points, as in an object)
and substitute the two remaining axes (NOT the axis about which you are
rotating) in place of X and Y.  Now, the Rot_X and Rot_Y coordinates
are the coordinates of the LINE that you are rotating the point(s) about.  If
you wanted to rotate an object about the origin, leave Rot_x and Rot_Y zero.
If you wish to rotate the object about its center but along the Z axis,
calculate the average of all X values and all Y values for that object and
substitute those in for Rot_X and Rot_Y.  Remember that you can rotate
about the X or Y axes as well!  If you wished to rotate an object about the X
axis, around the origin for the X axis you would make Rot_X, Rot_Y equal to
the Z and Y coordinates of the X axis: ZERO.  You would then put the Z and Y
dimension coordinates in place of the values of Old_X and Old_Y (they're just
variable identifiers; mathematically, you may rotate about any axis.)  If you
mix up the order of Z and Y (frankly, I'm not even sure about the "correct"
order.) then the only side-effect will be that your rotations are reversed in
direction.  It's all a matter of perspective, >you< decide what direction you
wish to look head-on for each of the axes.  In my examples, I'm assuming the
Z axis goes into the screen and X and Y are left/right and top/bottom
respectively.
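     For example, to spin an object about the line parallel to the Z axis
through its own center, average the X and Y values and feed X and Y through
the 2-d rotation.  A sketch reusing the Point3 type and rotate_about()
function from the earlier examples:

  /* Rotate an object about the line, parallel to the Z axis, that passes
     through the object's own center, by angle radians. */
  void spin_about_z(Point3 *pts, int count, double angle)
  {
      double cx = 0.0, cy = 0.0;
      int i;

      for (i = 0; i < count; i++) {    /* find the object's center in X and Y */
          cx += pts[i].x;
          cy += pts[i].y;
      }
      cx /= count;
      cy /= count;

      for (i = 0; i < count; i++)      /* Z is the rotation axis, so leave it */
          rotate_about(pts[i].x, pts[i].y, cx, cy, angle,
                       &pts[i].x, &pts[i].y);
  }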

--Matrices
     How is this solved with matrices?  It is not as easy to rotate about
another point with matrices, but rotating about a single axis is relatively
straightforward.  Consider the matrix*matrix problem below:

Rotation about the X axis:

  |   1    0    0    0  |   | x |     |  A  |
  |   0   cT  -sT    0  | * | y |  =  |  B  |
  |   0   sT   cT    0  |   | z |     |  C  |
  |   0    0    0    1  |   | 1 |     |  D  |

[ cT = Cos(Theta), sT = Sin(Theta) ]
The point is written as the column vector (x, y, z, 1); the trailing 1 is
what lets the same 4x4 form carry translations as well.  If you go ahead with
the matrix multiplication (I'll write it all out for those who are rusty this
first time,) then you get a result of:

A = 1*x + 0*y + 0*z + 0*1 = x
B = 0*x + cos(Theta)*y + -sin(Theta)*z + 0*1 = y*cos(Theta) - z*sin(Theta)
C = 0*x + sin(Theta)*y + cos(Theta)*z + 0*1 = y*sin(Theta) + z*cos(Theta)
D = 0*x + 0*y + 0*z + 1*1 = 1

A = x,  B = y*cos(Theta) - z*sin(Theta),  C = y*sin(Theta) + z*cos(Theta),  D = 1
New_X = A
New_Y = B
New_Z = C

This is the rotation of a point (x,y,z) about the X axis by the angle Theta:
the X coordinate is left alone, and Y and Z are rotated within their plane.
For a handful of points you may still prefer the non-matrix calculations, to
eliminate all those unnecessary ???*0 + ???*0 + ???*1 operations.


Rotation about the Y axis:

  |  cT    0   sT    0  |   | x |     |  A  |
  |   0    1    0    0  | * | y |  =  |  B  |
  | -sT    0   cT    0  |   | z |     |  C  |
  |   0    0    0    1  |   | 1 |     |  D  |

In this case:

A = cos(Theta)*x + 0*y + sin(Theta)*z + 0*1 = x*cos(Theta) + z*sin(Theta)
B = 0*x + 1*y + 0*z + 0*1 = y
C = -sin(Theta)*x + 0*y + cos(Theta)*z + 0*1 = -x*sin(Theta) + z*cos(Theta)
D = 0*x + 0*y + 0*z + 1*1 = 1

Here Y is the axis of rotation, so B is simply the unchanged y, while A and C
hold the new X and Z coordinates.


Rotation about the Z axis:

  |  cT  -sT    0    0  |   | x |     |  A  |
  |  sT   cT    0    0  | * | y |  =  |  B  |
  |   0    0    1    0  |   | z |     |  C  |
  |   0    0    0    1  |   | 1 |     |  D  |

In this case:

A = cos(Theta)*x + -sin(Theta)*y + 0*z + 0*1 = x*cos(Theta) - y*sin(Theta)
B = sin(Theta)*x + cos(Theta)*y + 0*z + 0*1 = x*sin(Theta) + y*cos(Theta)
C = 0*x + 0*y + 1*z + 0*1 = z
D = 0*x + 0*y + 0*z + 1*1 = 1

This makes sense: rotating about the z axis should only affect the x and y
coordinates, so A and B hold the new X and Y coordinates, while C contains an
unchanged value of z.  (Whether a positive Theta appears clockwise or
counterclockwise on your screen depends on which way you decided the axes
point, just as with the non-matrix rotations.)  Well, that's all I'll say
about rotation with matrices for now.
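
If you do want the matrix form in code, the whole thing is one small loop.
Here is a generic 4x4-matrix-times-homogeneous-point sketch in C (the names
are mine); the payoff of the matrix form is that several rotations and
translations can be multiplied together into one matrix first, so each point
needs only a single matrix*vector product.

  /* Multiply a 4x4 matrix by the column vector (x, y, z, 1). */
  void transform_point(const double m[4][4],
                       double x, double y, double z, double out[4])
  {
      double v[4] = { x, y, z, 1.0 };
      int row, col;

      for (row = 0; row < 4; row++) {
          out[row] = 0.0;
          for (col = 0; col < 4; col++)
              out[row] += m[row][col] * v[col];
      }
  }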


IV.  Matrices Unveiled

This section is postponed until a later edition, when the generalized affine
transformation and the choices among conventions can be written up properly.


V.   Z-Buffering - The how's and the why's.

     Okay, translation and rotation may be all fine and dandy, but suppose I
want to display my teapot that I've got encoded in a very nice compact data
structure, but it looks transparent.  In fact, it looks like a wireframe,
which it is!
     -How to go about Z-Buffering-
     First, get the memory.  Wherever possible, grab memory.  You will need
for this exercise: enough memory for 3 times the video resolution in pixels,
for example:  320x200 = 64000.  You will need 320x200x3 = 192000 bytes for
this, unless you choose to perform some in-memory compression, an advisable
technique!  This assumes you are using 8-bit color, i.e. 256 colors.
     First, reserve one byte for each pixel on the screen as part of a
"virtual" screen.  This byte will store the color for that pixel once the 3d
mapping and depth checking is complete.  Initialize this array with the
background color of your choice.
     Next, reserve two bytes (using the type integer, for example) for each
pixel on the virtual screen in another array (yes this spans two segments of
memory or more.)  These two bytes store the distance of the pixel in the
range from 0 .. 65535.  This is the Z-Buffer itself.  In actuality, it
shouldn't be called a Z buffer in a perspective rendering modeller.  It is
properly a Distance-Buffer, but in an orthographic environment, the depth is
the same as the Z coordinate, so the name stuck.  Oh well.  Initialize this
array of distances with the value of 65535, or whatever your yon/hither
values dictate.  It is a good idea to set a maximum distance on the objects
you render, as well as a minimum distance- this prevents unnecessary
calculations that would end up in an object rendered that only appeared as
three small pixels not worthy of being called a polygon, or an impossibly
huge triangle (for example) that covers up everything else on the screen.
Use your judgement.
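     A minimal C sketch of the two arrays for a 320x200, 256-color mode
(allocated with malloc here; the names are my own and no compression is
attempted):

  #include <stdlib.h>
  #include <string.h>

  #define SCREEN_W  320
  #define SCREEN_H  200
  #define FAR_AWAY  65535U

  unsigned char  *color_buf;  /* one byte per pixel: the virtual screen       */
  unsigned short *depth_buf;  /* two bytes per pixel: the distance "Z" buffer */

  void init_buffers(unsigned char background)
  {
      long i, n = (long)SCREEN_W * SCREEN_H;

      color_buf = malloc(n * sizeof(unsigned char));
      depth_buf = malloc(n * sizeof(unsigned short));

      memset(color_buf, background, n);   /* fill with the background color      */
      for (i = 0; i < n; i++)
          depth_buf[i] = FAR_AWAY;        /* everything starts as far as it gets */
  }
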
     To draw a single frame now:  Scroll through your list of objects that
are in visible sight.  Calculate the distance of each vertex from your
eyeball using the pythagorean distance formula:

               ______________________
  Distance = \/  X^2  +  Y^2  +  Z^2
  or:
  Distance =  ( X^2 + Y^2 + Z^2 ) ^ 0.5

     Now, calculate the coordinates of the mapped point on the 2d plane for
this vertex.  (This should account for the vanishing point, as well as
perspective.)  Now, compare the value of this distance with the value of the
distance contained in the array of distances at the pixel location you just
found the 2d mapped coordinates of.  If this vertex is at a smaller distance,
then replace the byte in the color array with the color of this pixel, and
store the calculated distance into the "Z-Buffer" array of distances at the
location where you found it to lie on the 2d plane.  The gist of this all is
to compare the distance of a point with the distance found in the array, if
the distance already in the array is less than the distance of the pixel we
are checking, then throw out that new pixel.  A pixel closer to you will
"cover up" a pixel that is farther away.
     To Z-Buffer a polygon, the vertices of the polygon must be projected
onto the Z-Buffer plane, and checked for overlapping.  Even if the vertices
aren't visible, there may be points on the polygon that >are< visible due to
a "hole" in another polygon that just happens to be blocking the rest of the
polygon we are rendering.  Instead of calculating the distance to each pixel
of the polygon (entailing far too many squareds and square root operations to
be feasible on most 80x86's) a form of interpolation may be used provided the
four vertices and their distances; and upon the condition that the polygon is
perfectly flat.  Of course Z-Buffering can also apply to non-polygonal
shapes, however, it radically increases rendering time to use non-polygons.
     After all the shapes/polygons have been rendered onto the virtual
screen, the screen of visible colors must be copied into the video area.
This can be quickened by storing the color array in video memory to begin
with, and paging in the new screens as they become available.  Be sure not to
have the video page you are currently Z-Buffering showing on screen!  That
would give away the secret! :-)
     Another improvement that is made simpler by Z-Buffering is shading
polygons.  Simply calculate the angle between the polygon normal (the
perpendicular to the polygon's surface) and the direction to the light source
(a "global" value) and take the cosine of this angle to get the intensity of
the light reflected off the polygon.  Be sure to multiply the value of the
cosine by the maximum allowable intensity, since cosine returns values
ranging from -1...+1.  Negative values should either be negated to get the
positive value, or they can signify a hidden surface, i.e. a surface that
faces away from the viewer.
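     A small C sketch of that shading rule, assuming the polygon normal and
the direction toward the light are already unit vectors (the dot product of
two unit vectors is exactly the cosine of the angle between them):

  /* Flat-shade a polygon: intensity = max_intensity * cos(angle between the
     unit surface normal (nx,ny,nz) and the unit light direction (lx,ly,lz)). */
  int shade(double nx, double ny, double nz,
            double lx, double ly, double lz, int max_intensity)
  {
      double cosine = nx * lx + ny * ly + nz * lz;  /* dot product = cos(angle) */

      if (cosine < 0.0)      /* facing away from the light (or a hidden surface) */
          cosine = -cosine;  /* negate it, as suggested above, or skip the face  */

      return (int)(cosine * max_intensity);
  }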


VI.  Techniques and Algorithms

     Bresenham's Line Drawing Algorithm
        -The importance of this algorithm is evident when it is implemented
         with ONLY integer math.  This can drastically improve calculation
         times.

     This function takes in the X and Y coordinates of the endpoints of the
line to be drawn.  To make it more versatile, it will also accept the color
of the points it is to plot.

     The whole trick behind how this works is based on the mathematical fact
that multiplication is really only repeated addition, and division is really
only repeated subtraction.  There are two cases for the line drawing
algorithm, 1: a line whose slope has magnitude >= 1, and 2: a line whose
slope has magnitude < 1.  Since slope is (delta Y)/(delta X), this means the
choice of case depends upon whether delta Y or delta X is the larger of the
two.  Let's consider case 2, where delta X exceeds delta Y.

     The algorithm "follows" along the X-axis, incrementing the x value of
     the line it plots by one each time, and it calculates the value of y on
     each run through the x increment loop.  A variable which could be called
     "Cycle" is first initialized to the value of one half of delta X.  This
     variable, Cycle will be incremented by the value of delta Y and then
     checked against the value of delta X.  If cycle is greater than delta X
     then the value of the y-coordinate to be plotted is upped by one
     (assuming the line's slope is positive) and the value of Cycle has the
     value of delta X subtracted from it.  The value of the x coordinate of
     the point where the algorithm plots the new point each run through the
     loop is incremented by one, unconditionally.  Now, another run through
     the loop will give us:  increment Cycle by delta Y, check to see if it's
     greater than delta X, (If YES: Cycle=cycle-delta X  and
     Y_plot=Y_plot+1, else If NO: then don't change Y_plot.)  Now,
     X_plot=X_plot+1.

     Perhaps a table of the values of Cycle, X_plot, & Y_plot as the loop is
executed will better explain what is happening.  The values passed to the
algorithm are called  x1, y1, x2, y2.

delta Y = y2 - y1 = 4     ( arbitrary points chosen for clarity )
delta X = x2 - x1 = 9

  Cycle    |   X_Plot   |   Y_Plot
-----------+------------+------------
         4 |   x1       |    y1
   4+4=  8 |   x1+1     |    y1
8+4=12-9=3 |   x1+2     |    y1+1          ( y1 is incremented because cycle
   3+4=  7 |   x1+3     |    y1+1            overflowed delta X )
7+4=11-9=2 |   x1+4     |    y1+2          ( <-- overflowed again! )
   2+4=  6 |   x1+5     |    y1+2
6+4=10-9=1 |   x1+6     |    y1+3          ( <-- another overflow )
   1+4=  5 |   x1+7     |    y1+3
5+4=9-9= 0 |   x1+8     |    y1+4          ( cycle must never = delta x )
   0+4=  4 |   x1+9     |    y1+4

Here the loop ends because delta X = 9 and x1+9 equals x2.  Actually,
this is leaving out an important detail:  if y2 is below y1 then instead of
adding a positive 1 for each overflow of Cycle, a 1 must be subtracted.  This
can be done easily by setting a variable called Increment equal to positive
one or negative one at the beginning of the routine.

In the case of delta Y being larger than delta X, follow the same procedure
as above, but the loop will follow the line vertically, so replace delta X
with delta Y and vice versa.
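     Here is the whole routine as integer-only C.  The function name is my
own, put_pixel() again stands in for your plotting routine, and the Cycle
bookkeeping follows the description and table above (both cases, both slope
signs):

  #include <stdlib.h>

  extern void put_pixel(int x, int y, int color);  /* provided elsewhere */

  /* Bresenham's line drawing algorithm, using integer math only. */
  void draw_line(int x1, int y1, int x2, int y2, int color)
  {
      int dx = abs(x2 - x1), dy = abs(y2 - y1);
      int step_x = (x2 >= x1) ? 1 : -1;         /* the "Increment" variables  */
      int step_y = (y2 >= y1) ? 1 : -1;
      int x = x1, y = y1, i, cycle;

      if (dx >= dy) {                           /* case 2: |slope| < 1        */
          cycle = dx / 2;                       /* start Cycle at delta X / 2 */
          for (i = 0; i <= dx; i++) {
              put_pixel(x, y, color);
              cycle += dy;
              if (cycle >= dx) {                /* Cycle overflowed delta X   */
                  cycle -= dx;
                  y += step_y;
              }
              x += step_x;
          }
      } else {                                  /* case 1: follow Y instead   */
          cycle = dy / 2;
          for (i = 0; i <= dy; i++) {
              put_pixel(x, y, color);
              cycle += dx;
              if (cycle >= dy) {
                  cycle -= dy;
                  x += step_x;
              }
              y += step_y;
          }
      }
  }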

It can be shown that the pattern that Cycle takes on need not be calculated
beyond a certain point.  As soon as Cycle regains the value it had to begin
with (delta X divided by two, rounded down) it will follow the same pattern
again and again.  I suspect that Cycle will always begin to repeat at or
before delta_X number of positions down the list.  It is possible that the
algorithm could be optimized even further- calculate the first delta_X values
for Cycle, perform a check and identify each position where cycle
>>decreases<<  (i.e. it has overflowed and Y_Plot must be incremented)- place
a one at each point where Y_Plot must be incremented, and then follow along
>this< array and increment Y_Plot (or X_Plot as the case may be) where a one
occurs.  And if you have the Megs and Megs of memory available, you could
precalculate each of these arrays for every possible combination of delta Y
and delta X... Maybe that's going a bit far now-  :-).

Why use the Bresenham line drawing algorithm?  It's blazingly fast.  Consider
the work needed to divide out half a zillion line-slope calculations, in
real time.  This routine only requires addition, subtraction, and a check.
The Intel processors have been optimized to all hell for wicked speed at
addition/subtraction/comparisons, whereas division will ALWAYS be slow slow
slow...

... More algorithms to come! ...

Note:  I intend this article to be a growing text to help decipher the cryptic
       world of 3d, as I learn more I will add to this list.  I also know that
       I, like most people of the younger generation, am prone to errors.
       Tell me about them! I want to learn more about this, can't find or
       afford most of the books on this subject, so anything you can tell me
       will be appreciated! and included if it makes any sort of sense!


-Trip V. Reece
Any comments, suggestions, errata reports please email me.
e-mail:  tvreece@caticsuf.csufresno.edu
