Tutorial on Computer Animation

Kerem Caliskan


This tutorial is heavily based on the book "Computer Animation: Algorithms & Techniques" by Rick Parent and its accompanying slides. Definitions are also drawn from the Oxford Dictionary, Wikipedia, and various chapter-related tutorials and presentations.

This is the second part of my tutorial, so if you want to start from the beginning, please start with this link first.

This tutorial focuses on explaining computer animation and the technical and algorithmic infrastructure it builds on. It will cover the main topics in computer animation, such as:

– Keyframing, story-boarding,

– Kinematics, physically based dynamics modeling,

– Motion capture,

– Scene composition, lighting, and sound track generation

This tutorial will teach the readers about current techniques in computer animation. By the end of the tutorial, the reader should:

– have learned the computational methods for modeling of motions in the physical and virtual world,

– be able to understand how to storyboard, light, compose, and render an animated sequence,

– and be able to read and critically evaluate the current literature in computer animation.

History of Computer Animation

• 1887: Goodwin invented nitrate celluloid film

• 1892: Émile Reynaud presented the Théâtre Optique, based on his Praxinoscope


• 1893: Edison invented the Kinetoscope

– Only one viewer at a time

• 1894: The Lumière brothers invented the cinematograph

– Camera + Projector + Printer


• 1896: Georges Méliès – live-action films using the replacement (stop-trick) technique

• 1906: J. Stuart Blackton, “Humorous Phases of Funny Faces”

– The first animation recorded frame by frame


• 1926: “The Adventures of Prince Achmed”

– Lotte Reiniger, 1st feature animation

– Silhouette animation


• 1885: CRT (Cathode Ray Tube)

• 1960: William Fetter (of Boeing) coins the term “Computer Graphics”

• 1961: John Whitney, title sequence for Alfred Hitchcock’s Vertigo

• 1961: Spacewar!, the first video game

• 1963: Ivan Sutherland, Sketchpad

• 1974: z-buffer, Ed Catmull

• 1975: Phong shading

• 1982: Tron, 1st feature film to make extensive use of CG

• 1986: Luxo Jr. nominated for Oscar

• 1995: Toy Story, 1st full CG feature film

• 2009: Avatar, 3D Computer Animation + Computer Vision

Technical Preliminaries & Introduction to Keyframing

Rendering Pipeline :



  • Modeling transformation: all vertices of the scene are expressed in a shared 3-D “world” coordinate system.
  • Lighting: vertices are shaded according to the lighting model.
  • Viewing transformation: scene vertices are expressed in the 3-D “view” or “camera” coordinate system.
  • Clipping: exactly those vertices and portions of polygons inside the view frustum are kept.


Ray Casting Display Pipeline :


• Animation is typically produced by the following:

– Transforming the observer position and orientation in world space over time.

– Modifying the position and orientation of objects in world space over time.

– Modifying the shape of objects over time.

– Modifying display attributes of objects over time.

Transformations :

–         In many applications it is necessary to modify the position, orientation or size of a 2D object.

–         The most widely used transformations are:

  • Translation: move one object to another position
  • Rotation: rotate the object around the origin
  • Scaling: change the size of the object

In order to transform an object, we transform the points that define it.  For instance:

  • Polygon: its vertices.
  • Circle: its center and, perhaps, its radius

Translation: Can be expressed as components or as addition of column vectors



Rotation around axes : A point is rotated around the origin. Rotation is through an angle θ.


Rotating around X Axis :


Scaling : Changing the size of an object.



Composition of transformations:

A series of transformations can be multiplied to form a composite transformation.


Example: Rotate an object around an arbitrary point.

  1. Translate the object so that P lies at the origin
  2. Rotate the object about the origin
  3. Translate the object back so that P returns to its original position
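As a concrete sketch of this composition in 2-D with homogeneous 3×3 matrices (the function names are illustrative, not from the book):

```python
import math

def mat_mul(a, b):
    """Multiply two 3x3 matrices (homogeneous 2-D transforms)."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def translate(tx, ty):
    return [[1.0, 0.0, tx], [0.0, 1.0, ty], [0.0, 0.0, 1.0]]

def rotate(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def rotate_about(px, py, theta):
    """Compose T(P) * R(theta) * T(-P): rotation about an arbitrary point P."""
    return mat_mul(translate(px, py),
                   mat_mul(rotate(theta), translate(-px, -py)))

def apply(m, x, y):
    """Transform a point by a homogeneous 3x3 matrix."""
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])
```

For example, rotating the point (2, 1) by 90 degrees about P = (1, 1) moves it to (1, 2): the point is first translated so P sits at the origin, rotated, and translated back.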


Round-off Errors :

Once a transformation matrix has been formed for an object, the object is transformed simply by multiplying all of its object-space points by the object-to-world-space transformation matrix. In animation, an object’s points must be transformed repeatedly over time, and incrementally transforming the world-space points usually leads to the accumulation of round-off errors. For this reason, it is almost always better to modify the object-to-world transformation itself and reapply it to the original object-space points than to repeatedly transform the world-space coordinates. To further transform an object that already has a transformation matrix associated with it, one forms the new transformation matrix and multiplies the existing matrix by it on the left (pre-multiplication), producing a new composite matrix. However, round-off errors can also accumulate when a transformation matrix is repeatedly modified this way.

Consider the case of the moon orbiting the earth. For the sake of simplicity, assume that the center of the earth is at the origin and that the moon data is initially defined with its center at the origin as well. Three approaches could be taken; each illustrates a different effect of round-off error.

First, the moon data could be transformed out to its orbit position, say (r, 0, 0). For each frame of animation, we could apply a delta y-axis rotation matrix to the moon’s points, where each delta represents the angle it moves through in one frame. Round-off errors will accumulate in the world-space object points: points that began coplanar will no longer be coplanar. This can have undesirable effects, especially in display algorithms that linearly interpolate values along a surface.

The second approach is to build a y-axis transformation matrix that takes the object-space points into their current world-space positions. For each frame, we concatenate a delta y-axis rotation matrix with the current transformation matrix and then apply the resulting matrix to the moon’s points. Round-off error will accumulate in the transformation matrix. Over time, the matrix will deviate from representing a rigid transformation: shearing effects will begin to creep in, and angles will cease to be preserved.

The third approach is to add the delta value to an accumulating angle variable and then build the y-axis rotation matrix from that angle parameter. This is then concatenated with the x-axis translation matrix, and the resulting matrix is applied to the original moon points in object space. In this case, round-off error accumulates in the angle variable, so the angle of rotation may begin to deviate from what is desired. This may have undesirable effects when trying to coordinate motions, but the transformation matrix, which is built anew every frame, will not itself accumulate any error. The transformation always represents a valid rigid transformation, with planarity and angles preserved.
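A small Python experiment illustrates the difference between concatenating matrices and rebuilding them from an accumulated angle. To make the effect visible in a few lines, the stored per-frame delta matrix is deliberately rounded to four decimal places, exaggerating limited precision; the determinant of a rigid rotation should stay exactly 1.

```python
import math

def rot2(theta):
    """2x2 rotation matrix."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def mul2(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

delta = 2 * math.pi / 360          # one degree of orbit per frame
frames = 3600                      # ten full revolutions

# Store the delta matrix at (artificially) limited precision.
r_delta = [[round(v, 4) for v in row] for row in rot2(delta)]

# Second approach: concatenate the delta matrix every frame.
# Error accumulates in the matrix; it drifts away from a rigid transform.
m_concat = [[1.0, 0.0], [0.0, 1.0]]
for _ in range(frames):
    m_concat = mul2(r_delta, m_concat)

# Third approach: accumulate the angle, rebuild the matrix each frame.
# The matrix is always a valid rotation; only the angle itself can drift.
angle = 0.0
for _ in range(frames):
    angle += delta
m_rebuilt = rot2(angle)

drift_concat = abs(det2(m_concat) - 1.0)    # large: shearing has crept in
drift_rebuilt = abs(det2(m_rebuilt) - 1.0)  # tiny: still a rigid rotation
```

After 3600 concatenations the determinant of the accumulated matrix has drifted far from 1, while the rebuilt matrix remains an essentially perfect rotation.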

Orientation Representation

An important factor in choosing an orientation representation is whether it allows interpolation between key frames.

Transformation Matrix Representation :


Fixed Angle Representation : Rotate around global axes.


Euler Angle Representation : Rotate around local axes.


Angle and axis Representation:


Quaternion Representation :

Quaternions are used frequently in computer animation. They are the most robust representation of orientation, so we will spend some time on them.

Quaternions are similar to the axis-angle representation: they represent orientation with four values (a scalar and a 3D vector), written [s,x,y,z] or [s,v]. Unit quaternions (q / ||q||) provide a convenient mathematical notation for representing orientations and rotations of objects in three dimensions. Compared to Euler angles they are simpler to compose and avoid the problem of gimbal lock; compared to rotation matrices they are more numerically stable and may be more efficient.

Representing rotation using quaternions :


Basic Quaternion Math: [s1,v1]+[s2,v2] = [s1+s2,v1+v2]


Interpolation of Rotations using  Quaternion Representation :
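As a sketch of how this works (function names are illustrative, not from the book), the quaternion product, rotation of a point by a unit quaternion, and spherical linear interpolation (slerp) can be written as:

```python
import math

def q_mult(q1, q2):
    """Quaternion product: [s1,v1][s2,v2] = [s1*s2 - v1.v2, s1*v2 + s2*v1 + v1 x v2]."""
    s1, x1, y1, z1 = q1
    s2, x2, y2, z2 = q2
    return (s1*s2 - x1*x2 - y1*y2 - z1*z2,
            s1*x2 + s2*x1 + y1*z2 - z1*y2,
            s1*y2 + s2*y1 + z1*x2 - x1*z2,
            s1*z2 + s2*z1 + x1*y2 - y1*x2)

def q_from_axis_angle(axis, theta):
    """Unit quaternion for a rotation of theta about the given axis."""
    ax, ay, az = axis
    n = math.sqrt(ax*ax + ay*ay + az*az)
    s = math.sin(theta / 2) / n
    return (math.cos(theta / 2), ax*s, ay*s, az*s)

def q_rotate(q, p):
    """Rotate point p by unit quaternion q: q [0,p] q^-1."""
    s, x, y, z = q
    r = q_mult(q_mult(q, (0.0,) + tuple(p)), (s, -x, -y, -z))
    return r[1:]

def slerp(q1, q2, t):
    """Spherical linear interpolation between two unit quaternions."""
    dot = sum(a * b for a, b in zip(q1, q2))
    if dot < 0:                          # take the shorter arc
        q2, dot = tuple(-c for c in q2), -dot
    theta = math.acos(min(dot, 1.0))
    if theta < 1e-9:                     # nearly identical: just return q1
        return q1
    w1 = math.sin((1 - t) * theta) / math.sin(theta)
    w2 = math.sin(t * theta) / math.sin(theta)
    return tuple(w1 * a + w2 * b for a, b in zip(q1, q2))
```

Slerp interpolates at constant angular velocity along the great arc between the two orientations, which is exactly the property wanted for in-betweening key-frame rotations.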





Interpolation can be explained as generating the in-between values (like connecting the dots). You should maintain the desired control of the interpolation over time and velocity.


Interpolate position of a point in space.

• We can interpolate any changeable value like:

– position

– orientation

– Color (sunrise)

– Intensity (dimmed lights)

– camera focal length (zoom)

• Non-trivial –

– appropriate parameterization of position,

– appropriate interpolating function,

Interpolation function – points to consider:

• How smooth the resulting function needs to be (i.e. continuity)

• Computational expense – order of the interpolating polynomial

• Local or global control of the interpolating function


Interpolation vs Approximation

• The curve passes through the actual values at the key frames (interpolation)

• The control points shape the interpolating function but do not represent actual curve values (approximation)


Spline equations:

Cubic curve equations :


• General form: x(u) = U · Ms · Mg, where U = [u^3 u^2 u 1]

Ms: spline transformation (blending functions)

Mg: geometric constraints (control points)

Natural Cubic Splines:

Between each pair of control points there is a cubic curve. To make sure the curves join together smoothly, the first and second derivatives at the end of one curve must equal the first and second derivatives at the start of the next. Computing the natural cubic spline essentially involves solving a system of simultaneous equations to make sure this happens.

Unfortunately, while the curve is mathematically smooth, it can wriggle in quite unexpected ways. It is also computationally problematic: solving a system of 4n equations is expensive. And the changes are not local: a change in one point may affect the whole curve.

Hermite Interpolation :

Hermite curves are very easy to calculate but also very powerful. They are used to smoothly interpolate between key points (like object movement in keyframe animation or camera control). Understanding the mathematical background of Hermite curves will help you understand the entire family of splines. Maybe you have some experience with 3D programming and have already used them without knowing it (the so-called KB-splines – Kochanek–Bartels curves with control over tension, continuity, and bias – are just a special form of Hermite curves).

To calculate a hermite curve you need the following vectors:

  • P1: the start point of the curve
  • T1: the tangent (direction and speed) with which the curve leaves the start point
  • P2: the end point of the curve
  • T2: the tangent (direction and speed) with which the curve meets the end point
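Given those four vectors, evaluating the curve is a direct application of the four Hermite blending functions. A minimal sketch, with points and tangents as tuples:

```python
def hermite(p1, t1, p2, t2, u):
    """Point on the cubic Hermite curve at u in [0, 1], blending the two
    endpoints and the two tangents with the Hermite basis functions."""
    h1 = 2*u**3 - 3*u**2 + 1        # blends P1
    h2 = -2*u**3 + 3*u**2           # blends P2
    h3 = u**3 - 2*u**2 + u          # blends T1
    h4 = u**3 - u**2                # blends T2
    return tuple(h1*a + h2*b + h3*c + h4*d
                 for a, b, c, d in zip(p1, p2, t1, t2))
```

At u = 0 only h1 is nonzero, so the curve starts exactly at P1; at u = 1 only h2 is nonzero, so it ends exactly at P2, with the tangents shaping everything in between.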


Catmull-Rom Splines :

Catmull-Rom splines are a family of cubic interpolating splines formulated such that the tangent at each point pi is calculated using the previous and next points on the spline: ti = (pi+1 − pi−1) / 2. The geometry matrix is given by:


Unlike a natural cubic spline, a Catmull-Rom spline has local control. This means that modifying one control point only affects the part of the curve near that control point.
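A sketch of one Catmull-Rom segment built on the Hermite form, with tangents taken from the neighbouring points as above:

```python
def catmull_rom(p0, p1, p2, p3, u):
    """Point on the Catmull-Rom segment from p1 to p2 (u in [0, 1]).
    Tangents come from the neighbours: t1 = (p2 - p0)/2, t2 = (p3 - p1)/2."""
    h1 = 2*u**3 - 3*u**2 + 1        # Hermite basis functions
    h2 = -2*u**3 + 3*u**2
    h3 = u**3 - 2*u**2 + u
    h4 = u**3 - u**2
    out = []
    for a, b, c, d in zip(p0, p1, p2, p3):
        t1 = 0.5 * (c - a)
        t2 = 0.5 * (d - b)
        out.append(h1*b + h2*c + h3*t1 + h4*t2)
    return tuple(out)
```

Because each segment only looks at its four nearest control points, moving one control point affects at most the four segments that reference it – the local control mentioned above.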


Bezier Curves :

Bezier curves are used in computer graphics to produce curves which appear reasonably smooth at all scales (as opposed to polygonal lines, which will not scale nicely). Mathematically, they are a special case of cubic Hermite interpolation (whereas polygonal lines use linear interpolation). What this means is that curves are constructed as a sequence of cubic segments, rather than linear ones. But whereas Hermite interpolating polynomials are constructed in terms of derivatives at endpoints, Bezier curves use a construction due to Sergei Bernstein, in which the interpolating polynomials depend on certain control points. The mathematics of these curves is classical, but it was the French automobile engineer Pierre Bézier who introduced their use in computer graphics.

Bezier curves are more useful than any other type  mentioned so far; however, they still do not achieve much local control. Increasing the number of control points does lead to slightly more complex curves, but as you can see from the following diagram, the detail suffers due to the nature of blending all the curve points together.


Properties of Bezier Curves:

• Passes through start and end points

• Lies in the convex hull

Joining Bezier Curves:

• Start and end points are the same (C0)

• Choose the control points adjacent to the shared endpoint to be collinear with it (C1)

C2 continuity is not generally used with cubic Bézier curves, because the constraints from the current segment would fix the first three control points of the next one.

Controlling the speed:

• Using an interpolating piecewise spline determine the piecewise P(u) equations between control points

• Determine the arc-length of the segments by sampling u

• Compute the average velocity of the object between intervals by arc-length/time

• Move at constant speeds (average velocity) between intervals.
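The steps above can be sketched as follows. The path P(u) here is an assumed example with deliberately non-uniform parameterization (the object covers more distance per unit u as u grows), so that equal steps in u would not give equal steps in distance:

```python
import math

def point(u):
    """Illustrative path P(u): a straight line traversed non-uniformly."""
    return (u * u, 0.0)

def arc_length_table(n=1000):
    """Sample u and accumulate chord lengths into a (u, arc-length) table."""
    table = [(0.0, 0.0)]
    prev, s = point(0.0), 0.0
    for i in range(1, n + 1):
        u = i / n
        p = point(u)
        s += math.dist(prev, p)
        table.append((u, s))
        prev = p
    return table

def u_at_length(table, s):
    """Invert the table: find the u whose arc length equals s."""
    for (u0, s0), (u1, s1) in zip(table, table[1:]):
        if s0 <= s <= s1:
            t = 0.0 if s1 == s0 else (s - s0) / (s1 - s0)
            return u0 + t * (u1 - u0)
    return table[-1][0]

table = arc_length_table()
total = table[-1][1]
# Constant average speed: step by equal arc lengths, not by equal u.
us = [u_at_length(table, total * k / 4) for k in range(5)]
```

For this path the arc length is s(u) = u², so the constant-speed midpoint lands at u = √0.5 ≈ 0.707 rather than u = 0.5.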

Path following:

• Apart from the position of the object, the orientation of the object also has to be considered.


Frenet frame:

If an object is moving along a path, the orientation can be made directly dependent on the properties of the curve (i.e., tangent and curvature).


Animation languages:

• Abilities:

– I/O operations for graphical objects

– Support hierarchical composition of objects

– A time variable

–  Interpolation functions

– Transformations

– Rendering-parameters

– Camera attributes

– Producing, viewing, and storing one or more frames of animation

• A program written in an animation language is referred to as a script.

Shape Deformation, Forward / Inverse Kinematics

Warping an Object

• Displace one vertex of an object

– And as a consequence make neighbor vertices move with the displaced vertex



• Use an attenuation function to determine the amount of displacement for the other vertices:


2D Grid Deformation

  • 1974 film “Hunger”
  • Draw object on grid
  • Deform grid points
  • Use bilinear interpolation to recompute vertex positions on deformed grid

• Initially construct a 2D grid around the object as a local coordinate system aligned with the global axes

– Global to local transformation can be done by simple translate and scale


• Then distort the grid by moving the vertices of the grid.

– This will distort the local coordinate system and hence the vertices of the object will be relocated in the global coordinate system

The location of a vertex is found using bilinear interpolation: first interpolate between two grid vertices, then between the other two, and finally interpolate between the two resulting points to find the vertex position.
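A minimal sketch of that bilinear lookup for one grid cell (corner naming is illustrative):

```python
def bilerp(p00, p10, p01, p11, s, t):
    """Bilinear interpolation inside one grid cell: interpolate along the
    bottom and top edges first, then between those two points."""
    bottom = [p00[i] + s * (p10[i] - p00[i]) for i in range(2)]
    top    = [p01[i] + s * (p11[i] - p01[i]) for i in range(2)]
    return tuple(bottom[i] + t * (top[i] - bottom[i]) for i in range(2))

# A vertex sitting at local coordinates (s, t) = (0.5, 0.5) of a unit cell:
undeformed = bilerp((0, 0), (1, 0), (0, 1), (1, 1), 0.5, 0.5)
# Move the top-right grid point; the vertex is dragged along with the grid:
deformed = bilerp((0, 0), (1, 0), (0, 1), (1.5, 1.5), 0.5, 0.5)
```

Moving one grid point drags every vertex whose cell references it, which is exactly how distorting the grid distorts the embedded object.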


Global deformations

  • Alan Barr, SIGGRAPH ’84
  • A 3×3 transformation matrix affects all vertices

– P′ = M(P) · P

  • M(P) can taper, twist, bend…

Global tapering:


Twist about an axis:


Free-form Deformation (FFD)

  • Sederberg, SIGGRAPH ’86
  • Position geometric object in local coordinate space
  • Build local coordinate representation
  • Deform local coordinate space and thus deform geometry
  • Similar to 2-D grid deformation
  • Define 3-D lattice surrounding geometry
  • Move grid points of lattice and deform geometry accordingly
  • Local coordinate system is initially defined by three (perhaps non orthogonal) vectors


Animation using FFDs:


Hierarchical Kinematic Modeling

Some definitions

Articulated objects: Hierarchical objects connected end to end to form multi body jointed chains

Manipulators: A sequence of objects connected in a chain by joints. Example: a robot arm.

• The rigid objects between joints are called links. The last link in a series of links is called the end effector (e.g. the hand of a robot arm).

• The local coordinate system associated with each joint is referred to as the frame.

Kinematics: Studying the movement of objects without considering the forces involved in producing the movement.

Dynamics: Studying the underlying forces that produce the movement.

• Hierarchical modeling: Organizing objects in a treelike structure and specifying movement parameters between their components.


Simple vs. Complex Joints

• Joints that allow motion in one direction have one degree of freedom.

• Complex joints have more degrees of freedom, and they can be represented as a series of simple joints connected to each other by zero-length links.

– Examples:

• Ball-and-socket joint (3 DOF)


• Planar joint (2 DOF)


Hierarchical Models

• Represented as trees whose nodes are connected by arcs.

• The highest node of the tree is called the root node; it corresponds to the root object, whose position is known in the global coordinate system.

• The position of an intermediate node in the tree can be found from the position of the root node and the transformations along the path from the root to that node.

• Nodes represent object parts (i.e., links)

• Arcs represent joints


Information stored in nodes and arcs :


Positions of vertices

• Are found by traversing the tree from top to bottom and concatenating the transformations at the joints.


Rotations at the joints and appendages:


Forward Kinematics

• Finding the location (and orientation) of the end effector by applying all the joint transformations sequentially. All the intermediate joint angles are given by the user.

• A depth-first traversal of the tree representation is used, with a stack storing intermediate compositions of transformation matrices. OpenGL’s glPushMatrix / glPopMatrix functions can easily be used to accomplish this.
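For a single unbranched chain the traversal degenerates to a loop that concatenates one rotation and one translation per link. A minimal 2-D sketch (function names are illustrative):

```python
import math

def rot(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def trans(tx, ty):
    return [[1.0, 0.0, tx], [0.0, 1.0, ty], [0.0, 0.0, 1.0]]

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def fk(angles, lengths):
    """Walk down the chain, concatenating each joint's rotation and each
    link's translation; the matrix's last column is the end effector."""
    m = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    for theta, length in zip(angles, lengths):
        m = mul(m, rot(theta))          # rotate at the joint
        m = mul(m, trans(length, 0.0))  # move along the link
    return m[0][2], m[1][2]
```

For a branching figure, the same composition happens during the depth-first traversal, with the current matrix pushed before descending into a branch and popped when returning – the role glPushMatrix / glPopMatrix play.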


Local Coordinate Frames

• Denavit–Hartenberg notation from robotics.


Relating two successive frames :




Inverse Kinematics

Forward kinematics involves a transformation from joint angles to 3D positions. Given some articulated figure, we can describe the figure by relating each joint angle to the limb it is attached to. Given the angles in question, we can straightforwardly calculate the end points of the limbs using coordinate transforms. Inverse kinematics asks, given a desired end point or position for an articulated figure, can we calculate the angles?

Forward kinematics is simple, because a set of joint angles specifies exactly one position. Inverse kinematics, however, is difficult: most real systems are underconstrained, so for a given goal position, there could be infinite solutions (i.e. many different joint configurations could lead to the same endpoint). The field of robotics has developed many inverse kinematics systems which, due to their constraints, have closed-form solutions. The inverse kinematics problem for computer animation is much harder because it must work for arbitrary figures, like human arms or legs.

Analytic computation for simple cases is possible




The Jacobian

• In many complex joints however, such analytic solutions are not possible.

• Therefore we use the Jacobian matrix to find the correct joint angle increments that will lead us to the final end effectors configuration

• The Jacobian matrix is a matrix of partial derivatives

– Each entry shows how much a change in an input parameter affects an output parameter



Computing the Jacobian:
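For a simple planar chain, the Jacobian can be estimated by finite differences and used with the Jacobian-transpose update, one common and simple choice (the variable names and the two-link setup are illustrative, not the book's exact formulation):

```python
import math

def fk(angles, lengths):
    """Forward kinematics of a planar chain: end-effector position."""
    x = y = total = 0.0
    for theta, length in zip(angles, lengths):
        total += theta
        x += length * math.cos(total)
        y += length * math.sin(total)
    return x, y

def ik_step(angles, lengths, target, alpha=0.1, eps=1e-5):
    """One Jacobian-transpose IK step. Each Jacobian entry (how a joint
    angle change moves the end effector) is estimated by finite differences,
    and the joint angles move by alpha * J^T * error."""
    x, y = fk(angles, lengths)
    ex, ey = target[0] - x, target[1] - y
    new_angles = list(angles)
    for i in range(len(angles)):
        bumped = list(angles)
        bumped[i] += eps
        bx, by = fk(bumped, lengths)
        dx, dy = (bx - x) / eps, (by - y) / eps    # column i of the Jacobian
        new_angles[i] += alpha * (dx * ex + dy * ey)  # (J^T * error)_i
    return new_angles

# Iterate the small steps until the end effector reaches the goal.
angles, lengths, target = [0.2, 1.2], [1.0, 1.0], (1.0, 1.0)
for _ in range(1000):
    angles = ik_step(angles, lengths, target)
```

Each step moves the end effector a little toward the goal; repeating the step converges for reachable targets away from singular configurations.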


Rigid Body Simulation

• Reaction of rigid bodies to forces such as:

– Gravity

– Viscosity

– Friction

– Forces from collisions


• When applied to objects, these forces induce linear and angular accelerations

The hardest part of rigid body simulation is modeling the interactions that occur between bodies in contact. The most commonly used approaches are penalty methods, followed by analytic methods. Both of these approaches are constraint-based, meaning that constraint forces at the contact points are continually computed and applied to determine the accelerations of the bodies. Impulse-based simulation is a departure from these approaches, in that there are no explicit constraints to be maintained at contact points. Rather, all contact interactions between bodies are effected through collisions; rolling, sliding, resting, and colliding contact are all modeled in this way. The approach has several advantages, including simplicity, robustness, parallelizability, and the ability to efficiently simulate classes of systems that are difficult to simulate using constraint-based methods.

Rigid Body Simulation Cycle


The difference from standard physics is that computer animation studies the motion of objects at discrete time steps, together with significant events and their aftermath.


• In real life, the forces change as the rigid body changes its position, orientation, and velocity over the time.

• It is not the best approach to use the acceleration at the beginning of the time interval to compute the velocity at the end (known as Euler integration).



RK2 Method


First compute the derivative of y(t) at t0 (call it k1). Use k1 to get an initial estimate of y(t0+h), labelled y*(t0+h). From y*(t0+h) we can estimate the derivative of y(t) at t0+h, which we call k2. We then use the average of these two derivatives, k3, to arrive at our final estimate of y(t0+h).
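The steps above translate directly into code. A minimal sketch (this RK2 variant is often called Heun's method), tested against y′ = y, whose exact solution at t = 1 is e ≈ 2.71828:

```python
import math

def heun_step(f, t, y, h):
    """One RK2 (Heun) step: average the slopes at both ends of the interval."""
    k1 = f(t, y)                      # slope at t
    y_star = y + h * k1               # Euler predictor for y(t + h)
    k2 = f(t + h, y_star)             # slope at t + h using the predictor
    return y + h * 0.5 * (k1 + k2)    # corrected estimate

def integrate(f, y0, t0, t1, n):
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = heun_step(f, t, y, h)
        t += h
    return y

# y' = y with y(0) = 1, integrated to t = 1; exact answer is e.
rk2_result = integrate(lambda t, y: y, 1.0, 0.0, 1.0, 100)

# Plain Euler with the same step size, for comparison.
euler_result, h = 1.0, 0.01
for _ in range(100):
    euler_result += h * euler_result
```

With 100 steps, the RK2 estimate is within about 5e-5 of e, while plain Euler is off by roughly 0.013 – the payoff for evaluating the derivative at both ends of the interval.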

Rotational Motion

• For non-point objects, the mass extent of the object should be considered.

Angular velocity: The rate at which the object is rotating irrespective of its linear velocity.

– the direction of the vector gives the axis of orientation

– the magnitude gives the revolutions per unit time

Linear Velocity of a Rotating Point


Center of Mass

• If mass values are provided at some discrete points on the object (i.e. vertices), the total mass is the sum of the point masses and the center of mass is their mass-weighted average position.

Linear force: F = m · a


Rigid Body Dynamics & Controlling Groups of Objects

Bodies in Contact


– Both kinematic and dynamic components

Kinematic: Determining whether or not two objects collide. This depends only on the positions and orientations of the objects and how they change over time.

Dynamic: What happens after the collision: what forces are exchanged and how they affect the objects’ motions.

Collision handling

• Kinematic response

• Take actions after the collision occurs (the penalty method).

• Back up time to the first instant the collision occurs and determine the appropriate response.

Kinematic Response

• Particle-Plane collision



Particle’s position is computed at every time step (particle is moving at a constant speed)


When the plane equation evaluated at the particle’s position changes sign between ti-1 and ti, we understand that the particle has collided with the plane in that time interval.

When collision is detected, the component of the velocity vector in the normal direction is negated by subtracting it twice from the original velocity vector.

• To model the loss of energy during collision a damping factor 0<k<1 is multiplied with the normal component when it is subtracted the second time.
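The response rule described above – subtract the normal component of velocity once to cancel it, and a second time (scaled by the damping factor) to reverse it – is a few lines of code. A sketch assuming a unit-length plane normal:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def collision_response(v, n, k):
    """Negate the normal component of velocity v: subtract it once to cancel
    it, and a second time scaled by damping factor 0 < k <= 1 to reverse it.
    n is assumed to be a unit normal."""
    vn = dot(v, n)
    return tuple(vi - ni * vn - k * ni * vn for vi, ni in zip(v, n))

# A particle hitting the floor plane (normal (0, 1, 0)):
bounce_elastic = collision_response((1.0, -1.0, 0.0), (0.0, 1.0, 0.0), 1.0)
bounce_damped  = collision_response((1.0, -1.0, 0.0), (0.0, 1.0, 0.0), 0.5)
```

With k = 1 the normal component is perfectly reflected; with k = 0.5 the particle rebounds with only half its incoming normal speed, modeling energy loss, while the tangential component is untouched in both cases.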




The Penalty Method

• A point is penalized for penetrating another object


Penalty Spring

The spring produces an upward acceleration a = −kd/m, where d is the penetration depth, k the spring constant, and m the mass.

It is difficult to determine the mass and spring constants that will produce a realistic effect.

Testing Collisions Between Planar Polyhedra

• Bounding box tests

– If they indicate a collision then more elaborate tests may be performed to make sure that the collision exists

• Convex polyhedra – point test

– The point should be on the same side (inside) for all the polygons that make up the polyhedra

• Concave polyhedra – point test

– Even-odd rule may be used to count the intersections of a ray emanating from the point to an arbitrary direction with the polygon faces

• if it is odd → the point is inside the polyhedron

• if it is even → the point is outside the polyhedron

The point test may not be conclusive; the edges may need to be tested as well.


The normal that defines the plane of intersection is used to calculate the response

– point-face penetration

• The normal of the face is used

– Edge-edge intersection

• Cross product of the edges is used as the normal

Backing Up Time

• Can be computationally expensive when too many collisions occur

• Time is backed up to the point of impact, an impulse force is computed, and the time is moved forward again

• The exact collision time can be found by binary search by setting L=ti-1 and U=ti and searching between L and U, dividing the interval by half each time.

The point of collision can also be found by assuming a linear path, constant-velocity motion. This provides an approximation.
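The binary search described above can be sketched in a few lines; the falling-particle scenario below is an illustrative example (constant downward velocity, floor at y = 0):

```python
def find_collision_time(position, penetrates, t_lo, t_hi, iters=40):
    """Binary search for the moment of impact: t_lo is known collision-free,
    t_hi is known penetrating; halve the interval until it is tight enough."""
    for _ in range(iters):
        mid = 0.5 * (t_lo + t_hi)
        if penetrates(position(mid)):
            t_hi = mid          # impact happened before mid
        else:
            t_lo = mid          # still free at mid
    return 0.5 * (t_lo + t_hi)

# Particle falling at constant velocity from y = 1; the floor is y = 0.
t_hit = find_collision_time(lambda t: 1.0 - t,    # y(t)
                            lambda y: y < 0.0,    # inside the floor?
                            0.0, 2.0)
```

Each iteration halves the uncertainty, so 40 iterations pin the impact time down to about 2/2⁴⁰ of the original interval – far cheaper than shrinking the global time step.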


Computing the impulse force

The normal to the surface of contact, n, is determined. The relative positions of the contact points with respect to the center of masses are computed.


The relative velocity of the contact points in the direction of the normal, n, is computed



Enforcing constraints

• Enforcing constraints to physically based animations can be done by introducing additional forces in the system, such as:

– Springs

– Internal energy constraints (such as distance of vertices from the center of mass)

• Constraints can be hard or soft constraints.

Controlling group of objects

Particle systems – Flock behavior – Autonomous behavior


Particle Systems

Some common assumptions due to large number of particles

– Particles do not collide with other particles

– Particles do not cast shadows, except in an aggregate sense

– Particles only cast shadows on the rest of the environment, not on each other

– Particles do not reflect light, they are each modeled as light emitting objects

A frame of a particle system

• Any new particles that are born during this frame are generated, each new particle is assigned attributes.

• Any particles that have exceeded their allocated life span are terminated.

• The remaining particles are animated and their shading parameters are changed according to the controlling processes; the particles are then rendered.

Life of a particle :


Particle Generation

• A random number of particles around an average can be generated.

• Similar number of particles should be terminated to ensure constant number of particles at each frame.


Attributes of Particles

• Position

• Velocity

• Shape Parameters

• Color

• Transparency

• Lifetime (in number of frames)

Particle Animation

• Effect of forces modeled on the environment is computed as acceleration on the particle

– Gravity, wind, force fields, collisions with environment objects

• Acceleration is used to update the particle’s velocity

• Average velocity is used to update the position

• Shape can be a function of velocity

– An ellipse that elongates with respect to velocity
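The per-frame cycle described above – birth, death, then animation under environmental forces – can be sketched as follows (attribute names, the gravity-only force model, and the birth distribution are illustrative assumptions):

```python
import random

GRAVITY = (0.0, -9.8)
DT = 1.0 / 30.0                     # frame time

def make_particle():
    """A new particle with randomized initial attributes."""
    return {
        "pos": [0.0, 0.0],
        "vel": [random.uniform(-1, 1), random.uniform(2, 4)],
        "life": random.randint(20, 40),   # lifetime in frames
    }

def update_frame(particles, mean_births=10):
    # 1. Birth: generate a random number of particles around an average.
    births = max(0, int(random.gauss(mean_births, 2)))
    particles.extend(make_particle() for _ in range(births))
    # 2. Death: terminate particles past their allocated life span.
    particles[:] = [p for p in particles if p["life"] > 0]
    # 3. Animate: acceleration updates velocity; the average of the old and
    #    new velocity updates the position.
    for p in particles:
        old_vx, old_vy = p["vel"]
        p["vel"][0] += GRAVITY[0] * DT
        p["vel"][1] += GRAVITY[1] * DT
        p["pos"][0] += 0.5 * (old_vx + p["vel"][0]) * DT
        p["pos"][1] += 0.5 * (old_vy + p["vel"][1]) * DT
        p["life"] -= 1
    return particles
```

Because births and deaths are drawn around the same average, the population size stays roughly constant from frame to frame.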

Flock Behavior

• Local perception and behavior of bodies (i.e. flock members)

– Limited intelligence and simple physics

• Emergent behavior

– Flying in a diamond shape or V shape

– Splitting, merging

Two main forces to keep a collection of objects behaving like a flock

– Collision avoidance

• With the environment

• With other members

– Flock centering

• Can be achieved using localized control

Local control

• Computationally desirable

• More realistic

• Three processes modeled in local control

– Physics

– Perception

– Reasoning and reaction

• Negotiates among the various demands due to perception

• Collision avoidance

• Flock centering

• Velocity matching

Interacting with other flock members

• Attractive force

– To move with the flock

• Repulsive force

– A shorter-range force to avoid colliding with neighbors

Flock Leader

In real life leaders change periodically

– The wind resistance is strongest for the leader of a flying flock of birds.

• But in animation, it may be easier to have one designated leader whose motion is scripted along a path.

Negotiating the Motion

• Three low-level controllers

– Collision avoidance

– Velocity matching

– Flock centering

• A priority based weighting scheme can be used to combine the individual requests from the low level controllers.
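One way to realize such a priority-based scheme (an illustrative sketch, not the book's exact formulation): grant weighted requests in priority order and truncate once a maximum acceleration budget is reached, so high-priority collision avoidance can crowd out flock centering.

```python
import math

def combine(requests, max_accel):
    """Accumulate weighted (ax, ay) requests in priority order (highest
    first); once the combined magnitude reaches max_accel, truncate the
    result and ignore all lower-priority requests."""
    ax = ay = 0.0
    for weight, (rx, ry) in requests:
        ax += weight * rx
        ay += weight * ry
        mag = math.hypot(ax, ay)
        if mag >= max_accel:
            scale = max_accel / mag
            return (ax * scale, ay * scale)
    return (ax, ay)
```

If the collision-avoidance request alone saturates the budget, the centering request contributes nothing that frame; when accelerations are small, the result is simply the weighted sum.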


Collision avoidance :

Using a force field to direct flock members away from an obstacle.


Problems with Force Field Technique



Modeling Flight :


Rotation types of flight – Forces of flight

Lifting forces :


Important Points on Modeling Flight

• Turning is effected by horizontal lift

• Increasing pitch increases drag

• Increasing speed increases lift

Autonomous Behavior


Natural Phenomena

Modeling the plants & clouds.

Using fractals seems appropriate for modeling plants, because of the self-similarity observed at various levels.


• But how to define the fractal? What about the stochastic behavior?


• In addition to rendering static plants we should also think about the motion to animate:

– Motion due to environmental conditions such as wind

– Plant growth


L-Systems

• Introduced and developed in 1968 by the Hungarian theoretical biologist and botanist Aristid Lindenmayer

• A formal grammar consisting of a set of production rules.

• Famously used to model the growth processes of plant development.

• Can be used to generate self-similar fractals

• A procedural technique to model objects

D0L system

• A deterministic and context-free L-System

– Implies each non-terminal has a single grammar rule associated with it.

– And the left part of each grammar rule consists of a single non-terminal (i.e., no context information is used)

Geometric Interpretation of L-Systems

• Geometric Replacement:

– Replace each symbol with a geometric element.


Turtle Graphics :

• Replace each symbol with a drawing command.


• Given the initial state of the cursor and the linear and rotational step sizes, a string can be used to draw a shape.

• The state of the cursor at any point is given by its current position and heading.


Linearity problem :

• With only the four commands of draw / move / turn_left / turn_right one can only generate linear shapes.

• Bracketed L -systems are introduced to provide branching

• In the generation rules, a left branch indicates to push the current state on the stack and a right branch indicates a popping of the state from the stack and setting it as the current state.

• The stack structure provides unlimited branching

Bracketed L-Systems

Bracketed L-systems are introduced to provide branching.
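A minimal sketch of the whole pipeline: a D0L rewriting step, and a turtle interpreter where `F` draws forward, `+`/`-` turn, and `[`/`]` push and pop the cursor state for branching (symbols and step sizes are illustrative conventions):

```python
import math

def rewrite(axiom, rules, n):
    """Apply the production rules n times (deterministic, context-free: D0L)."""
    s = axiom
    for _ in range(n):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

def turtle_draw(commands, step=1.0, angle=math.radians(90)):
    """Interpret a string: F = draw forward, + = turn left, - = turn right,
    [ = push cursor state, ] = pop cursor state (branching)."""
    x, y, heading = 0.0, 0.0, 0.0
    stack = []
    points = [(x, y)]
    for ch in commands:
        if ch == "F":
            x += step * math.cos(heading)
            y += step * math.sin(heading)
            points.append((x, y))
        elif ch == "+":
            heading += angle
        elif ch == "-":
            heading -= angle
        elif ch == "[":
            stack.append((x, y, heading))
        elif ch == "]":
            x, y, heading = stack.pop()
    return points
```

For example, `rewrite("F", {"F": "FF"}, 3)` doubles the string three times (elongation), and interpreting `"F[+F]F"` draws a main stem with one side branch: after the `]` the cursor snaps back to where the branch started.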



Non deterministic  & Stochastic

• In a non-deterministic L-system, a non-terminal may have more than one production rule, and one of them is chosen each time the symbol is rewritten.

• In a stochastic L-system, a probability is associated with each production rule and governs that choice.

Animating Plant Growth

• Changes in topology during growth.

• Elongation of existing structures.

• Elongation of structures may be animated by using a small linear step size and rules such as F → FF

• Changes in topology are animated by the bracketing mechanism.

– However, we should not scan and render the final generated string left to right; the rendering should be done as we proceed.

Parametric L-Systems

• Symbols may have one or more parameters associated with them

– We can specify different linear, angular step sizes.




Realistic Modeling of Clouds


Visual Characteristics of Clouds

• Clouds have a volumetrically varying amorphous structure with detail at many different scales.

• Cloud formation often results from swirling, bubbling, and turbulent processes.

• Clouds have several illumination and shading characteristics.

Volumetric Cloud Modeling

• Two level hierarchy:

– Implicit volumes represent the global structure of the cloud (the cloud macrostructure)

• Modeled by implicit functions (such as spheres)

– Procedural methods to define turbulent, noise characteristics at a smaller scale (the cloud microstructure)

• Modeled by turbulent volume densities

Volumetric Cloud Modeling

• The macro and micro models are combined to define a volumetric density function (vdf) over a 3D volumetric space

• The densities of the implicit volumes can be combined by using a cubic blending function and a weighted sum


• To combine the densities from implicit primitives with the turbulence-based densities a user specified blend percentage can be used (60% to 80% gives good results).

Modeling and Animating Articulated Figures: Modeling the Arm, Walking, Facial Animation

Terms Related to Human Body Animation

Sagittal plane: Perpendicular to the ground and divides the body into right and left halves.

Coronal plane: Perpendicular to the ground and divides the body into front and back halves.

Transverse plane : Parallel to the ground and divides the body into top and bottom halves.

Distal : Away from the attachment of the limb.

Proximal: Toward the attachment of the limb.

Flexion: Movement of the joint that decreases the angle between two bones.

Extension: Movement of the joint that increases the angle between two bones.

Challenges in Human Modeling

• Human figure is a very familiar form

• Human form is very complex

– About 200 degrees of freedom

– Some of the parts are deformable

• Humanlike motion is not computationally well defined

– There is no one definitive motion that is humanlike

– Different characteristics for different people

Modeling the Arm: Reaching and Grasping

• To simplify the modeling process, it is usually assumed that the arm operates independently from the other body parts.

– not realistic

– to provide realism, one can add additional joints in a preprocessing step to position the body and make it ready for the independently considered arm motion.

Basic Arm Model



Sometimes the forearm is modeled differently, because in reality forearm rotation is not associated with a localized joint.

– The two forearm bones rotate around each other.

– We can associate this rotation with the elbow or the wrist, or sometimes a virtual joint in the middle is used to handle forearm rotation


• In reality each joint has specific limits

– Example: the elbow can flex until the angle between the bones is about 20 degrees and extend until it is about 160 degrees

• Some of these limits depend on the situation

– Example: It is difficult to fully extend the knee when one is bending at the hip.

Inverse Kinematics

• The Jacobian technique can be used and the solution can be biased towards desired joint angles.

• To produce more humanlike motion the Jacobian can be replaced by a procedural approach

– The joints farther away from the hand have more effect on the position of the hand

– The joints closer to the hand are used to perform fine orientation changes
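To make the Jacobian idea concrete, here is a toy planar two-joint arm driven by a Jacobian-transpose step. The unequal per-joint gains are an illustrative stand-in for the procedural weighting described above; the link lengths, gains, and target are all made up for the example.

```python
import math

L1, L2 = 1.0, 1.0                       # link lengths (assumed)

def end_effector(t1, t2):
    """Forward kinematics of a planar 2-link arm."""
    x = L1*math.cos(t1) + L2*math.cos(t1 + t2)
    y = L1*math.sin(t1) + L2*math.sin(t1 + t2)
    return x, y

def ik_step(t1, t2, target, gains=(0.2, 0.1)):
    """One Jacobian-transpose update; the shoulder (far) joint gets the larger gain."""
    x, y = end_effector(t1, t2)
    ex, ey = target[0] - x, target[1] - y
    # Jacobian of (x, y) with respect to (t1, t2)
    j11 = -L1*math.sin(t1) - L2*math.sin(t1 + t2)
    j12 = -L2*math.sin(t1 + t2)
    j21 =  L1*math.cos(t1) + L2*math.cos(t1 + t2)
    j22 =  L2*math.cos(t1 + t2)
    # delta_theta = gain * J^T * error
    t1 += gains[0]*(j11*ex + j21*ey)
    t2 += gains[1]*(j12*ex + j22*ey)
    return t1, t2

t1, t2 = 0.3, 0.3
target = (1.2, 0.9)                     # reachable: distance 1.5 < L1 + L2
for _ in range(500):
    t1, t2 = ik_step(t1, t2, target)

x, y = end_effector(t1, t2)
print(round(x, 3), round(y, 3))         # should approach the target (1.2, 0.9)
```

The Jacobian-transpose step is gradient descent on the squared positional error, so small gains trade convergence speed for stability.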


Alternatively, the user can specify the plane defined by the shoulder, elbow, and wrist.


The shoulder can be modeled as a ball joint or as three separate 1-DOF joints.


Reaching around obstacles

• The volume of space swept by the limb should not intersect with the obstacles in a scene

• Several path planning algorithms have been developed


We can use a gradient-based or genetic-algorithm approach to move the end effector toward the goal.


• Strength may be incorporated into the motion planning of the arm

• For underconstrained problems (i.e., problems with many solutions), the solution space can be searched for the configuration that places the least strain on the figure.

– Strain can be computed from the torque at each joint.

– Comfort is defined as the ratio of the currently requested torque to the maximum possible torque.
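A tiny illustration of that comfort metric; the joint names and torque values are invented for the example.

```python
# Comfort as the ratio of requested torque to the maximum possible torque
# per joint; all names and numbers here are assumptions for illustration.
max_torque = {"shoulder": 60.0, "elbow": 40.0, "wrist": 10.0}   # N*m, assumed
requested  = {"shoulder": 15.0, "elbow": 30.0, "wrist":  2.0}

comfort = {j: requested[j] / max_torque[j] for j in max_torque}
least_comfortable = max(comfort, key=comfort.get)   # joint closest to its limit
print(least_comfortable)   # -> elbow (ratio 0.75)
```

A planner searching the solution space would prefer configurations whose worst per-joint ratio is smallest.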


Walking

• Walking is a cyclic motion

– Acyclic components

• Turning, tripping

• Responsible for transportation of the body and maintaining balance

• Dynamics is more important in walking

• Walking is dynamically stable, but it is not statically stable

– E.g., if a body freezes in the middle of a walk, it may fall to the ground

• Experimentally gathered data or a set of adjustable control parameters are used.

– Example parameters:

• Stride length

• Hip rotation

• Foot placement

• State transition diagrams are used to specify the walking process

• Kinematics can be used for the general walking motion and forces may be computed to determine the motion of the upper body
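The state-transition idea can be sketched as a simple four-state cycle. The state names and their order are illustrative; a real controller attaches kinematic or dynamic procedures to each state.

```python
# Illustrative walk-cycle state machine; real systems attach motion
# controllers to each state rather than just naming them.
TRANSITIONS = {
    "left_stance":  "right_swing",
    "right_swing":  "right_stance",
    "right_stance": "left_swing",
    "left_swing":   "left_stance",
}

def walk(start, steps):
    """Follow the transition diagram for a number of steps."""
    state, trace = start, [start]
    for _ in range(steps):
        state = TRANSITIONS[state]
        trace.append(state)
    return trace

print(walk("left_stance", 4))   # one full cycle returns to left_stance
```

Acyclic components such as turning or tripping would be modeled as extra transitions out of this basic cycle.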

Walk Cycle:


Running Cycle:


Pelvic Transport, Rotation, and Tilt



Knee Flexion

• Bending at the knee joint prevents the leg from penetrating the floor during pelvic tilt.

• It also helps to absorb shock


Ankle and Toe Joints

• Ankle and toe joints help flatten out the rotation of the pelvis above the foot as well as to absorb some shock.

Specifying a new walk

• The animator specifies the kinematic values for pelvic movement, foot placement, and foot trajectories. The rest is determined by inverse kinematics.


• Dynamic models may help produce more realistic motion.

– However, the animator loses control over some parameters

• Simplifications

– Some dynamics effects are ignored, such as the effect of the swing leg on balance

– Forces are considered constant over some time interval

– The leg model is simplified (to a small number of DOFs)

– Several components (e.g., horizontal and vertical) are computed separately and combined.

Facial Animation

• Face is a deformable object.

• Lip-synching

– Animation of the movement of the lips, the muscle deformation of the tongue, the articulation of the jaw, and the deformation of the surrounding face during speech

• Cartoon animation

• Realistic character animation


Facial Models

• Acquisition of the geometry of the head

• Acquisition of the motion

– How does the geometry change

Face geometry

– Polygon models

– Splines

– Subdivision surfaces


Parameterized Models

• Conformational parameters

– 25 parameters in Parke’s model

– Symmetry between the sides of the face is assumed

– 5 parameters to control the shape of the forehead, cheekbone, cheek hollow, chin, and neck

– 13 scale distances between facial features

– 5 parameters to translate chin, nose, and eyebrow

• Expressive parameters.

Animating the Face

• Simplest approach is to define a set of key poses

– Animation is produced by interpolating between the positions of corresponding vertices in two key poses

– Disadvantage: parts of the facial model are not individually controllable by the animator
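The key-pose interpolation above reduces to blending corresponding vertices; a minimal sketch follows (the two-vertex "face" is obviously a toy stand-in for a real facial mesh).

```python
def lerp_pose(pose_a, pose_b, t):
    """Linearly interpolate corresponding vertices of two key poses (t in [0, 1])."""
    return [tuple(a + t*(b - a) for a, b in zip(va, vb))
            for va, vb in zip(pose_a, pose_b)]

# Toy two-vertex "face": a neutral pose and a smile pose.
neutral = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
smile   = [(0.0, 0.2, 0.0), (1.0, 0.4, 0.0)]

print(lerp_pose(neutral, smile, 0.5))   # halfway between the two key poses
```

This whole-pose blending is exactly why individual parts of the face are not independently controllable: every vertex moves according to the same global parameter t.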

• What are the primitive motions of the face?

• How many degrees of freedom are there in the face?

Facial Action Coding System (FACS)

• 46 basic facial movements, called Action Units (AUs) are defined, and used in combination to describe all facial expressions.

– Examples:

• Lower brow, raise inner brow, wink, raise cheek, drop jaw.

• Disadvantages:

– It is descriptive, not generative

– It is not time based

– Facial movements are analyzed only relative to a neutral pose

– FACS describes facial expressions, not speech

Muscle Models

• Three types of muscles to model

– Linear

• Contracts and pulls one point (point of insertion) toward another (point of attachment)

– Sheet

• Parallel array of muscles. Attached to a line instead of a single point

– Sphincter

• Contracts radially toward an imaginary center.

• Three aspects differentiate one muscle-based model from another

– the geometry of the muscle-skin arrangement

• Are they modeled on the surface or attached to a structural layer beneath the skin

– the skin model used

– the muscle model used

• The deformation of other points may attenuate based on the distance from the point of insertion and angle of deviation from the displacement vector
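One way to sketch such a linear-muscle displacement with distance and angle attenuation. The falloff shapes, influence radius, and angle cutoff are all assumptions of this example, not a specific published model.

```python
import math

def muscle_displace(point, insertion, attachment, contraction,
                    radius=1.0, max_angle=math.pi/2):
    """Displace a 2D skin point toward the muscle's attachment, attenuated by
    distance from the insertion point and by angular deviation from the pull
    direction. All falloff shapes here are illustrative assumptions."""
    pull = (attachment[0] - insertion[0], attachment[1] - insertion[1])
    off = (point[0] - insertion[0], point[1] - insertion[1])
    dist = math.hypot(*off)
    if dist > radius:
        return point                            # outside the zone of influence
    if dist > 0.0:
        cos_a = (pull[0]*off[0] + pull[1]*off[1]) / (math.hypot(*pull) * dist)
        ang = math.acos(max(-1.0, min(1.0, cos_a)))
    else:
        ang = 0.0                               # the insertion point itself
    if ang > max_angle:
        return point                            # too far off the pull direction
    falloff = (1.0 - dist/radius) * (1.0 - ang/max_angle)
    plen = math.hypot(*pull)
    return (point[0] + contraction*falloff*pull[0]/plen,
            point[1] + contraction*falloff*pull[1]/plen)

# The insertion point itself moves by the full contraction toward the attachment:
print(muscle_displace((0.0, 0.0), (0.0, 0.0), (1.0, 0.0), 0.2))   # (0.2, 0.0)
```

Points farther from the insertion, or farther off the pull axis, receive a proportionally smaller displacement, which is the attenuation behavior described above.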

Animating Cloth & Motion Capture

Simple Draping

• Draping occurs as a cloth is hung from a fixed number of support points.

• The cloth is represented as a two dimensional grid of points located in 3D.

– Certain grid points are fixed

• The convex hull of the fixed points determines where the draping will occur.

• Two phases:

– The draped surface is approximated with the convex hull of the constrained points.

– Iterative relaxation process where other grid points are displaced.

• Process continues until the maximum displacement is below a threshold.

• Vertices on the grid are labeled as interior or exterior, depending on whether they are inside the convex hull.

• The grid points along the line between two constrained points are determined.
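The relaxation phase can be illustrated in one dimension: unconstrained points repeatedly move toward the average of their neighbors while the constrained points stay fixed, stopping when the largest displacement falls below a threshold. The neighbor-averaging rule is a simplification of the real cloth constraints.

```python
def relax(heights, fixed, threshold=1e-4):
    """Iteratively displace unconstrained points toward the average of their
    neighbors until the maximum displacement drops below the threshold."""
    h = list(heights)
    while True:
        max_disp = 0.0
        for i in range(1, len(h) - 1):
            if i in fixed:
                continue
            new = 0.5*(h[i-1] + h[i+1])     # simplified relaxation rule
            max_disp = max(max_disp, abs(new - h[i]))
            h[i] = new
        if max_disp < threshold:
            return h

# A 1D "cloth strip" with its two endpoints constrained at different heights:
row = relax([0.0, 0.0, 0.0, 0.0, 1.0], fixed={0, 4})
print([round(v, 2) for v in row])   # interior settles onto the line between the ends
```

A real draping solver applies the same idea over a 2D grid, starting from the convex-hull approximation of the constrained points.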


Motion Capture Systems

• The recording of raw motion data for later use.

• Several different systems on the market

• camera / optical / infrared

• gyroscopes / accelerometers

• magnetic / fiber optic

Standard pipeline


Part of the Process

1. Calibrate Cameras

2. Put Markers on Subject

3. Calibrate Subject

4. Check Quality of Calibration

5. Record Motion

6. Cleanup Point Cloud

7. Label Markers in Point Cloud

8. Cleanup Resulting Data

9. Export Data

10. Import Data into Package of Choice


Usage area


• Motion Analysis & Research

• Games

• Films & Animated Shorts


• Human Factor Studies

• Performance Arts

• Virtual Reality Simulations

• Education

• etc.


References

METU Ceng732 lesson slides

Rick Parent, Computer Animation: Algorithms and Techniques, and the accompanying book slides

Oxford English Dictionary

Anderson, Joseph, and Barbara Fisher. "The Myth of Persistence of Vision." Journal of the University Film Association XXX:4 (Fall 1978): 3-8.

Anderson, Joseph, and Barbara Anderson. "The Myth of Persistence of Vision Revisited." Journal of Film and Video 45 (1993): 3-12.

Raster Graphics Handbook. Conrac Corporation, 1980.

Cutting, J.E. Perception with an Eye for Motion. MIT Press, Cambridge, 1986.



The Illusion of Life: Disney Animation (Amazon.com)

SMPTE.org


Quaternions from Wolfram MathWorld

Exponential Map from Wolfram MathWorld

Spline Curves and Surfaces

Game AI Resources: Pathfinding

Warping and Morphing of Graphical Objects by J. Gomes, L. Darsa, and L. Velho, Morgan-Kaufmann Publishers

Chris Welman’s Master’s thesis, Inverse Kinematics and Geometric Constraints for Articulated Figure Manipulation – PDF

Paul Nylander’s Physics

Robot Dynamics: Equations and Algorithms by Roy Featherstone and David Orin – PDF

Spacetime Constraints by Andrew Witkin and Michael Kass – PDF
