How Pixar created the ultrarealistic animated film, The Blue Umbrella

Written and directed by Saschka Unseld, The Blue Umbrella is a Pixar short that tells the story of a blue umbrella that meets a red umbrella on the street. When their respective owners part ways, however, the blue umbrella desperately tries to get back to the red umbrella. While it battles weather and traffic, other street objects, including a personified gutter, mailbox, and building pipe vent, try to help reunite the two. At the end of the short, the blue umbrella, though battered and dirty, finds the red umbrella again, and it’s happily ever after for the pair.

The level of photorealism in this short is unprecedented. On first viewing, it seems as though the video is a blend of live action and animation. The facial expressions on the various street objects, however, give away the secret that it is in fact computer-generated animation using photorealistic shading, lighting, and compositing. Like many of Pixar’s shorts, The Blue Umbrella was used to test one of the studio’s new technologies, in this case a new global illumination system. Because each beam of light in every scene is modeled mathematically, everything that is animated, including the rain, looks remarkably real.
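
The article does not describe Pixar’s lighting code, but the core idea behind global illumination can be sketched as a Monte Carlo estimate of the light arriving at a surface point from every direction, bounced as well as direct. The scene query below (`trace_radiance`) is a hypothetical stand-in, and the whole snippet is an illustrative sketch rather than Pixar’s renderer.

```python
import math, random

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def sample_hemisphere_cosine(normal):
    """Draw a random direction above `normal`, weighted by the cosine of the angle."""
    u1, u2 = random.random(), random.random()
    r, phi = math.sqrt(u1), 2.0 * math.pi * u2
    x, y, z = r * math.cos(phi), r * math.sin(phi), math.sqrt(max(0.0, 1.0 - u1))
    # Build an orthonormal basis around the normal and rotate the sample into it.
    helper = (1.0, 0.0, 0.0) if abs(normal[0]) < 0.9 else (0.0, 1.0, 0.0)
    t = normalize(cross(helper, normal))
    b = cross(normal, t)
    return tuple(x * t[i] + y * b[i] + z * normal[i] for i in range(3))

def indirect_light(point, normal, trace_radiance, samples=64):
    """Monte Carlo estimate of the bounced light arriving at `point`.

    `trace_radiance(origin, direction)` is a hypothetical scene query that follows
    a ray into the scene and returns the light coming back along it.
    """
    total = 0.0
    for _ in range(samples):
        total += trace_radiance(point, sample_hemisphere_cosine(normal))
    return total / samples  # the cosine factor is folded into the sampling itself

# Toy usage: a point on the ground lit only by a uniform "sky" of brightness 1.
sky = lambda origin, direction: 1.0 if direction[1] > 0.0 else 0.0
print(indirect_light((0.0, 0.0, 0.0), (0.0, 1.0, 0.0), sky))   # ~1.0
```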

While photorealism was not the original goal of the piece, the idea took shape during production. As it solidified, the team had to become increasingly conscious of what would make things look real and what would break the illusion. Decisions such as never showing human faces and setting the story at night, shrouding everything in darkness, were made deliberately to preserve the effect. Another technique was a shallow depth of field, in which only a narrow band of distances is in sharp focus while everything nearer or farther falls softly out of focus. The shallow depth of field aided the quest for photorealism, and it also helped set a lyrical mood and artistic atmosphere for the film.

Camera movement was especially important to the film’s cinematography and to achieving the goal of realism. Because computer-generated films are made entirely in software, movements, especially camera movements, tend to be unnaturally smooth. In real life, camera movements can be jerky and angles tend to change more often. The vantage points in computer-generated films can also be physically implausible; placing a camera in a tiny nook and then panning it across a scene is not feasible in real life. Taking these observations to heart, Unseld decided to take a documentary approach to the short. By splicing together clips only a second or so long and shooting only from places where a person could reasonably stand, such as across the street, Unseld created an animation that not only looks realistic but also feels like a real film.

Takahashi, Dean. “How Pixar Created the Ultrarealistic Animated Film, The Blue Umbrella (Interview).” VentureBeat. Web. Nov. 2013.

Simulating Rapunzel’s Hair in Disney’s Tangled

As seen previously in the analysis of Brave, simulating computer-generated hair can be tricky business. While Merida, our heroine from Brave, had her own set of issues with wild, red, curly hair, Rapunzel from Tangled, a film produced by Walt Disney Animation Studios, posed a new set of problems with her 70 feet of long blonde hair. Though Rapunzel’s hair did not have to exhibit the bounce and curl of Merida’s, it did have to look voluminous and sleek, and its sheer length and mass made its behavior difficult to predict and calculate correctly.

[Figure: a simple hair spring particle system]

To combat these issues, Disney developed its own proprietary software, dynamicWires, which uses a mass-spring system for curve dynamics. In the most basic representation, a single strand of hair is modeled as a chain of particles connected by springs. The particles are minute and close together, so that when rendered the chain looks like a strand of hair, while the springs provide the flexibility and the connections between the particles. In a simple system there is a single spring between each pair of neighboring particles; adding other springs gives more control over the hair. In Tangled, for behavior such as hair piling on top of itself and on other objects and characters, spring forces were generated as segments of hair collided. This was necessary to keep the hair looking voluminous and to provide the frictional forces of hair strands moving relative to one another.
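
dynamicWires itself is proprietary, but the basic mass-spring strand described above can be sketched in a few lines. The particle count, stiffness, damping, and time step below are illustrative guesses, not Disney’s settings.

```python
import numpy as np

N, REST_LEN = 20, 0.05               # particles per strand and rest length between them (m)
STIFFNESS, DT = 400.0, 1.0 / 240.0   # spring constant and time step (illustrative values)
GRAVITY = np.array([0.0, -9.8, 0.0])

pos = np.array([[0.0, -i * REST_LEN, 0.0] for i in range(N)])  # strand hangs straight down
vel = np.zeros_like(pos)

def step(pos, vel):
    """Advance the strand one time step with simple explicit integration."""
    force = np.tile(GRAVITY, (N, 1))              # gravity pulls on every particle
    for i in range(N - 1):                        # one spring between each neighboring pair
        d = pos[i + 1] - pos[i]
        length = np.linalg.norm(d)
        f = STIFFNESS * (length - REST_LEN) * (d / length)
        force[i] += f                             # a stretched spring pulls the pair together
        force[i + 1] -= f
    vel = (vel + force * DT) * 0.995              # integrate and apply crude damping
    vel[0] = 0.0                                  # the root particle is pinned to the scalp
    return pos + vel * DT, vel

for _ in range(1000):                             # let the strand settle under gravity
    pos, vel = step(pos, vel)
```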

For the story to look convincing, Rapunzel had to appear to drag her hair behind her effortlessly. That required the hair to glide smoothly along with her movements and to stop when she stops. In reality, that amount of hair would take enormous physical effort to move, so Disney could not simply apply realistic physics. Instead, a small tangential friction parameter was added for ground contacts, letting the hair slide along behind her as a mass without falling apart and spreading outward. To make her hair stop when she stopped, a high static friction for ground contacts was added as well.
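
The source does not spell out the exact friction model, but the two behaviors described, easy sliding plus a hard stop, can be approximated per ground-contact particle along these lines; the coefficient and the sticking threshold here are hypothetical.

```python
import numpy as np

KINETIC_FRICTION = 0.05   # small tangential friction: hair slides easily when dragged
STICK_SPEED = 0.02        # below this tangential speed, high static friction takes over

def apply_ground_friction(vel, normal=np.array([0.0, 1.0, 0.0])):
    """Damp the tangential velocity of a particle that is touching the ground."""
    v_normal = np.dot(vel, normal) * normal
    v_tangent = vel - v_normal
    speed = np.linalg.norm(v_tangent)
    if speed < STICK_SPEED:
        v_tangent[:] = 0.0                     # static friction: the strand sticks in place
    else:
        v_tangent *= (1.0 - KINETIC_FRICTION)  # kinetic friction: mild tangential damping
    return v_normal + v_tangent
```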

Throughout the movie, Rapunzel’s hair seems to have a life of its own as Rapunzel uses it to accomplish various tasks. Achieving that level of control while maintaining a natural look took some fiddling on Disney’s part. By placing loose springs between strands of hair, the animators kept the mass of hair from flying apart in every direction; if two strands moved too far from each other, however, those springs would break, giving the hair some freedom in its behavior.
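
A breakable inter-strand spring of the kind described can be sketched as an ordinary weak spring that is simply discarded once it stretches past a threshold; the constants here are again illustrative.

```python
import numpy as np

class InterStrandSpring:
    """A loose spring between particles on two different strands; it breaks if overstretched."""

    def __init__(self, rest_length, stiffness=5.0, break_ratio=2.0):
        self.rest_length = rest_length
        self.stiffness = stiffness                     # deliberately weak ("loose") coupling
        self.break_length = rest_length * break_ratio  # stretch past this and the spring lets go
        self.broken = False

    def force_on_a(self, p_a, p_b):
        """Spring force acting on particle a; the opposite force acts on particle b."""
        if self.broken:
            return np.zeros(3)
        d = p_b - p_a
        length = np.linalg.norm(d)
        if length > self.break_length:                 # strands drifted too far apart
            self.broken = True
            return np.zeros(3)
        if length < 1e-9:
            return np.zeros(3)
        return self.stiffness * (length - self.rest_length) * (d / length)
```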

As can be imagined, the sheer amount of hair involved could take an impractically long time to render in full. To keep the film on schedule, the hair was simulated as curves instead of the usual spring-particle system; curves take far less time to render than a comparable group of particles.

Overall, using these techniques, Disney was able to bring all 70 feet of Rapunzel’s hair to the screen.

Ward, Kelly. “Simulating Rapunzel’s Hair in Disney’s Tangled.” Walt Disney Animation Studios. Web. Nov. 2013.

Methods for Artistic Stylization in 3D Animation

In his thesis, Schmid explores new methods and tools that give artists greater flexibility over the visual style of their computer animations. He studies stroke-based rendering and develops two ways in which brush stroke rendering, specifically the brush stamping technique commonly used in image editors such as Photoshop, can be applied to 3D digital renderings. His first method trades time and space for high-quality images, while his second is tuned to hardware speed and capability to achieve high interactive rendering performance. Both methods solve the problem of the depth order and drawing order of paint strokes. The thesis develops not only how to generate the 3D canvas that holds the models, but also how to apply standard rigging tools and traditional 3D rendering processes (surface shaders, ray tracing, and so on).

Schmid defines a brush model in which each stroke is represented by a geometric curve that acts, in essence, as the stroke’s centerline. From this skeleton curve, a procedural shader or a ribbon texture recreates the appearance of a brush stroke. In 2D, the standard representation of a brush stroke is brush stamping, in which a single brush texture is repeatedly blended along the curve.
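
As a rough illustration of 2D brush stamping (not Schmid’s implementation), a small soft stamp can be alpha-blended at closely spaced points along the centerline curve. The stamp shape, spacing, and opacity below are arbitrary choices; note that spacing the stamps too far apart produces exactly the gaps discussed next.

```python
import numpy as np

def make_stamp(radius):
    """A soft circular brush stamp: alpha falls off from the center to the edge."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    dist = np.sqrt(x * x + y * y) / radius
    return np.clip(1.0 - dist, 0.0, 1.0)

def stamp_along_curve(canvas, points, radius=8, spacing=2.0, opacity=0.5):
    """Blend the stamp at evenly spaced samples along a polyline of (x, y) points."""
    stamp = make_stamp(radius) * opacity
    points = np.asarray(points, dtype=float)
    for a, b in zip(points[:-1], points[1:]):
        seg_len = np.linalg.norm(b - a)
        n = max(int(seg_len / spacing), 1)        # denser spacing avoids visible gaps
        for t in np.linspace(0.0, 1.0, n, endpoint=False):
            cx, cy = (a + t * (b - a)).astype(int)
            y0, x0 = cy - radius, cx - radius
            y1, x1 = cy + radius + 1, cx + radius + 1
            if y0 < 0 or x0 < 0 or y1 > canvas.shape[0] or x1 > canvas.shape[1]:
                continue                          # skip stamps that fall off the canvas
            region = canvas[y0:y1, x0:x1]
            canvas[y0:y1, x0:x1] = region + stamp * (1.0 - region)   # "over" blend

canvas = np.zeros((200, 300))
stamp_along_curve(canvas, [(30, 50), (150, 120), (270, 60)])
```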

The technique is harder to carry into 3D, however, because brush stamping does not capture how the shape and length of paint strokes change over time. Brush stamping also leads to aliasing problems, with separated brush stamps breaking the flow of the stroke, which makes rendering a 3D brush stroke difficult. To combat these issues, Schmid derives an algorithm that produces successful 3D stroke representations. Transparency also plays a large role when rendering brush stroke textures.

Beyond brush strokes there is canvas texturing. Canvas texture, akin to the texture of paper, enhances elements such as brush stamps placed on the canvas, and it is fixed with respect to the origin. Because the “splats”, the textured quads rendered on the screen, are not fixed, brush strokes moving in front of the textured canvas create a “shower door” effect, in which the strokes seem to be viewed through a distortion layer, like water droplets on a shower door. Even so, canvas texture enhances the stylistic quality of rendered brush strokes.

Schmid builds a 3D painting software system called OverCoat. Using a 3D canvas and his brush stroke techniques, paint strokes are embedded directly in space. Artists can create modeled proxy objects that define the overall layout of the scene; geometric detailing is not required, since the strokes are what get embedded and the proxy layout itself is never rendered. OverCoat is based on mathematical optimization algorithms that I will not delve into, as they are fairly complex and difficult to explain. In addition to brush stroke rendering and canvas texturing, Schmid includes various other tools that aid with animation and motion blur. The software is demonstrated with beautifully rendered 3D images that still retain stylistic brush strokes.

Schmid hopes to continue building on his work. While OverCoat successfully creates 3D paintings, it still requires a great deal of manual work from the artist to specify lighting and texture information. In the future, he wants to develop the software so that stylized 3D paintings can support view-dependent shape changes and respond to dynamic lighting.

Schmid, Johannes. Methods of Artistic Stylization in 3D Animation. Thesis. ETH Zurich, 2012. Web.

2D Animation in the Digital Era

Though relatively short, the history of animation up to now is surprisingly complex. When the first animations were created in the early 1900s, animators had to hand draw 24 frames, each an entire scene, to produce a single second of animation. To avoid redrawing entire scenes, cel technology was introduced. Cels, clear acetate sheets, carried line drawings that were either drawn or photocopied onto them, with artists filling in color as needed. Individual frames still had to be drawn, but not the entirety of every scene: by overlaying different cels, animators could draw a background or object once and reuse it for an entire animated sequence, redrawing only the objects that moved. Examples of such animations include Disney’s Snow White and Sleeping Beauty; Disney’s last cel-animated work was The Little Mermaid.

As technology progressed, however, cel animation came to be seen as tedious, labor-intensive, and difficult. With the development of computer-based painting and art tools, animators began to make a shift. In the 1990s, Disney moved to the CAPS system: as with cel animation, individual scenes and objects still had to be hand drawn, but the line art was then brought into image-editing software and digitally painted on the computer. Today, 2D animators increasingly work with tablets and powerful image-editing software such as Photoshop to produce their animations, and some studios have gone a step further, moving toward computer-generated imagery (CGI) to speed up the production of their 2D animations.

Japan, home to one of the largest 2D animation industries in the world, has been a major contributor to this history through its production of anime. Anime is one of the most widely watched forms of modern animation and is a direct successor of cel animation, whose look persists to this day. In an interview, Makoto Shinkai, best known as the director of the OVA (Original Video Animation) Voices of a Distant Star, explains how technology has, and has not, been used in anime production. He makes the point that anime is not rushing toward more involved technologies such as 3D; instead, it lingers in the aesthetic of cel animation in order to preserve the tradition and feel of hand-drawn 2D animation.

Shinkai himself is a practitioner of digital techniques in anime. His famous piece Voices of a Distant Star was a 25-minute short with impressive imagery, created in just seven months using only a Power Mac G4. The release of this OVA spurred other traditional anime producers to move toward digital techniques. His later productions likewise show effects that can only be created with digital tools, such as lens flares, vividly colored skies, and dramatic lighting. Shinkai adds that digital tools allow his animations to have more sophisticated lighting and color schemes; for example, global color illumination can be reflected in shadows and refractions. For his newer films he has been using his Wacom tablet, Adobe Photoshop, and After Effects.

To further speed up production and cut costs, 2D studios have turned to CGI. Computer-generated imagery was generally used for cars, machines, and other rigid bodies that undergo only simple rotations, scalings, or translations. Early on, there was little technique for integrating 2D with 3D, as the 3D models lacked the level of detail present in 2D drawings. Shinkai, who also uses 3DCG in his work, notes that for the 3DCG to blend in seamlessly, careful texture mapping and cel shading must be applied. Even so, the “too perfect” movements of the CG animation tend to give away what is 3D and what is 2D.
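
Cel shading, mentioned above, essentially quantizes a surface’s diffuse lighting into a few flat bands so that CG elements take on the flat, hand-painted look of cels. A minimal sketch, with a made-up light direction and band count, might look like this:

```python
import numpy as np

def cel_shade(normal, light_dir, base_color, bands=3):
    """Quantize the Lambertian diffuse term into a few flat bands (toon look)."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    diffuse = max(np.dot(n, l), 0.0)                 # standard N·L diffuse term
    level = np.floor(diffuse * bands) / (bands - 1)  # snap shading to discrete steps
    level = min(level, 1.0)
    return np.asarray(base_color) * (0.2 + 0.8 * level)  # keep a little ambient light

# Example: a surface facing partway toward the light, shaded in 3 flat bands.
color = cel_shade(np.array([0.0, 1.0, 0.0]), np.array([1.0, 1.0, 0.0]), (0.9, 0.3, 0.2))
```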

Even with improved 2D techniques that make production faster and cheaper, 2D animation struggles to keep up with 3D. Furthermore, Shinkai notes that anime studio productions are bound by Adobe and whatever features it chooses to include in its software. In the end, however, Shinkai hopes to capture his audience not with visuals but with a compelling story and concept.

Fenlon, Wesley. “2D Animation in the Digital Era: Interview with Japanese Director Makoto Shinkai.” Tested. 20 Sept. 2012. Web. 10 Oct. 2013.

2D Spatial Design Principles Applied to 3D Animation

In her Master of Fine Arts thesis at The Ohio State University, Laura Beth Albright develops a “toolbox” of 2D and 3D spatial design principles for 3D animation. She builds the kit by examining traditional principles of graphic design and cinematography, then puts it to use in two films: the first combines 3D models with 2D drawings, and the second uses 2D line art to generate 3D animation.

Albright splits her toolset into two categories: direct content tools and indirect composition results. Direct content tools include, but are not limited to, color and texture; contrast and differentiation; repetition, similarity, and pattern; and lens angle. These are the elements the filmmaker can edit directly when designing a visual image, and the edits affect the resulting composition. The second category, indirect composition results, covers figure/ground relationship, staging and silhouette, visual hierarchy, and the like. It deals with broader spatial design principles that are directly affected by the use of the first category, and it is concerned with evaluating rather than editing a composition.

With the defined toolkit, Albright analyzes previously published films that combine 3D computer-animated images with hand-drawn 2D images. The range of works examined, including Lemony Snicket’s A Series of Unfortunate Events and Flatworld, displays the different techniques and approaches in spatial design used to create illusory depth and flatness. Albright takes from these studies the two ends of the spectrum, deep 3D space versus compressed space, and applies them to her own animated film productions.

Goldilocks, Redirected is the first of the films Albright directed and made as part of her thesis. Its aim is compressed space, achieved by overlaying 2D elements over 3D models. Although the base animation is done in Maya, a 3D package, the models appear flat and look like paper cutouts, and applied textures add to the 2D feel. The most important technique, however, is overlapping: the 3D models were built with extra depth in certain areas to darken shadows, exaggerating their flat, cutout quality.

The second film, A Litter of Perfectly Healthy Puppies Raised on Fried Pancakes, pursues the opposite goal: the illusion of deep 3D space. Working from the line drawings in Thurber’s Dogs, 3D space is implied with contour lines; the drawings suggest depth through linear perspective and the overlapping of objects at different distances from the camera. Albright takes the 2D line drawings and builds a 3D model that mimics them, applies solid cartoon lines to outline the model, and then animates and renders it. During rendering, however, all interior detail, including shading, reflections, and the like, is removed so that the image appears strictly two-dimensional.

With the defined toolkit, Albright shows that a variety of effects can be achieved by combining 2D and 3D animation; her two films sit at opposite ends of the spectrum of what the toolkit can do. In the future, the toolkit might also be applied to the making of stereoscopic 3D films. Albright points toward alternate possibilities and a new range of what animation may look like as technology continues to develop.

Albright, Laura Beth. 2D Spatial Design Principles Applied To 3D Animation: A Proposed Toolset For Filmmakers. Thesis. The Ohio State University, 2009. Web.

Paperman: A 2D and 3D Wonder

Paperman, recent winner of the Academy Award for Best Animated Short Film, is a seven-minute romantic comedy produced by Walt Disney Animation Studios and directed by John Kahrs. Beyond its touching yet exciting storyline, the short is famous for combining 2D with 3D animation: Disney pays homage to its past of classical cel animation, in which each frame is hand drawn, while simultaneously looking toward the future of 3D computer-generated graphics. According to his interview, Kahrs wanted a look that was realistic and believable but still held the magic and style of traditional Disney animation, which he achieved by overlaying 2D brush strokes on a rendered 3D CG layer.

Because the 2D animation had to move with the CG, Disney had to create a new animation process to produce the short. First, the base 3D models and graphics were created; the process began as any 3D animated film would, with modeling, rigging, and animation. Disney then generated motion fields from the computer graphics so that the 2D drawings could be mapped onto the 3D motion. Creating the motion fields required per-frame, per-element renders in which every pixel of the image carries a 2D offset, and it is ultimately this field that lets the 2D linework flow along with the underlying animation.
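
Disney’s actual pipeline lives in its in-house tools, but the idea of a per-pixel 2D offset field can be sketched simply: given a hand-drawn layer for one frame and an offset field derived from the CG render, each inked pixel is pushed to its new position to produce a candidate for the next frame. The array shapes and names here are illustrative, not Disney’s.

```python
import numpy as np

def advect_drawing(drawing, offsets):
    """Warp a hand-drawn layer forward using a per-pixel 2D offset field.

    drawing: (H, W) grayscale layer of hand-drawn linework for frame t
    offsets: (H, W, 2) motion field giving each pixel's (dx, dy) from frame t to t+1
    """
    h, w = drawing.shape
    warped = np.zeros_like(drawing)
    ys, xs = np.nonzero(drawing)                 # only move pixels that carry ink
    for y, x in zip(ys, xs):
        dx, dy = offsets[y, x]
        nx, ny = int(round(x + dx)), int(round(y + dy))
        if 0 <= nx < w and 0 <= ny < h:
            warped[ny, nx] = max(warped[ny, nx], drawing[y, x])
    return warped

# Toy example: a single ink pixel pushed one pixel to the right.
frame = np.zeros((4, 4)); frame[1, 1] = 1.0
field = np.zeros((4, 4, 2)); field[1, 1] = (1.0, 0.0)
next_frame = advect_drawing(frame, field)        # the ink ends up at (row 1, col 2)
```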

To preserve the traditional style of Disney drawings, silhouette ribbons had to be created for the characters. Characters were divided into “topologically-cylindrical components by offsetting the silhouettes perpendicular to the camera direction.” This adds the light feel that 2D animation has when characters move, with the offset providing a generous buffer for character movement. Furthermore, an applied paper texture gave the image more depth. Then, using the in-house software Meander, line artists drew 2D drawings over the CG renders; motion pasting then generated the non-keyframed drawings, filling in all of the intermediate frames between the key frames drawn by the artists.

The stages shown in the article’s stills:

1. 2D sketch
2. 3D compositional layout
3. CG animation
4. Hand-drawn key poses
5. Hand-painted lighting key
6. Hand-drawn hair animation layer
7. Completed hand-drawn layer
8. Final composite

This new animation process is likely only the first of many 2D/3D hybrids to come. What made Paperman stand out was its reference to the tradition of hand-drawn 2D animation, which was laborious and difficult to produce; by combining that vintage style with new technology, it can be preserved while still delivering the qualities of a 3D-rendered animation.

Wong, Raymond. “How Disney used 3D CG to create the 2D world of ‘Paperman’.” DVICE. 25 Feb. 2013. Web. 23 Sept. 2013.

Discrete B-splines and subdivision techniques in computer-aided geometric design and computer graphics

The ability to draw curves in computer graphics is essential to geometric modeling, as all surfaces depend on their base curves. To draw freeform surfaces, both developable and undevelopable, curve geometry must be represented accurately to avoid incorrect surface interpolations. The two most commonly used curve types are Bézier curves and B-spline curves; both are parametric and used to model smooth surfaces, and a Bézier curve is in fact a special case of a B-spline, with B-splines offering more control and flexibility. Popular as they are, computing surfaces from splines poses problems such as finding spline surface intersections and accurately rendering line drawings or smooth, shaded spline surface models.
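
For concreteness, a B-spline curve can be evaluated directly from its control points and knot vector with the Cox–de Boor recursion; this is a textbook sketch, not the paper’s algorithm, and the example curve below is made up.

```python
import numpy as np

def basis(i, k, t, knots):
    """Cox-de Boor recursion: value of the i-th B-spline basis function of order k at t."""
    if k == 1:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    d1 = knots[i + k - 1] - knots[i]
    if d1 > 0:
        left = (t - knots[i]) / d1 * basis(i, k - 1, t, knots)
    d2 = knots[i + k] - knots[i + 1]
    if d2 > 0:
        right = (knots[i + k] - t) / d2 * basis(i + 1, k - 1, t, knots)
    return left + right

def bspline_point(t, control, knots, k=4):
    """Evaluate a B-spline curve of order k (degree k-1) at parameter t."""
    control = np.asarray(control, dtype=float)
    return sum(basis(i, k, t, knots) * control[i] for i in range(len(control)))

# A cubic (order 4) B-spline with 5 control points needs len(knots) = 5 + 4 = 9.
pts = [(0, 0), (1, 2), (2, 3), (4, 2), (5, 0)]
knots = [0, 0, 0, 0, 1, 2, 2, 2, 2]
curve = [bspline_point(t, pts, knots, k=4) for t in np.linspace(0, 2, 50, endpoint=False)]
```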

Cohen, Lyche, and Riesenfeld explore the theory behind discrete B-splines in order to understand recursive subdivision algorithms, which are used for building subdivided surfaces, and to derive new algorithms for non-uniform B-splines. Discrete B-splines are defined by the original vertices of a B-spline, the original knot vector (the parameter values at which the polynomial pieces of the curve join), and a new, refined knot vector. From two component algorithms the authors derive the Oslo Algorithm: the first computes all non-zero discrete B-splines of order less than k for a piecewise spline of order k, and the second computes a single discrete spline as a linear combination of those pieces. The Oslo Algorithm, an iterative and recursive procedure, makes subdivision of a B-spline curve straightforward. It has many uses, in computer graphics and beyond: creating new pseudo-knots in order to calculate new control polygons, determining parameter values of spline intersection points, rendering B-spline surfaces, generating refraction and shadow algorithms for B-spline surfaces, and generating non-tensor-product surfaces.
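
The Oslo Algorithm inserts many knots at once; its simplest special case, inserting a single knot (often credited to Boehm), already shows how refinement yields a new control polygon describing the same curve. The sketch below is that special case, run on illustrative data, rather than the paper’s formulation.

```python
import numpy as np

def insert_knot(control, knots, u, degree):
    """Insert a single knot u into a B-spline (single-knot refinement).

    Returns a new control polygon and knot vector describing the same curve.
    """
    control = np.asarray(control, dtype=float)
    # Find the knot span s with knots[s] <= u < knots[s + 1].
    s = max(i for i in range(len(knots) - 1) if knots[i] <= u)
    new_knots = list(knots[: s + 1]) + [u] + list(knots[s + 1:])

    new_control = []
    for i in range(len(control) + 1):
        if i <= s - degree:
            new_control.append(control[i])          # unaffected points before the span
        elif i > s:
            new_control.append(control[i - 1])      # unaffected points after the span
        else:
            a = (u - knots[i]) / (knots[i + degree] - knots[i])
            new_control.append((1.0 - a) * control[i - 1] + a * control[i])
    return np.array(new_control), new_knots

# Cubic example reusing the curve above: refine it by inserting u = 1.5.
pts = [(0, 0), (1, 2), (2, 3), (4, 2), (5, 0)]
knots = [0, 0, 0, 0, 1, 2, 2, 2, 2]
refined_pts, refined_knots = insert_knot(pts, knots, 1.5, degree=3)
```

Repeating this insertion for each new knot is one way to refine a curve; the Oslo Algorithm performs that kind of refinement in bulk over a whole new knot vector.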

The Oslo Algorithm is thus a tool for non-uniform subdivision of B-splines. The advantage of non-uniform subdivision is that it extends the reach of basic subdivision techniques and allows more flexibility in situations where subdivision is the more convenient solution to a problem.

 http://content.lib.utah.edu/cdm/ref/collection/uspace/id/2641

Cohen, Elaine, Tom Lyche, and Richard Riesenfeld. “Discrete B-Splines and Subdivision Techniques in Computer-Aided Geometric Design and Computer Graphics”. Web. 19 Sept. 2013.