Week 19: Animation, Rendering and Editing

The last shot I need to complete for this week is shot 9, which confronts me with more technical issues than before. As this will be the most ‘skilful’ performance I will be animating, I had many thoughts and considerations about how to display it.

Shot 9

A key part of this shot again was the limbs detaching in the air and re-attaching upon landing. I also had to plan exactly what dance inspiration I wanted for the rest of the shot, as it had to display the most technical skill narratively. Looking further into ballet dance, I thought that a key associative dance move was the pirouette.

Watching the reference video below, I tried to recreate this with my character; however, I came across several difficulties. The main issue was getting the timing of the spin correct in a way that coincided with my stylised animation method. The second issue was that it did not allow for dynamic staging, and I struggled to plan the motion around the ‘trapdoor’. This led me to consider disciplines similar to ballet that demonstrate associations with flexibility and skill, and I began researching gymnastics.

Initial ‘Spin’ Animation

In order to get this look of flexibility, I decided that for staging and comedic purposes it would be good to make the character do the splits over the trapdoor and fall in, allowing time for anticipation and a comedic initial defiance of gravity (much like in Looney Tunes). Looking at the image below, I noticed that gymnasts fall into this position with very graceful and well-controlled steps, so when translating this into my animation I tried to exaggerate the finger-pointing and chin lift to demonstrate that he is proud of his technical ability. However, to keep the comedic element of the piece, I also tried to include the earlier ‘Saturday Night Fever’ reference.

The Splits

Below showcases my finalised animation for this shot, in which I really tried to push for frame-rate experimentation: I animated on 1s to accentuate character fluidity, which is why this shot is my most ‘polished’ piece of animation. I used the same technique at the end to disassemble his body as I had previously in shot 8; however, I had a more preconceived understanding of how it would work, which I believe made the process a lot faster and smoother, even though I can still technically pinpoint several issues with the animation itself.

Credit Song

Since I really wish to emulate an animated television show with my film, I decided to compose an end-credit song to coincide with the opening music. Since the film is over, I wanted to add a calmer, toned-down piece, which is why the number of instruments is stripped back. I used the same key and musical theme that has been present throughout the entire film, but slowed the tempo and added a glockenspiel to give it a nursery-rhyme association that is reminiscent of children’s bedtime stories, bidding the viewer farewell.

Editing

During the editing stages, I wanted to add extra text that references the fine print at the bottom of present-day film trailers, but parody it by adding text such as ‘the skeleton men present’, almost making fun of the format due to the obvious lack of studio presence in my work.

Initial Edit

Below is my first edit of the film (excluding the title card and end credits), which indicated several areas of improvement going forward. First, I need to find a way to apply the black-and-white film filter options previously explored to the whole feature, while retaining as much of the render quality and contrast as possible. I intend to do this by upping the temperature and tint, as this will bring forward the colours and contrast, making them more apparent with the black-and-white filter. Secondly, I need to complete the start and end credits and add them alongside the film itself. I intend to make two copies of the film, one in black and white and one in colour, so that the render quality can be retained in one while the visual, technologically contextual setting can be played with in the other.

Colour Editing

Week 8: 2.5D Animation Design

Added AI Toon Shader

One of the key parts of the visual success of this shot will be to blend the 2D assets with the 3D ones effectively, so the result is visually convincing and sustains its aesthetic credibility. In order to create a line-drawn effect within 3D space, I researched Arnold toon shaders, which add this exact effect while retaining the 3D element of specular texture. Looking into this, I found several tutorials that helped me understand that the Arnold render filter needs to be changed to ‘contour’, and that line elements can be changed and adapted within the attribute settings, similarly to aiStandardSurface materials.
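
As a note-to-self, the contour set-up above can also be scripted. The sketch below is only a minimal outline, assuming the MtoA plug-in is loaded: the shader, shading-group and attribute names follow my understanding of Arnold’s Maya naming, and the model to assign it to is whatever is currently selected.

import maya.cmds as cmds

# Switch the Arnold output filter to 'contour' so toon edge lines actually render
cmds.setAttr("defaultArnoldFilter.aiTranslator", "contour", type="string")

# Create an aiToon shader and assign it to the currently selected model
toon = cmds.shadingNode("aiToon", asShader=True, name="characterToon")
sg = cmds.sets(renderable=True, noSurfaceShader=True, empty=True, name="characterToonSG")
cmds.connectAttr(toon + ".outColor", sg + ".surfaceShader")
cmds.sets(cmds.ls(selection=True), edit=True, forceElement=sg)

# Line settings sit on the shader, much like aiStandardSurface attributes
cmds.setAttr(toon + ".edgeColor", 0, 0, 0, type="double3")
cmds.setAttr(toon + ".edgeWidthScale", 2.0)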

Applying the above tutorial to my 3D character model, the line effect works well for the stylisation of my shot, as it blends the character into an aesthetic world in which key elements are outlined. This is why I did not apply the toon shader to background elements such as clouds, as I really wanted to visually accentuate the foreground and key elements to draw the viewer’s eye.

Ai Toon Shader Render Test

I also added this to modelled elements of the midground and foreground for the same effect. However, where I believe this visual look falls short is in the drastically different line quality between my exported TIFF drawings and the 3D models. Going forward with this project, I think I will model all of my drawings in 3D space but apply the toon shader to them, to get a cleaner, higher-quality effect than I was able to reach during this premise project’s experimental period. I also think that, going forward, I should apply artistic lighting and colour theory to the rendering: objects in the front should be more saturated than objects in the background, as the current uniformity ruins the sense of depth.

Ai Toon Shader Test

Completing the modelling process, and due to the specific bit of dialogue that describes a long car journey, I thought I would directly visually convey the speech within this shot, creating a more ‘observational’ documentary mode, but place it in a wildly unconventional world to separate its aesthetic tendencies from the mundane. To maintain this stylistic direction, I modelled the car assets in a very flat, almost ‘cardboard cut-out’ style, keeping inspiration from the shadow-puppetry style of Lotte Reiniger researched earlier in the project. The feel of objects being made from plastic (accentuated object specularity) combined with the ‘cardboard’ 2D stylisation adds a hyper-superficial aspect to the world that once again stresses a visual landscape forcefully pushing itself into fictional realms while tackling non-fictional subjects.

Modelling a 2.5 D Car

Animated Facial Performance

In order to be time-efficient during the animation process, and also to exemplify stylistic independence and reference to computer-generated imagery, I decided to test using the Audio node within Maya’s MASH editor to convey speech. This takes the place of creating and animating a mouth rig on the character for the time being, and allows me to explore the visual identity of my potential film.

MASH Node Editor

Applying the MASH network to the mesh created a computer-generated audio-processing effect which makes my character appear robotic. I think this adds a visual, performative element that forces viewers to acknowledge that the character is fake, in contrast to the real voice behind it, which will aid my thesis survey research into the performative re-creation of subjectivity.
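
For my own reference, this is roughly how the set-up can be built in Python. It is only a sketch: the MASH network name and audio file path are placeholders, and the attribute that points the Audio node at the recording is assumed rather than checked.

import maya.cmds as cmds
import MASH.api as mapi

# With the head/jaw mesh selected, build a MASH network on it
mashNetwork = mapi.Network()
mashNetwork.createNetwork(name="Speech_MASH")

# Add the Audio node so the dialogue track drives the mesh
audioNode = mashNetwork.addNode("MASH_Audio")

# Point it at the interview recording (attribute name assumed, path is a placeholder)
cmds.setAttr(audioNode.name + ".filename", "sound/interview_01.wav", type="string")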

Reference for Eye and Eyebrow Movement

Using the real-life footage of my interviewee, I tried to follow and adapt the expressivity of the eyebrows and eye movements, and apply this to my 3D model adaptation of the concept art below. Due to time constraints, this animation came out very minimal and not especially accurate, as some of the performance was lost in stylisation. However, this has allowed me space for stylistic considerations going forward with this visual exploration, and has led me to decide on making more detailed facial rigs for future models that are more reminiscent of the original footage of the interviewees. This will potentially mean not using the MASH editor in future, but instead creating different ‘square’ mouth shapes operated with blend shapes to create the illusion of movement. I am also considering applying 2D drawn facial elements to 3D models as a stylistic choice in future, to accentuate the ‘paper-drawn’ visuals.

Character Concept Art
Adapted Animated Facial Performance

After completing the facial animation, I decided that in order to engage audiences with the visual world-building, I wanted to keep the speech as one continuous shot with no cuts, which also provided the challenge of aesthetic consistency. Going into next week, I will try to merge and assess these aspects to make the aesthetic design work together.

Week 18: Shot Animation

For this week, I have updated my animation and rendering progress on the shot list below, which indicates that my work this week will be focused on shot 8. Due to the technical aspects of the rig connecting and disconnecting within this shot, I have allowed myself the week to experiment and gauge the best method for achieving the finalised look.

Shot 8

Since my primary dance influence derives from ballet, I initially started out researching and watching videos of ballet dancers on YouTube, to see how they point and transition between different poses. Because of my more ‘staccato’ animation stylisation, I aimed to take key poses from these videos and translate them in a way that does not require excessive keyframes to make kinetic sense.

Ballet Reference

This was particularly challenging for jumps, as the timing and poses had to be very specific to appear convincing on landing while maintaining the poise and elegance of ballet dancers. However, in order to convey weight better within the jumps, I animated more frames specifically leading into and out of them, working as a ‘slow in’ and ‘slow out’ within the limited animation.

Ballet Reference

The most challenging part of the shot proved to be the last section, where Skeleton B’s body falls to pieces into the hole. Due to the way I rigged the character, I could only detach certain limbs such as the arms, forearms and hands, as the IK placement made it very difficult to make the legs detachable. Because of this, I eventually decided that it would be much more effective to place an unrigged model into the scene and swap them at the frame the skeleton starts to disassemble. In order to make this look convincing, I had to ensure that the model was in exactly the same position as the rig.


The next challenge was making the different bones of the skeleton fall convincingly. In my initial attempts the timing seemed very off, as I was animating on 2s and the sense of gravity seemed inconsistent with that of the rest of the scene. To improve on this, I began to animate the falling sections on 1s, as this created a fluency and sense of weight that did not draw attention to itself.

In order to get a rough idea of how these bones would fall, I took a handful of pens and pencils and filmed myself dropping them in slow motion. The main thing I noticed was how they rotate slightly, but all in the same direction. The motion of the longer limbs such as the arms and legs will be very similar, which made the application of this to my animation a lot clearer.
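
To sanity-check the spacing, I can also fall back on the basic free-fall formula, d = ½gt². The quick sketch below is just a guide, assuming 24 fps and real-world gravity, and prints how far an object should have dropped at each frame so the gaps between keys widen as the bones pick up speed.

# Rough spacing guide for the falling bones (assumes 24 fps and standard gravity)
fps = 24.0
g = 9.81  # metres per second squared

for frame in range(1, 13):
    t = frame / fps           # seconds since the bone was released
    d = 0.5 * g * t * t       # distance fallen: d = 1/2 * g * t^2
    print(f"frame {frame:2d}: fallen {d:.3f} m")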

During the fall, however, I wanted to make sure that the head was one of the last things to fall out of frame, to stay focused on the character and his expression.

Film Reel Editing

To add further authenticity to the period setting, I wanted to create an introductory film-reel roll as an artistic reference to old film projectors. To make this as convincing as possible, I went online and found the sound of a film projector warming up, and laid this under edited effects in DaVinci Resolve to create a film-reel look. Once edited into black and white with noise grain added, I feel this may look convincing and immediately sets the tone for the film’s aesthetic reference.


Next week’s goals:

Going forward with this technique I will utilise and reference what I have learned from this week’s animation and create a faster and more effective piece in the following week.

  • In the next week, I wish to finish shot 9, therefore completing the animation of the film.
  • I need to set up and finish at least half of the film renders.

Week 17: Scene Animation

Following last week’s shot list creation, I have indicated all of the shots I plan to work on this week. The shots I will be focusing on are shots 3, 4 and 10, as it will be important to finish and plan both the introductory and final scenes of the film so I can get a key understanding of staging, placement and lighting.

Shot List

Style

As in my previous test animations, I feel that using a style reminiscent of pixilation will help add stylised and comedic value to my work and suits the overall theme of my film. It will also help me set attainable goals and reduce the amount of time spent in the animation process, making the minute-long film more achievable. A big stylistic inspiration for this is Neighbours, as it uses elements of stop-motion animation in a different context to add a blunt humour that live-action cannot inherently produce with the same effect. As stated in Norman McLaren: Between the Frames, Neighbours led to the creation of “his pioneering stop-motion live-action technique”, which is called ‘pixilation’ (Yang, 2020, p. 167).

I think the ‘staccato’ style works really well in creating a particular visual identity for the piece that does not blend into the world of live-action, drawing attention to itself as a performance due to its inherent abandonment of real-world physics. I wish to create a similar effect that dramatises and differentiates movement from what is typically expected of CGI animation. I also think a key part of this is the control McLaren has over the timing and its relationship with the music, and I feel this will be effective in the creation of my own work going forward.

In the video Experimental Film Artist: Norman McLaren (1970), McLaren discusses how, when working on Neighbours, he created the sound by drawing animated frames of soundwaves that produce different pitches, controlled per frame and matched exactly with the film (1970). I think this accentuates that the use of sound with this animated technique is imperative for stylistic endurance.

Another recent piece of animation I noticed that takes a similar approach is the animated documentary Flee, which largely uses more limited animation despite its more detailed, drawn 2D appearance. There are fewer animated frames, creating pauses in movement and facial expressions that minimise motion without sacrificing emotional conveyance.

Scene Set-Ups

An important part of the 3D animation pipeline is setting up all of the scenes so that the staging, placement and render settings are consistent between each scene, so my initial starting point was creating a base Maya file for every shot on the shot list. Going forward, I am aware that this process can cause breaks and inconsistencies where a scene’s animation relies on the previous file’s last position, so I will ensure that each shot has its own camera and that files are saved from the previous shot and keyed at its last position.

File Set up

The key element required for every scene is a central, locked camera with the staging measurements in the centre, so that everything stays consistently aligned. The main prop or asset required is the ‘trapdoor’, or black hole in the floor. The placement of this ‘door’ in shots 3 and 10 will be important going forward, as they are the start and end frames of movement and points of reference.
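
To avoid nudging this camera by accident between files, a small script can create and lock it once and then be reused in every base scene. The sketch below is only illustrative; the camera name and framing values are placeholders rather than my actual settings.

import maya.cmds as cmds

# Create the shared 'stage' camera and lock its transform so every shot file
# keeps exactly the same framing
cam, cam_shape = cmds.camera(name="stageCam")
cmds.setAttr(cam + ".translate", 0.0, 1.5, 12.0, type="double3")

for attr in ("tx", "ty", "tz", "rx", "ry", "rz"):
    cmds.setAttr(cam + "." + attr, lock=True)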

Render Set Up

Across all of these scene files, I also made sure the render settings were the same and placed the different assets onto separate render layers that can be combined and manipulated at the post-production compositing stage.
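
Because this has to be repeated across every scene file, it could also be driven through Maya’s Render Setup Python API. The sketch below is a minimal outline of that idea; the layer, collection and wildcard pattern names are my own placeholders and would need matching to the actual naming in my scenes.

import maya.app.renderSetup.model.renderSetup as renderSetup

rs = renderSetup.instance()

# Character pass: collect everything named like the skeleton meshes
char_layer = rs.createRenderLayer("CHARACTERS")
char_col = char_layer.createCollection("charCollection")
char_col.getSelector().setPattern("skeleton_*")

# Environment pass: stage geometry on its own layer for compositing
env_layer = rs.createRenderLayer("ENVIRONMENT")
env_col = env_layer.createCollection("envCollection")
env_col.getSelector().setPattern("stage_*")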

Render Layers

Shots 3 and 5

An idea I have conceptualised is using my nCloth simulation tests to create a curtain-opening sequence that adds a further ‘theatrical’ aspect to my project, makes more direct reference to the theatrical setting and helps me apply and develop my simulation abilities. Having the curtains drawn open towards the camera introduces the scene in a way that forces its performative status on viewers immediately, and slightly breaks the fourth wall by using a curtain to introduce a stage.

Curtain Simulation Testing

Due to the nature of the scene, a key aspect I needed to take into account was animating the area lights that act as spotlights alongside the character. This initially proved difficult, as even in the IPR render viewport it took a lot of time to understand where the light was at each point. In order to match the timing of the animation as closely as possible, I keyframed the movement of the lights in a stepped stylisation, on the same beats as the skeletal figure, to try to ensure it is always illuminated.
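
In practice this amounts to keying the light’s position on the same frames as the character poses and forcing stepped tangents so it snaps between marks rather than drifting. The snippet below is just a sketch of that idea; the light name, frames and values are placeholders.

import maya.cmds as cmds

light = "stageSpot_light"                  # placeholder name for the animated light
beats = [(1, -4.0), (13, 0.0), (25, 4.0)]  # (frame, translateX) matching the dance beats

for frame, x in beats:
    cmds.setKeyframe(light, attribute="translateX", time=frame, value=x)

# Stepped tangents so the light holds, then snaps, in time with the figure
cmds.keyTangent(light, attribute="translateX", inTangentType="stepnext", outTangentType="step")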

Animating the Lights

Below shows a playblast of the finished shot. My main aim with this shot was to understand and adapt how the skeleton’s rig will fall apart and come together under the application of gravity. Going forward, I intend to always be aware of the character’s ‘jumps’, as a certain section of the limbs will fall apart and come back together. I wish to do this to pay homage to The Skeleton Dance, and also to the historic comedic effect of reanimated skeletons that do not have the muscle to hold them together. I also experimented with varying the animated keyframes when I wish to draw the audience’s attention to a certain character. When a character is animated on 1s and 2s, there is normally a conveyance of weight needed to sell the shot’s plausibility and draw viewers to the movement; this is why, when Skeleton B is dancing in the shot below, Skeleton A is almost stagnant. I think this reflects McLaren’s work on Neighbours, as a similar variation of frame rate is used for different actions.

Shot 4

Shot 4 deals with the only extreme close-ups in my film, and therefore requires the most emotional expression. Because the main characterising difference between my characters is their eyebrow-driven emotional conveyance, I wanted to use this time to stress the difference between the two personalities: Skeleton A is shy and concerned about the upcoming ‘battle’, while Skeleton B is determined and ready to ‘fight’.

Skeleton A

Skeleton B

An editing style I wish to use for this section derives from old western movies, in which the camera uses extreme close-ups to capture characters’ facial expressions when they are about to duel. An example of this is the movie The Quick and the Dead (Raimi, 1995). A key point of reference here is the 180-degree rule, which underlines the importance of staging and direction, as the position of the skeletons on stage needs to be dictated by the direction they face. I want to accentuate this effect by creating a split screen that exits the frame on the corresponding side of each character. I could potentially use this opportunity to create an “elliptical cut”, which is a “culturally conditioned film convention” that allows a large jump in action between shots while still working with screen continuity (Brown, 2011, p. 77).

Shot 10

This is the last shot of the film and contains the largest camera move in the whole piece. Because of this, I had to ensure that the underground environment was fully modelled. An early issue was working out how to model and build environmental depth without making the scene too mesh-heavy and causing my computer and the rendering process to crash.

What worked for this process was taking the different ‘piles’ of bodies and duplicating them within a specific viewport so that they give the appearance of being far away from each other, expanding the environment and accentuating the scale of failed ‘dance battle’ attempts.

Underground Environment

Below is the playblasted, unrendered shot, which I will begin rendering tonight; this will give me key information about lighting, render time and camera movement, as I am considering adding post-production motion blur to my piece.

Next week’s focus will be predominantly on Shot 8, as it involves a lot more intensive movement and animation than these previous shots.

Reference

Experimental Film Artist: Norman McLaren. (1970). McLaren, N. and Sloan, W. Footscray, Victoria, Australia: Contemporary Arts Media.

Brown, B. (2011). Cinematography: Theory and Practice: Image-making for Cinematographers and Directors. Oxford: Focal Press.

Yang, D. (2020). Norman McLaren: Between the Frames. Canadian Journal of Film Studies, 29(2), pp. 165-168. DOI: 10.3138/cjfs-2020-0027

Week 7: Environment Development

3D Style Environment Research

For my 3D-style piece, I wanted to look at environments that accentuate character performance with harsher, more stripped-back and focused lighting that brings out the nuances of the character’s facial performance. Considering this, I thought of the contextual application of real-life interviews to help ground this documentary in its more realistic and serious nature. My initial environmental idea was a simple, minimally modelled room reminiscent of a police interrogation room, where nothing draws the viewer’s attention away from the subject that is speaking.


In a similar light, I considered the use of an office which includes each individual’s nameplate and objects that seem relevant and symbolic of them as people, or of what they are saying, to give them a further sense of personality without it having to be spoken.


3D Environmental Concepts

Expanding on this design idea, I started making some simple models of a harsh metallic table and chair, without creating a surrounding environment, so it is easier to focus in on the character performance.

Rough Table and Chair Model

The main element I wanted to focus on in this environmental piece was the overall lighting. My plan was to make it intensely focused and bright on the character’s face, with a slight downcast to make the sharper points of the face prominent. In Aesthetic 3D Lighting: History, Theory and Application, Lanier states how “moods are often triggered by particular lighting scenarios” and gives the example that, in the stylised lighting of horror movies, lights can be “placed at unusual locations so that the lights create exaggerated highlights and shadows” (Lanier, 2018, p. 4). As this is similar to the overall tone I wished to create within my render, I started my testing with the most important light in the scene: the overhead light.

Very Rough First Lighting Render Test

As is clearly visible in the picture above, I was able to create an effective-looking mesh light with enough dimness in the bulb to create a gloomier, overcast look. However, I still needed to populate the scene with additional lights to add clarity to the overall setting and massively reduce the noise.

Initial Test Render

Creating a three-point lighting set-up, I added a main frontal area light to reduce noise and light the front of the scene, a light on the character’s left to help deepen the shadows cast over their facial features, and a backlight to form a slight rim light around the silhouette of the character.
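
A rough scripted version of this three-point set-up is sketched below, mostly as a reminder of the layout. It assumes the MtoA plug-in is loaded, and the light names, positions, rotations and exposures are placeholder guesses to be tuned in the IPR view rather than my final values.

import maya.cmds as cmds

def make_area_light(name, position, rotation, exposure):
    # Build an Arnold area light under its own transform and register it as a scene light
    xform = cmds.createNode("transform", name=name)
    shape = cmds.createNode("aiAreaLight", name=name + "Shape", parent=xform)
    cmds.sets(shape, add="defaultLightSet")
    cmds.setAttr(shape + ".exposure", exposure)
    cmds.xform(xform, worldSpace=True, translation=position, rotation=rotation)
    return xform

make_area_light("frontKey",   (0, 2.5, 4),  (-10, 0, 0),  8)  # frontal light to lift noise
make_area_light("leftShadow", (-3, 2, 1),   (0, -70, 0),  6)  # character's left, carves shadows
make_area_light("backRim",    (0, 2.5, -4), (-170, 0, 0), 7)  # backlight for the rim silhouette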

Lighting Set-Up

As stated by Lanier, it is important to consider the reflection, transmission and absorption of light (2018). He explains that when the wavelength of light interacts with a material surface, the light is either “absorbed and the light energy is converted to heat” (which seems to happen on darker surfaces), or it can be absorbed and re-emitted, creating reflection (Lanier, 2018, p. 7). Since the hue of a material tends to be lighter on a reflective surface, I created a backplane with a standard grey tint, so that aspects of the area lights would reflect and light up the back of the chair, creating an object separate from the background and foreground.

Mood Lighting (Background Added for Light Reflections)

When adding my character performance into this set-up, I really wanted to focus on raising the texture’s specular value so that the character would reflect on the surface of the table, further exemplifying the idea of coldness and solitude.

Scene test with Added Character performance

Below is the finalised look I was able to create. I wanted to keep the character’s specular level high to give them a ‘glossy’, toy-like appearance that separates them from reality in a way that purposely feels performative and intentional. I feel this adequately portrays, for the premise project, the mood and overall feel I am aiming to convey visually. However, if I do decide to push this visual further, there will be many more things to take into consideration, such as more detailed texturing of both assets and character, and more extreme lighting set-ups (perhaps with the inclusion of colour theory) to convey further emotional expression.

Strong highlights and contrast


References

Lanier, L. (2018). Aesthetic 3D Lighting: History, Theory and Application. New York, New York: Routledge.


Week 6: 2.5 D Animation Test Development

Stylistic Inspiration

My initial inspiration for motion and stylistic interpretation derives from 2D ‘puppet-like’ animations such as South Park and the works of Lotte Reiniger. For example, below is the piece Cinderella by Reiniger (1922); her works rely strongly on the flat silhouette to imply motion. An idea I wish to expand upon beyond my premise project is to create puppet-like movements within 3D space, essentially creating a 2D rig within Maya.

Below highlights the set-up for such a 2D rig, in which all of the different manipulable limbs are separated into layers. These are exported individually into Photoshop, where alpha channels are created. After assembling these in Maya, the rotation pivots are moved so the motion aesthetically resembles that of a puppet.

Layered Limbs for Rigging

Environment

A big inspiration for the concept of the more cartoony, 2D-reminiscent background is work from the series Adventure Time and The Midnight Gospel, which capitalise on rounder, softer and more colourful shapes to accentuate their otherworldliness and fictional nature. I think the contrast between non-fictional audio and the unrelated, crazy, magical world presented in work such as The Midnight Gospel creates a very interesting individual use of the animated documentary, which is something I would like to test and consider for my own project. By confronting a viewer with something so crazy and unrelated to the real world, the animated documentary’s performative nature becomes so evident that it does not try to be fictional, and instead embodies the relationship between the real world and the experimental, and how they can combine to address something of importance. I feel I can use this contrast to create an ironic distance between people’s time spent locked inside, their address on the mundane, and the bright and colourful world they wish or intended to live in during this time.

Still From Adventure Time

Still From The Midnight Gospel

Environment Drawing

Similarly to the earlier-mentioned character, the environment will need to be created in layers so that a foreground, midground and background can be extracted. This is essential because, during movement, different parts of the environment move at different rates due to distance and the way the eye reads depth. For this to look believable I will need to create a ‘parallax’, in which the foreground, midground and background do not move as a single piece of geometry but as separate layers, creating the illusion of distance.
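
A quick way to reason about how much each layer should appear to move is the pinhole relationship: the apparent shift on screen is roughly proportional to the camera move divided by the layer’s distance. The sketch below is only a back-of-the-envelope check with made-up focal length and depths, not values from my scene.

# Rough parallax check: distant layers should appear to shift far less than near ones
focal = 35.0        # placeholder focal length
camera_pan = 10.0   # units the camera travels sideways

layers = {"foreground": 5.0, "midground": 20.0, "background": 60.0}  # distance from camera

for name, depth in layers.items():
    apparent_shift = camera_pan * (focal / depth)  # shift falls off with distance
    print(f"{name:<10} at depth {depth:>4.0f}: apparent shift {apparent_shift:.1f}")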

Layer separation and art

An important consideration when creating the illusion of distance is colour saturation and shadow, as the foreground will be the most focused and saturated, with the opacity and shade cascading off into the distance.

Foreground
Midground
Background

Placing the character into the environment, I want to make it clear to audiences that this environment matches this interviewee’s character by stressing square shapes that match his face, with the whole world engulfed in tones of blue.


3D Environment Integration

Placing these into Maya allowed me to experiment with layering and distance as well as camera angles and movements. For this process, I integrated temporary placeholder 3D objects to allow for early composition considerations.

Layered 2D planes in Maya

I also discovered early on that my drawings were not long enough to cover the distance needed, since I decided to create a shot that pans to the right. With this in mind, I duplicated and extended the road to allow space for a car to travel down it, since a car journey is directly referenced in one of my interview audios.

Early Camera Tests
Extended Environment

Below is the initial playblast test, which helped me understand how the camera and environment parallax will work within 3D space and allows a rough composition set-up for environmental modelling.

Initial Camera Test

Below are some added 3D assets, such as cloud and tree models, to help start integrating the 3D assets into the ‘2D’ scene. This process allowed me to understand compositionally that some 3D objects will need to be moved to allow for further depth of field, and it highlights the consideration, down the line, of using more 3D models than 2D planes to build a more convincing animated world.

Issues and Solutions

While in theory I understood the idea that PNG files do not render within Arnold’s renderer, I initially struggled to get other file types such as TIFFs and JPGs to work cohesively with alpha channels in Maya. My initial problem was that I somehow exported the alpha channel inverted, which highlighted several gaps in my understanding of exporting Photoshop alpha channels.

Alpha Channel Reversed

Starting fresh, I created an image plane, added an Arnold texture and applied a UV planar unwrap. I then created a UV snapshot and opened this file within Photoshop.

Following the tutorial below helped me understand where my previous issues were lying, which was predominantly in the aiStandardSurface settings. When an alpha map is extracted from Photoshop and loaded into the ‘Geometry’ tab of the aiStandardSurface, the alpha channel is applied to the image within Arnold’s renderer.

Creating Alpha Channel
Applying Alpha Map to aiStandardSurface Geometry
Applying Alpha Cut Viewport
Unticking ‘Opaque’ to Show Planes in Render
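
Putting those steps together, the hook-up can also be made in Python. The sketch below is only indicative: the shader, texture and plane names plus the file path are placeholders, it assumes the MtoA plug-in is loaded, and it reflects my understanding of which aiStandardSurface attributes the alpha feeds.

import maya.cmds as cmds

# File texture carrying the drawing and its alpha channel (path is a placeholder)
shader = cmds.shadingNode("aiStandardSurface", asShader=True, name="treePlane_mat")
file_node = cmds.shadingNode("file", asTexture=True, name="treePlane_tex")
cmds.setAttr(file_node + ".fileTextureName", "sourceimages/tree_layer.tif", type="string")

# Colour drives base colour; alpha drives the Geometry > Opacity channels
cmds.connectAttr(file_node + ".outColor", shader + ".baseColor")
for channel in ("R", "G", "B"):
    cmds.connectAttr(file_node + ".outAlpha", shader + ".opacity" + channel)

# Untick 'Opaque' on the plane's shape so Arnold evaluates the opacity cut-out
cmds.setAttr("treePlaneShape.aiOpaque", 0)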

After following the tutorial, I was able to successfully apply and render the image plane within Arnold with the alpha cut applied. This allowed me to go on to create several render tests to understand how I will assemble and blend the 2D and 3D drawings together effectively.

Arnold Render View with Transparency

Below shows the initial render test with some of the stylised models I have created, and indicates some important steps for me to consider going forward next week.

  • Firstly, I think learning and applying the aiToon shader will help blend the models into the background more effectively.
  • Secondly, applying the earlier 2.5D tests, which used 3D model adaptations of my 2D drawings, will allow me to animate to the lip sync.
  • Assets such as the car model need to be created.
  • Further render tests are needed.
Images Rendering in Arnold

Abstract Lines and Colour

While not explored for my premise project, I have brainstormed potential background ideas that expand on the abstract nature of ‘fantastic’ environmental representation. I could adapt work by artists such as Kandinsky to create floating shapes and colours, accentuating the contrast between reality and subjectivity in the background of the animated performance.

Kandinsky
Malevich

Week 16: Pre visualisation and Environment Planning

A key part of my film is the secondary environment of the underground ‘graveyard’, which involves a lot of skeletal mesh. I initially had a lot of considerations to work through when starting this environment, as my first idea was to create the world as a separate Maya scene which I would model and cut to in post-production. However, I found that this would make it very difficult to sell an effective pan-down shot, so I decided to model this environment directly below my stage so that the two are part of the same scene file.

For this final shot, the lighting set-up proved much simpler than the former, as only two lights were really needed to convey visually what I wanted the audience to see. Using a primary frontal key light with the same intensity as one of the stage lights, plus a stronger focused spotlight on the hand, allowed for a dark and eerie setting with only one focal point.

Previs

As stated in the book 3D Animation Essentials, “Pre-visualization is a technique used in film and television that utilizes 3D animation to plan the pacing, cuts, and camera angles of a sequence” (Beane, 2012, p. 114). This step is an essential part of the 3D animation pipeline, as in a typical production it saves a lot of time and money by planning out the entire movie without the final quality and effort. This planning felt particularly useful for my film, not so much for the different camera angles, due to its flatter, less dynamic camera action, but for planning the timing and spacing of the characters’ dance movements against the sound and music.

Beane also stresses that the previs stage of production is a key time to understand where you are directing the audience’s eye; instead of using camera angles, I have used lights to pinpoint very clearly where the audience should be focusing (2012).

During the process of creating this pre-visualisation, I came across several issues which will need addressing in order to fix the finalised look of the film. The main ones I wish to address are the scaling and staging issues, as the skeletons appear much too large on the stage, to the point where several of their movements are hidden behind the curtain. In the next week, I will ensure that my character rig is scalable so I can resolve this issue without having to make any drastic environmental changes.

The second key issue I wish to fix is the overall look of the ‘underground’ area as I feel it looks a bit flat at present and lacks the environmental depth that will add drama and effect to the piece.

Going forward, I need to ensure my rig is finalised and functioning, making sure there are no issues with the parent constraints which allow detachable limbs. The last consideration for improvement is my main source of stylistic inspiration when it comes to the dancing, as I feel the motions highlighted in the previs did not have a clear enough contextual stance.

Animation Test

I then looked more contextually at dance moves heavily associated with the 1920s and 1930s. Watching the video From Ballroom to Broadway (1980), I gained a further, more contextual understanding of the Charleston’s historic relationship to theatrics and the theatre, and of how the two dancers move and react with each other on stage. Applying this to my characters and timing it to the music had a comedic effect which I enjoyed, especially the idea of mirror-image dancing between them. However, this dance has a certain level of complexity, and I am not sure how, or if, I will be able to fit it fluently within my animated piece, especially given that the characters are avoiding an obstacle throughout the motions.

How to do the Charleston Dance 1930s – YouTube

Shot List

In order to plan my animation and rendering efficiently over the next four weeks, I created a shot list in which I can track and break down the different shots I need to produce to finish this one-minute animated work. I have already made good progress with the introductory title card, which is completed and rendered. I have also rendered the shots containing the Hamlet reference (as highlighted in previous weeks) and they are ready for editing.

For the next week, I have already started shot 3, and I aim to have shots 3, 4 and 5 finished within the week.

References

Beane, A. (2012). 3D Animation Essentials. Indianapolis, Ind.: J. Wiley and Sons.

From Ballroom to Broadway. (1980). Footscray, Victoria, Australia: Contemporary Arts Media.

Workshop: Cinematic Animation

Today’s workshop covered staging and designing cinematic animation for action movies, and the task at hand was to block out the key essential poses that push clarity in the performance. Taking into account methods learned in previous animation workshops, I used silhouettes to clarify whether a pose is working in its most effective manner, considering aspects such as limb visibility. It is important to be able to clearly identify each part of the body within a pose for the best and most effective audience understanding.

For this task, we blocked out a Deadpool rig coming towards the camera and swinging a sword for dramatic effect. We also had to ensure that the camera was animated, to assist in selling the action-charged shot, as the more dynamic it appears, the more successful it proves to be.

Use of Silhouettes for Pose Clarity

Week 5: Character Animation Adaptation

Starting the first of my adapted animated interviews, I wanted to use this footage to explore my more ‘realistic’ animated portrayal, as I felt the content she was speaking about was more serious and fitting for my initial aesthetic idea.

What I found particularly interesting about this video is the subtlety of the movements. She does not move around drastically, nor are her mouth movements very pronounced; a lot of the expression is in the eyes, so I wanted to explore how to communicate emotion through eye movement. Since eye movements make up a large part of emotional facial expression, this should help with selling the performance (Tinwell, 2015).

Blocking

An essential part of the facial performance is the blocking of key jaw-bounce and viseme poses that are timed correctly with the audio. Given the context of using pre-recorded footage as a reference, I felt a direct copy might be considered insensitive, particularly to the subject matter at hand, which brings to attention one of the slight ethical dilemmas of the animated documentary and its re-enacted nature. Because of this, I tried to interpret the original footage into a somewhat realistic portrayal of the performance.

After blocking what Kenny Roy describes as the ‘core poses’, I tried to interpret the eye and eyebrow movements from the footage, but exaggerate them ever so slightly so the expression and intention are clearer. However, it came to my attention that they move too uniformly together, when they should be slightly offset to add more realism. The same can be said about blinking, a technique often used within CG animated films (particularly Pixar’s) to add a realistic quality. This is something I intend to add during the splining process.
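
One simple way to break that uniformity during splining is to slide one control’s keys a frame or two later than the other, so the brows and lids stop moving in lockstep. The one-liner below is just a sketch with a placeholder controller name.

import maya.cmds as cmds

# Shift the right brow's keys two frames later so it lags slightly behind the left
cmds.keyframe("browRight_CTRL", edit=True, relative=True, timeChange=2)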

Smoothing

Smoothing and splining the facial animation created leeway to add additional details, such as subtle cheek and nasal movements, to help further sell the mouth and eye movements. During this process, I also added some slight body movement, as even in a mid-shot the movement of the rest of the body needs to be considered. I think what draws back the realism of the shot is the lack of hair movement; however, the chosen rig did not have easily controllable hair, and the volume would have changed wildly and looked even more unconvincing. Looking into this as a future possibility, I have found a few hair-rigging tutorials that may be beneficial, and I may also look into using nHair for more detailed simulations in future.

Finished Shot

Referencing the animation into the modelling environment, I had to adjust and re-analyse the movement now that she was sitting down. One of the main issues again concerned the rig itself, as its IK switch did not work correctly, and therefore I could not convincingly place the hands on the table without them following the lower-body movement. In order to sell the shot better, I kept the hands on the table and animated the shoulders and arms with FK controls, to give the appearance that they weren’t just plainly following the body. I think this added largely to the unconvincing movement of the body itself, as I felt I could not move it too much without also affecting the arms. In future I will be warier of my rig choice, and when going forward with the documentary I will most likely model and rig my own characters so that the style is consistent.

References

Roy, K. (2014). How to Cheat in Maya: Tools and Techniques for Character Animation. Abingdon, Oxon: Focal Press.

Tinwell, A. (2015). The Uncanny Valley in Games and Animation. Boca Raton, Florida: CRC Press.

Week 15: Character Rigging

Render Tests

After my initial render test, I found that the image appeared flat: the lighting did not showcase any of the modelled background and there was no clear accentuation of the character and the skull. In the initial storyboard, I had drawn a dramatic spotlight falling onto the skull. Trying to translate this into 3D, I was able to create a much more effective and cleaner shot by researching and testing Arnold’s render sampling.

Test Render 1
Storyboard

Below is the final test render I was able to produce for this shot; thanks to the new and improved three-point lighting set-up, I was able to accentuate the background with a soft focus. By raising the sampling of the specular and SSS settings, I was able to drastically reduce the noise and create a cleaner, more in-focus close-up shot. While there isn’t a dramatic spotlight focused solely on the skull, I liked the idea of using the same spotlight that is present in the wide shots, so that the cut between the two does not appear so drastic.
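
The sampling changes themselves live on Arnold’s render options node and can be set in a couple of lines; the values below are illustrative rather than the exact numbers I used, and the attribute names follow MtoA’s defaults.

import maya.cmds as cmds

# Raise camera (AA), specular and SSS samples to clean up the noisy close-up
cmds.setAttr("defaultArnoldRenderOptions.AASamples", 5)
cmds.setAttr("defaultArnoldRenderOptions.GISpecularSamples", 3)
cmds.setAttr("defaultArnoldRenderOptions.GISssSamples", 3)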

Improved Render

Rigging

In previous weeks I had begun to place the joints within the skeleton rig; however, after attending a workshop which explained the process of joint placement in a cleaner and more professional manner, I decided to start the process from scratch. This included first addressing issues with my model, to make the skin-binding process less problematic later on.

Model Cleanup

The first issue I encountered was separating the fingers from the hand mesh so that they could rotate independently, which, due to the nature of the model, works with the ‘floaty’ and disconnected aesthetic created by the obvious lack of muscle.

Separating fingers from mesh for better rotation

The second aspect I needed to fix was the mesh grouping and overall mesh hierarchies, so that the parts move and function together as they would within a real human body. Thanks to the numerous issues I encountered during the collaborative module with the seagull models and rigs, I was able to learn and adapt from those mistakes (such as incorrect transformations) with a better understanding of rigging preparation.

Clavicle Rotation in Human Arm

Freeze Transformation Issue

When freezing the transformations on the different sections of the mesh, I had issues in which some parts of the toes would enlarge considerably, and I realised this was related to the mesh group’s history.

Mesh Enlargement

In order to fix these issues, I removed all the relevant mesh from the group and deleted the group in the Outliner. I then selected each individual piece of mesh, deleted its history and froze the transformations. Once regrouped, this removed the issue and everything functioned correctly.
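
The same clean-up can be run as a small loop over the selected pieces instead of doing each one by hand; the sketch below is a generic version of that fix rather than my exact steps.

import maya.cmds as cmds

# For each selected mesh piece: delete construction history, then freeze transforms
for mesh in cmds.ls(selection=True, long=True):
    cmds.delete(mesh, constructionHistory=True)
    cmds.makeIdentity(mesh, apply=True, translate=True, rotate=True, scale=True)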

Fixed Toe Mesh issues

Before beginning the rigging process, I ensured that all of the mesh worked hierarchically and was connected and cleaned correctly, so there would be minimal issues later down the line. This is especially important for my project, as I plan to use the unrigged model to animate the different limbs falling down the hole. For this to happen in a sensical way, I feel the main attaching joints that work hierarchically in the rig should also work hierarchically in the mesh groups.

Thinking about how the upper body rotates hierarchically

Joint Placement

Starting the joint-placement process for my character rig, I began with the spine, whose root in the pelvic area is the central point of the joint hierarchy. From here I manipulated the placement of each joint from the front and side viewports, but only by rotating the joints hierarchically, to avoid messy translations.

By placing locators at the various joint positions, there is a clearer indication of where the joints should be created and rotated, so I completed this process on both the arms and the legs and began to place the joints in a similar fashion to the above.

Placing locators for joint orientation accuracy and placement

Rotating Joints into correct placement

A key point we learned in class was to create clusters so that the central point of the mesh could be snapped to, enabling more accurate joint placement within the inner folds of the fingers and toes. This was an extremely useful tool that I will always consider during the rigging process going forward.
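
The trick itself is only a couple of commands: make a temporary cluster on the fold vertices, point-snap the joint to its handle, then delete the cluster. A minimal sketch, assuming the relevant vertices are already selected:

import maya.cmds as cmds

# The cluster handle sits at the centre of the selected vertices,
# giving a clean point to snap (hold V) the new joint onto
cluster_node, cluster_handle = cmds.cluster(cmds.ls(selection=True, flatten=True))
# cmds.delete(cluster_handle)  # remove once the joint has been placed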

Creating Cluster to Assist Joint Placement in Fingers

Technical Issues and Solutions

One of the key elements I had already pre-planned contingency time to figure out was the creation of a detachable rig. Initially, I had trouble deciding how to go about it. My first idea was to create a separately rigged leg and add it to the same group as the rest of the body; however, in doing so, the joints simply join the rest of the rigged hierarchy. Since this was not the intended outcome, I explored the use of parent constraints that allow a joint to follow the main hierarchical joint without being inherently attached to it. This way I could skin-weight the mesh to the joint, and have it detach from the model under the same control.
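
In script form the idea looks something like the sketch below. The joint names are placeholders, and keying the constraint weight off is just one way the ‘detach’ moment could be triggered; it is not necessarily how I will drive it in the final shots.

import maya.cmds as cmds

# The detachable leg root follows the hip via a constraint rather than being parented
constraint = cmds.parentConstraint("hip_JNT", "leg_detach_JNT", maintainOffset=True)[0]

# Keying the constraint weight from 1 to 0 releases the limb at the chosen frame
weight_attr = cmds.parentConstraint(constraint, query=True, weightAliasList=True)[0]
cmds.setKeyframe(constraint, attribute=weight_attr, time=95, value=1)
cmds.setKeyframe(constraint, attribute=weight_attr, time=96, value=0)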

Parent_Constrain to make Detachable

I also repeated this parenting process with the upper arms, so that they could be detached in a similar fashion, and once all the joints were functioning correctly hierarchically, I was able to start thinking about IK handles.

Completed Joint Placement

IK Controls

A key issue I found early on when creating the leg IKs is that they limited how detachable I could make the rig, as the lower legs no longer had the same individual control as before. However, the leg could still be detached where it joins at the hip, and I viewed that as a success given that full leg detachment would not be necessary for the vast majority of the film.

Functioning Leg IKs with pole Vector Controls

When creating the foot rig, we learned in a class workshop that by layering different groups together, each with a different function, you can create options such as toe taps and foot rolls. Applying this to my rig will prove useful long term, as a lot of my planned dances derive heavy influence from ballet, which requires the toes to be poseable independently for animation purposes.

Groupings with different functions

Issues and Solutions with Legs

One of the initial issues I encountered with the leg IKs came from exploring the use of two knee joints instead of one, in an attempt to make the legs detachable. My initial theory was that, with two joints at the knee, I could skin-weight the surrounding mesh of the femur and tibia (and consequently the fibula) to have 100% influence from each. While this functioned in FK mode, within IK it caused several issues at the knee, as the extra joint created an unnatural and strange bend that would not have worked. To fix this, I decided to keep the upper leg detachable, while the rest of the leg functions like a normal IK system.

One Knee Joint works more effectively with IK

Tutorial: Rigging an IK Arm in Maya – YouTube

In a similar light, I began to have the same issues with the arm IKs. The initial problem was that the arm IK would not bend when manipulated. My first thought was that perhaps I had used a single-chain solver; however, this did not appear to be the case. Looking at the tutorial linked above, I found that by adding an extra joint in the forearm, I could connect an IK handle from the upper arm to the new joint and the arm would bend correctly. It is also important to move the pivot of the IK handle to the wrist joint so that it is easily manipulable.
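
For my notes, the fix boils down to something like the sketch below: a rotate-plane IK handle running from the shoulder to the extra forearm joint, with the handle’s pivot moved to the wrist. The joint names are placeholders from my own naming rather than anything from the tutorial.

import maya.cmds as cmds

# Rotate-plane IK from the shoulder down to the added forearm joint
ik_handle, effector = cmds.ikHandle(startJoint="shoulder_L_JNT",
                                    endEffector="forearmExtra_L_JNT",
                                    solver="ikRPsolver", name="arm_L_IK")

# Move the handle's pivot to the wrist so it is comfortable to grab and key
wrist_pos = cmds.xform("wrist_L_JNT", query=True, worldSpace=True, translation=True)
cmds.xform(ik_handle, worldSpace=True, rotatePivot=wrist_pos, scalePivot=wrist_pos)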

Arm IK Joint Addition

The IKs within the arm, however, did not allow me to detach the joints effectively, and even with an IK/FK switch I realised that the task would be time-consuming and was not strictly needed, as the arms do not make contact with the ground or with objects that really require IK. With that in mind, I made a list of videos that will be useful for this pursuit when I find it necessary to learn and try.

Skin Weighting

I had initial issues in which the normal skin-binding options did not apply effectively to my skeleton character, as each joint needed to influence its mesh 100% in order to move like real bone rather than skin. To do this, I selected each joint and flooded its weights across the relevant areas (especially the detachable ones) in order to get the correct impression of movement.
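
The flooding itself can be done with skinPercent, setting every vertex of a bone’s mesh to full influence from its own joint. The sketch below uses placeholder skin cluster, mesh and joint names.

import maya.cmds as cmds

# Give the left femur mesh 100% influence from its joint so it moves rigidly, like bone
cmds.skinPercent("skeletonB_skinCluster", "femur_L_geo.vtx[*]",
                 transformValue=[("upperLeg_L_JNT", 1.0)])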

Fibula detachment due to incorrect skin weighting

Since I had initial issues in which the ribs were moving independently of the spine, I attached their influence to the part of the spine that connects with the clavicle and arms, so that they twist and move as a unit when it comes to placing the constraint controllers.

Flooding the rib influence to a spinal point so they follow correctly

Below highlights the skin weighting and how it functions with twists in the spine, which was the only part of the mesh to which I did not apply 100% influence, to allow space for mesh twisting and bending.

The skin weighting methods proved effective, particularly when creating detachable arms and legs, and I was eventually happy with the outcome and the result of my trial and error.

Removable Joints

Constraints/Controllers

The last step I had to cover was creating the joint controllers, as I had to ensure that they were attached correctly in order to manipulate the detachable elements without disturbing the skin-weight painting.

When adding controllers, I feel an important step as a rigger is to consider translation and rotation limits so that animators do not break the rig. In this respect, I added several limitations, primarily to the hip controller, so that the legs would not start reacting incorrectly and losing volume.
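
Those limits can be baked onto the controller with the transformLimits command; the sketch below uses a placeholder controller name and ranges, not my final values.

import maya.cmds as cmds

# Clamp the hip controller's translation so the legs can't be pulled into
# volume-breaking positions
cmds.transformLimits("hip_CTRL",
                     translationY=(-0.5, 0.5), enableTranslationY=(True, True),
                     translationZ=(-1.0, 1.0), enableTranslationZ=(True, True))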

Limiting Information

In spite of the issues I had with the leg controls, I eventually found I had the opposite problem with the arms, as the extra joint that separated the upper arm (humerus) and the lower arm (radius and ulna) was required for the correct skin weighting that makes them detachable. This part was essential for my shot animation, so after adding it I committed to FK animation in the upper body.

Bending Incorrectly, added extra joint

In the end, I was able to create a fully functioning rig that is capable of detaching itself as well as I could manage within the time constraints. While it is not perfect and could be explored much further in terms of IK/FK switches and more extensive facial controls, I feel the rig will suit the animation and animation style I intend to create going forward. Next week I will finally begin the animation process and will create some animation tests to push the rig’s limitations.

Completed Rig