Sometimes the effect demanded for a shot is more complex than what can be created in the compositor. Computer-animated characters, sophisticated fluid and particle simulations, spacecraft, and detailed environments are more commonly created in separate, dedicated 3D software such as Maya, Houdini, or Modo, among others. These programs offer not just better modeling and animation tools, but much more powerful rendering engines. A render engine transforms information about 3D objects into images of those objects. Most 3D programs have their own built-in renderers, but most can also use add-on renderers like V-Ray and Redshift.
In addition to the finished image, the renderer can split the output into a variety of different buffers, or passes. Unlike photography, which can only record the combined light all at once, a renderer can record the surface color (diffuse), reflections, bounce light, shadows, and other properties of the light in separate images, which can then be combined in the compositor to construct the final image. While this may seem like a lot of extra work, it gives you a great deal of flexibility to control exactly the look of computer-generated imagery (CGI). In this chapter, we'll look at render buffers created by Houdini's Mantra render engine and learn how to use them to control the look of a CGI character.
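The recombination itself is usually simple arithmetic: in a linear color space, most passes add back together, and scaling one buffer changes the look without a re-render. Here is a minimal NumPy sketch with made-up pass values; the pass names and the doubling of the reflections are illustrative, not Mantra's exact output:

```python
import numpy as np

# Hypothetical render passes as float32 image arrays (H x W x 3).
h, w = 4, 4
diffuse    = np.full((h, w, 3), 0.4,  dtype=np.float32)  # surface color under direct light
reflection = np.full((h, w, 3), 0.1,  dtype=np.float32)  # specular reflections
bounce     = np.full((h, w, 3), 0.05, dtype=np.float32)  # indirect (bounce) light

# In linear light the passes sum to the beauty image; doubling the
# reflection pass "relights" the shot without going back to the renderer.
beauty = diffuse + 2.0 * reflection + bounce
```

Because the adjustment happens in the compositor, a note from the client like "more reflections, please" becomes a slider tweak instead of an overnight render.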
In the previous chapter, we created some fairly sophisticated behavior using expressions. Setting up such systems takes some time, though, and if you build something that you like, you may wish to use it again later on, either for its own sake or as a component for something even more complex. Fusion allows you to create macros, which are collections of tools that can have customized controls and be reused. The Nuke equivalent is a gizmo. After Effects doesn't have a direct analogue, although both presets and templates fill something of the same purpose.
Fair warning: This chapter is going to get a little technical, and we'll even be writing a little code. I promise it won't hurt much, and when we're done, you'll be that much more valuable as a compositor!
In my efforts to expand my capabilities in Fusion, and in visual effects generally, I have taken my first steps into the creation of Fuses. A Fuse is a custom tool for Fusion written in Lua. It's sort of a halfway point between a Macro and a Plug-in. It's more flexible than a Macro because it doesn't rely on existing tools—so the Spline View won't be cluttered by a hundred LUT controllers from Custom Tools. Unlike a Plug-in, it doesn't require compiling, and it will run in the free version of Fusion 8.
This particular Fuse creates a Voronoi segmentation:
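A Voronoi segmentation assigns every pixel to its nearest seed point, producing the familiar cell pattern. The Fuse itself is written in Lua, but the core operation can be sketched in a few lines of NumPy (the seed positions here are arbitrary; a Fuse would expose them as controls):

```python
import numpy as np

# Minimal Voronoi segmentation: label each pixel with its nearest seed.
h, w = 8, 8
seeds = np.array([[1, 1], [6, 2], [3, 6]])  # (y, x) seed coordinates

ys, xs = np.mgrid[0:h, 0:w]
# Squared distance from every pixel to every seed: shape (n_seeds, h, w).
d2 = (ys[None] - seeds[:, 0, None, None]) ** 2 \
   + (xs[None] - seeds[:, 1, None, None]) ** 2
labels = d2.argmin(axis=0)  # index of the nearest seed per pixel
```

Coloring each label region, or shading by distance to the nearest seed, gives the cellular patterns you see in the segmentation image.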
Thus far we have talked a great deal about how to integrate footage together, how to manipulate it, and how to cut it apart and sew it back together in a new configuration. What about creating effects that the production didn't shoot?
There are generally two ways of approaching an effect that needs to be added to a shot. Usually the easiest and simplest method is to find some kind of prepared element. I doubt you will find a VFX shop that doesn't have several collections of such elements in its library. For my occasional freelance work, I have a subscription to Digital Juice and a collection from Video Copilot called Action Essentials 2. These provide me with enough stock footage and elements to cover most of my needs. If you have access to a decent camera and a place with controlled lighting, you can also frequently shoot your own elements. This can be as simple as making blistering vampire flesh with hot sauce and baking soda, or as complex as setting up a cloud tank.
The other way is to create custom effects, either in 3D software or in the compositor itself. Fusion has a powerful particle system that works in both 2D and 3D, as well as many image and pattern generators for creating unique effects that can be driven by your footage. In this lesson, we'll use an element to give James' gun a muzzle flash and a particle system to create smoke streaming from the barrel. We'll also add some interactive light on his face and hand to better sell the flash.
A quick little macro. A couple of days ago, I needed a gradient map, à la Photoshop. My brain was on the fritz all week, so I asked my excellent coworker Joe Laude for help. He did something with a Displace node that bent reality a little for me. I honestly still don't quite understand what's going on in his setup, although it definitely solved my immediate problem. After a couple of days of recharging, I figured out a more straightforward (though possibly slower) method using my favorite node: the Custom Tool.
The Problem: Assign a color from a linear gradient to an image based on the brightness of the pixels in an input map. I want to be able to manipulate the gradient with the same level of control that I get from a Background tool, mask it, and have a standard Blend slider.
The Solution: For each pixel, I need to evaluate its brightness and use that to pick a color from the linear gradient. As it happens, I solved this problem when I built my own Texture tool almost two years ago. The get function described there (look for the "Big Green Marker") does exactly what I want. All I have to do here is substitute a user-defined Gradient Map for the UV Map and a linear gradient image for the Raw Diffuse image.
Just to keep everything on the same page, as it were, in the Color Channels Expression fields, I place the following code (consult the Fusion Tool Reference, page 459, for more details on this function):
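Assuming the gradient image is connected to the Custom Tool's first image input and the brightness map to its second (that wiring is an assumption on my part; swap the input numbers if yours is reversed), the expressions take roughly this form, using the getr1b family of intool functions to sample Image 1 at the given coordinates:

```
Red Expression:   getr1b(r2, r2)
Green Expression: getg1b(g2, g2)
Blue Expression:  getb1b(b2, b2)
```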
That fetches the RGB information at x- and y-coordinates equal to the appropriate channel in the Map Input. Generally speaking, you should feed a grayscale image into the tool, but it can be biased by giving it a full-color map, which serves to preserve a little bit of the tint of the original image.
It's subtle, but you can see a little bit of the green from the bushes on the right behind Christina's shoulder, and some of the pink in her make-up. (Apologies to Christina for bogarting this pic—it makes me smile every time I look at it, so I often use it for testing workflows.)
After a day's thinking about it, this is much easier to accomplish by simply using the Texture tool. Just copy your brightness map into the UV channels. Put that into the Input and the gradient into the Texture input.
And then the kind people over at We Suck Less pointed out that the same thing can be done with the Fast Noise tool by plugging your brightness map into the NoiseBrightnessMap input, though it has to go through a Bitmap node first to convert it to a single-channel image. This method also allows some of the FastNoise's details to be mixed in.
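Whichever tool performs it, the underlying operation is the same: each pixel's brightness acts as a coordinate for looking up a color in a one-dimensional gradient. A rough NumPy equivalent, with illustrative gradient colors and a simple ramp standing in for the brightness map:

```python
import numpy as np

# A 256-entry linear gradient from blue to orange (illustrative colors).
ramp = np.linspace([0.0, 0.0, 1.0], [1.0, 0.5, 0.0], 256)

# A brightness map in the 0-1 range (here, a simple horizontal ramp).
brightness = np.tile(np.linspace(0.0, 1.0, 8), (4, 1))

# Each pixel's brightness indexes into the gradient.
idx = np.clip((brightness * 255).astype(int), 0, 255)
mapped = ramp[idx]  # shape (4, 8, 3): the gradient-mapped image
```

The Texture and Fast Noise approaches do this same lookup on the GPU-friendly side of Fusion, with filtering handled for you.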
Painting clean plates is another task commonly assigned to junior compositors. While roto is easy but tedious, paint can be very challenging. Every shot is different, and many will require you to use a variety of techniques. A shot might require paint work to remove wires or other rigs that were necessary for the shot but shouldn't appear in the finished work. Sometimes lighting or sound equipment is accidentally visible and needs to be removed, or a fantasy piece shot in the countryside has telephone wires visible in the background. Perhaps an effect calls for a character to dissolve into mist, and you need to create whatever set should have been behind them.
Although I use the term "paint," the Paint tool is usually not the first tool I reach for when doing this kind of work. If you attempt to touch up an image frame-by-frame by painting on it, you will almost certainly introduce chatter—pixels that dance and flicker—because it is virtually impossible to repeat a paint stroke from frame to frame, particularly if the image being painted is also moving. Transforms, warps, image filtering, and other methods are all used to supplement the Paint tool when the time comes to remove something from the frame. Let's get to work.
The creation of mattes from moving footage is a crucial, though sometimes tedious, part of making visual effects. It is a rare shot that does not require a matte of some kind. Let's first talk about what exactly a matte is and cover a few methods that can be used to generate one.
First, the words matte and mask are used mostly interchangeably, although matte is not often used as a verb. A matte is a single-channel image that is used to isolate a part of an image, either to prevent it from being changed or to restrict changes to only that area. You might come across the term "traveling matte," which simply means the matte image moves, probably to follow whatever object in the scene it is meant to isolate. You might also run across reference to an "articulated" matte or roto, which usually means a matte made up of several pieces that can move independently to cover something complex, like a person.
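In practice, a matte is just a per-pixel mix factor. Restricting a color correction to the matted area, for instance, is a linear interpolation between the corrected and original images. A toy NumPy sketch (flat gray values standing in for real plates):

```python
import numpy as np

original  = np.full((4, 4, 3), 0.5, dtype=np.float32)  # untouched plate
corrected = np.full((4, 4, 3), 0.8, dtype=np.float32)  # e.g., a brightened version

# Single-channel matte: 1.0 where the change should apply, 0.0 elsewhere.
matte = np.zeros((4, 4, 1), dtype=np.float32)
matte[1:3, 1:3] = 1.0

# The matte selects the corrected pixels inside, the original outside.
result = corrected * matte + original * (1.0 - matte)
```

A traveling or articulated matte simply changes that mix factor over time.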
Rotoscoping (roto, for short) uses splines, like the Polygon tool we used in the Basics lesson, to create outlines around a subject. It is a time-consuming job that is frequently assigned to junior compositors or dedicated roto artists in order to save the mid- and senior-level artists (and their higher salaries) for more demanding tasks. As such, it's an important skill to learn to do well and quickly if you want to show your worth to a visual effects studio.
Keying uses the color values in the image to try to separate the subject from the background. Although the most obvious use for keying is isolating actors on a green or blue screen stage, it can also be used on any object that is a different color from its surroundings. In the previous lesson, we could have used a keying operation to make a matte for the orange gun barrel and possibly skipped the tracking step entirely.
Before we get into the lesson on Motion Tracking, let's take a walk through more of Fusion's interface. We have already had an overview of the Viewers, the Flow view, the Tools view, the toolbar, and the playback controls. There are quite a few other views and functions that you will find very useful as you grow as a compositor. Let's start with all of those buttons at the bottom of the Viewer:
Starting at the upper left, we have the SubView button, labeled "SubV." This button turns on an information panel overlaid on the Viewer, as you can see in the upper-right corner of this image. Right now, it's set to display a waveform monitor, a very useful tool for viewing the luminance range of your image. The small triangle next to the button activates a pop-up menu, which you can use to choose the specific SubView you want to see.
As of the publication of this post, I have $81,411.15 in student loan debt, courtesy of the Art Institute of Colorado (whose parent organization, Education Management Corporation, has settled at least one large lawsuit alleging student loan fraud). The debt paid for two years of tuition, supplies, room and board at a private, for-profit art school and was supplemented a bit by grants and scholarships. My back-of-an-envelope calculations suggest that a four-year degree at a state University would cost a similar amount. A student attending an in-state public University and living with their parents would spend about $36,000. A student living in a dorm at an out-of-state public University could spend upward of $135,000¹. So $80,000 does not represent an atypical debt for a student graduating within the last five years. I therefore consider myself well situated to talk about this topic.
It might be expected that I would be all for any plan to forgive student debt. Who wouldn't want $80,000 added to their net worth in one swell foop? The trouble is that such a plan just moves beans from one bowl to another. It relieves borrowers of their responsibility for their choices (bad, in my opinion) and potentially serves to increase the federal budget deficit (bad, in almost everyone's opinion). It would also spur an even bigger money grab by certain Universities. It's a bad idea, and not one that I see ever making it through the legislature, even if the Democrats had locked up the election.
I think, however, that there are some very fair ways of helping to reduce the burden on students.
The following article is a chapter in a forthcoming compositing textbook. There are references to chapters and appendices that have not yet been written. As those chapters are completed, I will link to them. For now, please be patient.
Last time we covered importing footage with Loaders, handling color space with the Gamut and CineonLog tools, the Viewports and Time Ruler, and rendering out the finished product with a Saver. There wasn't much in the way of actual finished product to be had, though, so this time around we'll construct a simple composite and learn about some of the most important tools in the compositor's kit: Merges, Color Corrects, Transforms, and Masks.
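The Merge, for example, defaults to the classic "over" operation on premultiplied images: the foreground covers the background in proportion to its alpha. In NumPy terms, a sketch of the math (not Fusion's actual implementation):

```python
import numpy as np

# Premultiplied foreground RGBA and a background RGB (one pixel each).
fg_rgb   = np.array([0.4, 0.2, 0.1])  # already multiplied by alpha
fg_alpha = 0.5
bg_rgb   = np.array([0.0, 0.0, 1.0])

# The "over" operation: foreground plus background scaled by however
# much of the pixel the foreground does not cover.
out = fg_rgb + bg_rgb * (1.0 - fg_alpha)
```

Keeping that formula in mind makes the Merge tool's Apply Modes and its subtractive/additive slider much less mysterious when we meet them in the lesson.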