I promised in my initial review of Fusion 9 to do some speed tests. Blackmagic claims that every tool has been rewritten to offer GPU acceleration with OpenCL. Naturally, that's of great interest to pretty much everyone, especially those of us with access to overpowered Titan X cards. Alas, the news is not so good.
An ongoing exploration of OpenCL Fuses for Blackmagic Fusion. In these articles, I describe my efforts to practice the lessons outlined in the Book of Shaders in the context of custom Fusion tools. These aren't tutorials, per se, but may serve as an introductory guide to the topic.
This is the fourth article in a series on OpenCL Fuse development for Blackmagic Fusion. I am attempting to convert the lessons from the Book of Shaders into working Fuses, learning a bit about programming and parallel processing as I go.
BoS devotes a chapter to "Shaping Functions," which are methods of modifying a gradient. Beginning with a simple linear interpolation (LERP), I'll build up several one-dimensional functions and experiment a little bit more with the control panel.
Since the Fuse is starting to turn into something interactive and fun to play with, I'm going to go ahead and provide the complete code:
That will also make it a little easier to follow along, since this article is over 2000 words long, and I don't want to make it even longer and drier by posting 600 lines of Lua.
The lines between 3D and 2D visual effects tasks are blurring more and more. Nuke and Fusion both have useful 3D toolsets built in, and After Effects users can use the powerful Element 3D plug-in. Most 3D programs have 2D tools built in, and some, like Blender and Houdini, are even capable compositors themselves. For now, 3D and 2D are still distinct roles in most facilities, but each new software release brings the two closer together. A compositor should therefore understand 3D systems in at least broad strokes.
This is the third article in a series on OpenCL Fuse development for Blackmagic Fusion. I am attempting to convert the lessons from the Book of Shaders into working Fuses, learning a bit about programming and parallel processing as I go. I have no doubt that I violate dozens of best practices, as I am entirely self-taught in this area.
This time around, we'll look at how to introduce temporal and spatial variation into the generated image by creating a mode where the image slowly pulses and one where the color is tied to the pixel's screen position, making a gradient.
I have just finished up two days of doing demos and answering questions at Blackmagic's booth on the expo floor at SIGGRAPH. I had a fine time meeting the people behind Fusion and learning a little bit more about the company. Not to mention all the compositors, editors, and graphics artists curious about the new software. There was a healthy mix of enthusiasm, skepticism, and curiosity.
In the previous article in this series, I introduced the notion of making a series of Fuses based on the lessons from Vivo & Lowe's Book of Shaders. I provided the template I usually start from and a brief rundown of how the parts work. Today, we'll look at the simplest of kernels. As pointed out at BoS, the customary first program people write when learning a new language is one that prints the message "Hello World!" to the standard output. Graphics programming is, by its nature, not well suited to printing messages of this kind, so this "Hello World" program instead fills the output image with a solid color. The OpenCL program is a little longer than the GLSL one given at BoS, but not much. Most of the added complexity comes from the interface between the Fuse and OCL.
- Getting Started
- The Basics
- The Interface and Tracking
- Rotoscoping and Keying
- Clean Plates
- Elements and Effects
- Multipass CG Compositing
- The 3D Workspace
- Cameras, Lenses and Sensors
- Customization and Pipeline
- Anatomy of an Image
- Image Arithmetic
3D: 1) Referring to spaces or objects: Having width, height, and depth. Set up a 3D scene in Maya.
2) Referring to a kind of movie: A stereoscopic viewing medium that gives the illusion of images with depth. This book uses stereo to refer to these kinds of films in order to reduce confusion. Gravity was one of the only movies I thought was really worth seeing in 3D.
Clip: A specific, short piece of video or film. Sometimes a segment of a scene, sometimes only a single shot, but always contained in a single video file or image sequence.
Comp: Short for Composite.
Composite: 1) Noun. An image made from more than one source. These sources can be multiple photographic elements, videos, or synthetic imagery. My composite is looking too dark, but when I brighten it the grain looks really bad.
2) Noun. The working document that produces the composite image. In Fusion, the file has a .comp extension. You need to organize your composite better; I can't tell which mask does what.
3) Verb. The act of creating a composite image. When will you be finished compositing that shot?
Compositor: 1) A skilled artist and technician who creates composite images. I am a digital compositor for MuseVFX.
2) Software used to create a composite, such as Blackmagic Fusion. Sometimes it's quicker to relight in the compositor than to send a shot back to 3D.
Composition: 1) Noun. A working document used to create a composite.
2) Noun. The artistic arrangement of forms within the frame of view.
3) Verb. The act of arranging forms within the frame of view.
Footage: Literally, the length of a segment of film. Colloquially, any piece of video or film of any length. The videographer is shooting some footage today.
Image Sequence: A series of numbered still images that create a video clip when viewed rapidly in sequence. Most visual effects software works most efficiently with image sequences rather than encoded video files. Render that Quicktime out to an image sequence to get better performance in Fusion.
That Voronoi Fuse I documented recently is really slow, and there are quite a few features that I would like to add to it. Unfortunately, each new feature will only make it slower, so I have decided to try once again to tackle OpenCL. This time, though, I'm going to do it a little more formally by converting the algorithms detailed in Patricio Gonzalez Vivo and Jen Lowe's The Book of Shaders (BoS).
The information in We Suck Less' recovered VFXPedia wiki is good, but there are some holes in it, and it assumes a certain level of expertise that I, frankly, do not have. Therefore, this series of diaries is intended to be a little more rigorous for the benefit of dabblers like myself. Enough with the introduction; let's get working!
Sometimes the effect demanded for a shot is more complex than what can be created in the compositor. Computer animated characters, sophisticated fluid and particle simulations, spacecraft, and detailed environments are more commonly created in separate, dedicated 3D software such as Maya, Houdini, or Modo, among others. These programs offer not just better modeling and animation tools, but much more powerful rendering engines. A render engine transforms information about 3D objects into images of those objects. Most 3D programs have their own built-in renderers, but many can also use add-on renderers like V-Ray and Redshift.
In addition to the finished image, the renderer can split the output into a variety of different buffers, or passes. Unlike photography, which can only record the combined light all at once, a renderer can record the surface color (diffuse), reflections, bounce light, shadows, and other properties of the light in separate images, which can then be combined in the compositor to construct the final image. While this may seem like a lot of extra work, it gives you a great deal of flexibility to exactly control the look of computer-generated imagery (CGI). In this chapter, we'll look at render buffers created by Houdini's Mantra render engine and learn how to use them to control the look of a CGI character.