Sunday, September 14, 2008

Linear Workflow for Maya - Mental Ray

This is a rather large one... I hope you don't get too bored reading it;)

Since my platform is Maya/MentalRay... this will focus on that workflow. I'm sure users of MR in other apps can make good use of the info though, with a bit of modification.

The basic idea is simple: get rid of gamma correction entirely while working in 3D, except for the preview stage. In this workflow, gamma correction is only temporarily applied during preview renders so you can see them correctly on your monitor. When you render final output for compositing, you nix the viewing gamma correction and render in linear space to a linear format like exr, hdr, or tif (32-bit float). You need the gamma correction while previewing since your monitor responds non-linearly. Usually a simple gamma of 2.2 on the output will correct just fine, but you can use any tonemapping method available to you if you like. Often people use 1.8 for a slightly more film-like response, so you can vary the preview gamma value a bit. I'm using a simple exposure correction node on my camera in Maya to apply a gamma of 2.2.
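
To make the preview correction concrete, here's a minimal sketch (plain Python, outside of Maya) of what a 2.2 viewing gamma does to a linear render value. The function name is just for illustration; the math is the same curve the lens shader applies.

```python
# Minimal sketch: applying a 2.2 "viewing" gamma to linear render values,
# as the preview lens shader does. Values are normalized 0..1 floats.

def linear_to_display(value, gamma=2.2):
    """Encode a linear-light value for a gamma-2.2 monitor."""
    return value ** (1.0 / gamma)

# A linear value of about 0.218 displays close to mid-grey (0.5) after correction:
print(round(linear_to_display(0.218), 3))  # -> 0.5
```

This is why an uncorrected linear render looks too dark on screen: the monitor effectively undoes this curve, so without the preview gamma your mid-tones get crushed.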

I'll be using exr files for output, as they strike a perfect balance for CG compositing between file size and dynamic range. They work in a 16-bit float space. This is NOT the same as the 16-bit int you may have worked with in Photoshop. Those files still store data as gamma-encoded RGB integers; they are limited in dynamic range and are not the same as a 16-bit float exr. Float files are stored differently, and are capable of much higher dynamic range. Without getting into the specifics of the file format, I can quote Christian Bloch's "HDRI Handbook" and tell you that 16-bit float exrs are capable of about 1 billion colors independent of the exposure. The exposure can span about 32 EVs of dynamic range (stops, for the photographers out there). For comparison, a common hdri photograph taken from inside a room looking out a bright window might consist of around 17 or 18 EVs. That's a lot of dynamic range. A typical 8-bit image covers about 6 EVs. The human eye, at any one time (not accounting for adaptation), can see about 14 EVs. So the exr format is very well suited for linear light storage and manipulation. Sorry for the tangent... but it was a useful one, I think.
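
A quick illustration of the float-vs-int point, using Python's built-in half-precision packing (the same 16-bit float layout exr's "half" uses). An 8-bit integer channel clips everything above 1.0, while a 16-bit float round-trips bright highlight values intact:

```python
import struct

# Sketch: why 16-bit *float* (exr "half") differs from 16-bit/8-bit int.
# A half float can round-trip values far above 1.0, preserving highlights;
# an 8-bit channel clips everything to the 0..255 (i.e. 0.0..1.0) range.

def roundtrip_half(x):
    """Pack a value as a 16-bit half float and unpack it again."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

print(roundtrip_half(1000.0))  # a bright highlight survives intact -> 1000.0
print(roundtrip_half(0.25))    # mid-range values survive too -> 0.25
print(min(int(1000.0 * 255), 255) / 255.0)  # 8-bit clips this highlight to 1.0
```

That headroom above 1.0 is exactly the extra dynamic range the post is talking about: exposure changes in post just rescale the data instead of revealing clipped whites.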

Now we get into some specific setups. At the moment, the easiest way to begin is to use the new "physical sun and sky" system. While I won't turn this post into a sun and sky tutorial, this system really forces the user to work in a linear way; since the system is physically-based. You'll find the creation option in the mental ray render globals at the bottom, under the "Environment" rollout.

When you create a physical sun/sky, you'll have an overall multiplier. I often turn this down a bit, since the default can be a bit on the high side. Other settings are not important here, but of course fine tune the look of the sun/sky system.

To work in a linear workflow, you'll want to output float images, so change your Mental Ray framebuffer to "RGBA (float) 4x32bit" (again, in your mental ray render globals). You should also change the Gamma setting here to .45. I'll explain. You may have tried rendering a physical sun/sky system without making these changes. If so, you should have noticed how washed out the textures became, since they are now being used as linear-space textures. The gamma that is encoded into them (2.2) must first be removed for them to function properly in the linear space that sun/sky works in. This is a much different way of lighting than traditional cg spots;)

The Gamma setting seen above removes the encoded gamma from all textures in your scene. It does so by correcting them based on a known general rule: that bitmaps are encoded with a 2.2 gamma for screen display. See previous posts for more on that. For them to work in linear space, they are corrected with the inverse of 2.2, which is .45 (more precisely, 1/2.2 ≈ .455). If you render with these settings, your textures should render with the gamma/brightness/saturation that you would expect.
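
The inverse relationship is easy to verify with a little arithmetic, outside of any renderer. Encoding with exponent 1/2.2 and then decoding with exponent 2.2 gets you back exactly where you started, which is all the .45 framebuffer setting is doing to your bitmaps:

```python
# Sketch of the texture round trip: a bitmap pixel was *encoded* with
# gamma 2.2 for screen display, so raising it to 2.2 (the inverse of the
# 1/2.2 ≈ 0.455 encode) recovers the original linear-light value.

def decode_texture(encoded, gamma=2.2):
    """Undo the 2.2 encoding baked into a typical bitmap."""
    return encoded ** gamma

encoded = 0.5 ** (1 / 2.2)                 # a linear 0.5, as stored in a bitmap
print(round(decode_texture(encoded), 3))   # back to linear 0.5
```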

Note: color swatches in Maya assume you are working in a gamma 2.2 space, so they will require manual adjustment with gamma nodes. By default, they will render incorrectly without such attention. See the Linear Workflow Addition 1 post above for more detailed info.

When you created the "physical sun" system, Maya did something behind the scenes to help with a linear workflow. If you graph your camera, you'll see that Maya hooked several new nodes to it: a sun direction, a physical sky, and an exposure simple node. The important node for us is the exposure simple node. It is a lens shader, and it remaps the rendering from linear to a gamma-corrected image so you can preview output accurately. By default, it is set to a gamma of 2.2, which corresponds to the response of your monitor. So Physical Sun and Sky automates a step for you. If you don't use the physical sun system and still want to work in a linear workflow, you must remember to manually add an exposure lens shader to your output camera. The other exposure lens shader, as of Maya 2008, is "mia_exposure_photographic." It's outside our current discussion, but if you know about photographic adjustments, it should be fairly intuitive to use. We'll stick with a simple gamma adjust for now.

So we are now in a proper "linear preview" workflow. I'll explain a decent working method. The gain control is an exposure control; it does the equivalent of changing f-stops on a camera. If you want a more photographically friendly interface to exposure control, you can replace the "mia exposure simple" lens shader connection with "mia exposure photographic."
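
The f-stop analogy is just powers of two: each stop doubles (or halves) the light. A tiny sketch of that mapping, with illustrative names:

```python
import math

# Sketch of exposure gain as f-stops: each stop doubles the light,
# so a gain multiplier maps to stops via log2, and vice versa.

def gain_for_stops(stops):
    """Multiplier equivalent to opening up by `stops` f-stops."""
    return 2.0 ** stops

print(gain_for_stops(1))    # +1 stop doubles exposure -> 2.0
print(gain_for_stops(-2))   # -2 stops quarters it -> 0.25
print(math.log2(8.0))       # a gain of 8 is +3 stops -> 3.0
```

So nudging the gain up from 1.0 to 2.0 is the same creative move as opening the lens one stop.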

The gamma controls the output gamma for preview purposes. 2.2 is fine for a start on most monitors. Changing this is a great way to fine tune the overall response and look of your image; 1.8 is sometimes used, as it may give a more filmic response/look. You are free to adjust this within reason to achieve different looks. Pedestal, Knee, and Compression will have to wait for another tutorial, but they adjust the white/black points of your image, etc. Most of this is better done in your image editor as final fine-tuning anyway. Getting an image close enough is fine, since we are lighting linearly and will have a great deal of control in post, where it's easiest.

As a general workflow, I get my light positions and basic intensities set without too much fussing over perfection, then use the gain shown above to fine tune exposure. Obviously you must still pay careful attention to your light settings, such as sample rates, since I'm sure you'll be using advanced features for them.

Also worth mentioning are light settings in a linear workflow. The physical sun/sky is made more friendly with a multiplier normalized to 1. Pretty easy. But when you create new standard lights, you should make them physically accurate. The correct MR way of doing this is to add a physical light shader to your light. After doing this, your light settings are "taken over" by the shader. The intensity is now controlled by the color slot in the physical light attributes. When you double click on the color swatch, you'll see that the "value" has been given a large number (1000, I think). This is a start, to compensate for realistic light falloff over distance. You'll find that much larger values (like 100000) might be needed for more distant lights.
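
Those big intensity numbers make sense once you see the inverse-square falloff that physically accurate lights obey. A quick sketch of the arithmetic (units here are arbitrary, just for illustration):

```python
# Sketch of inverse-square (physically accurate) falloff: received intensity
# drops with the square of distance, which is why a physical light shader
# needs a large raw value (e.g. 1000) to illuminate anything at a distance.

def received(intensity, distance):
    """Light arriving at `distance` units from a point source."""
    return intensity / (distance ** 2)

print(received(1000, 10))     # 10 units away  -> 10.0
print(received(1000, 100))    # 100 units away -> 0.1 (far too dim)
print(received(100000, 100))  # crank the raw value for distant lights -> 10.0
```

A traditional CG light with no decay sidesteps this, which is exactly why it doesn't behave believably in a linear, physically-based scene.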

Once you get previews you like, it's time for final output. Remember that when you output to an HDR linear format such as .exr, you should remove the output gamma correction. This is in the lens shader "mia exposure simple": the 2.2 value should be set to 1, so that no correction is applied. When you bring the final images into your image editor/compositor, you should manually apply a gamma correction such as 2.2 to get your image into proper monitor viewing space. Now the effects you add to these float images will be more accurate: motion blurs, lens fx, glows, you name it. Provided your compositor (such as Fusion 5) is float capable, you can make use of some very accurate fx.
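
Here's a small demonstration of why those effects come out more accurate in linear space. Averaging pixels (the core operation of any blur) gives a visibly different answer depending on whether you do it before or after the gamma encode, assuming a black pixel next to a white one and a 2.2 display gamma:

```python
# Why float/linear compositing matters: averaging (the heart of any blur)
# gives different results in linear light vs gamma-encoded values.
# Assumed setup: pure black (0.0) beside pure white (1.0), display gamma 2.2.

black, white = 0.0, 1.0

# Correct: average in linear light, then encode for display.
linear_avg = (black + white) / 2.0
print(round(linear_avg ** (1 / 2.2), 3))   # ~0.73 -- what a real lens blur does

# Wrong: average the already gamma-encoded values.
encoded_avg = (black ** (1 / 2.2) + white ** (1 / 2.2)) / 2.0
print(round(encoded_avg, 3))               # 0.5 -- edges come out too dark
```

Gamma-space blurs and glows darken every bright/dark transition this way, which is the "CG look" a linear pipeline avoids.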

Note: I've spoken to some in the industry who work in linear workflows, and they have mentioned that using .45 in the renderGlobals/framebuffer can lead to problems. They say that .45 is too general, and can lead to incorrect gamma compensation. Also, gamma correcting the alpha or other mask channels can make for incorrect transparency edges. I would agree in very specific workflows where gamma is being very carefully controlled. Their solution is to leave the framebuffer gamma at 1, therefore not globally correcting for texture gamma. They then insert a gamma node after every texture node in the hypergraph, and correct each one individually with known values. .45 might be used in many of these gamma nodes, but not where it isn't wanted (like mask channels, or textures with a different known gamma encoding).

I have found this texture-by-texture gamma workflow troublesome as well, since it requires much more setup time, and really only works well when you have scripts to help you insert, remove, and edit gamma nodes en masse. Having all these gamma nodes added can also severely reduce hypershade performance, to the point where it lags for minutes. You can solve that with a script to toggle the hypershade updates off temporarily; you can find one I've used for that at the djx blog; look for "hypershadePanel.mel." In the end I've settled on a hybrid approach. I'll use .45 in the framebuffer, which works for most textures. When I come upon a mask or texture situation where I need different settings, I'll add a gamma node to those textures only, with values that compensate for the difference. It vastly cuts down on nodes, and allows for specific control when you need it.
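
The logic of the per-texture approach boils down to something like this sketch. The `is_colour` flag is a hypothetical stand-in for however your pipeline tags colour maps versus data maps (masks, alphas, displacement), which is the distinction the individual gamma nodes encode:

```python
# Sketch of selective linearization: decode colour textures to linear,
# but leave data maps (masks, alphas, displacement) untouched.
# `is_colour` is an illustrative flag, not a real Maya attribute.

def to_linear(value, is_colour):
    """Remove the 2.2 encoding only from colour textures."""
    return value ** 2.2 if is_colour else value

print(round(to_linear(0.5, is_colour=True), 3))  # colour map: decoded -> 0.218
print(to_linear(0.5, is_colour=False))           # mask: left alone -> 0.5
```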

Another note: studios that work linearly often have pre-render scripts to automatically toggle the lens shader gamma to 1, so the user doesn't need to remember to do it after all the preview work. These are often applied only when a render is submitted to the farm. Very smart. Thanks here to TJ Galda for insight on studio workflows.

Good luck folks. I hope to update this a bit more with some detailed posts. It is a large subject. Hopefully this gets you working. Let me know if you have any specific questions and I'll do my best to clarify.

21 comments:

Jason Huang said...

Thank you Andrew and djx. You guys made my weekend with loads of linear workflow info for Maya and mental ray, which many Maya users will highly appreciate.

Zeth said...

Thanks for the post. I've been reading about this stuff all weekend. Quick question tho. . .
Everybody seems to do this a bit differently. Is there a difference between leaving the correction at the framebuffer and removing it from the mia_exposure (as you recommend) or would it be the same to leave the mia_exposure at 1.8 or 2.2 and just render with the framebuffer at 1? It seems like that would give more control and be an easier number to manage (not the inverse of the gamma).

Andrew said...

As I understand it, the framebuffer gamma has no effect on the output of your image; it only affects the gamma of your textures. Any adjustment of the framebuffer gamma will affect only the look of your textures (and colorSwatches). Leaving it at 1 will make bitmaps with encoded gamma (most have it) look bright and washed out; at .455 they will look essentially correct; lower, and they become darker and more saturated. The lens shader gamma is what pushes or pulls the viewing gamma of the global image. Turn it to 1 to remove all gamma correction when outputting linear images; put it at around 2.2 for previewing images as you're working.

Andrew said...

To clarify... any time you use color swatches (almost always) or textures with encoded gamma, you should set your framebuffer gamma to .455 in order for those colors to look correct, no matter if you use lens shader gamma correction or not. As an alternative you could apply gamma inverse correction to each texture manually, as mentioned above. Some places do this. I haven't found that effort very useful.

Jason Huang said...

Andrew,
As I understand it, the reason you think the framebuffer won't affect the output image is because you render to a float image. Mental ray won't touch float images on either input or output. When I render to an 8-bit or 16-bit LDR format using the framebuffer linear workflow, I get the same look in Maya's Render View as when I use the lens shader linear workflow. Meaning that setting the framebuffer to 0.455 brings the textures from gamma 2.2 space to 1.0 linear, and also "lifts" the output LDR image to gamma 2.2 space (the monitor space) for preview. I could be damn wrong and would love to be corrected.

Zeth said...

Ahhh. THAT makes a lot of sense when I look at different methodologies... I couldn't quite reconcile Floze's explanation with everything else I'd learned. He (she?) suggested using the framebuffer INSTEAD of the gamma correct nodes. Since I've used the GC nodes, I couldn't figure out the relationship to the framebuffer, or why it would be necessary. But using the FB will REPLACE using GC nodes on each texture (but will eliminate the need for the lens shader gamma). Do I have that right?

Andrew said...

Jason, it seems you are correct that when rendering 8-bit (gamma encoded) output, the framebuffer gamma handles the global adjustment as well. Good call. That said, when referring to a "linear workflow" I am assuming that your goal is to output a float/linear image. While I'm not 100% sure about Floze's reasoning for removing the camera gamma, I think it's in reference to the final image. Final linear output should absolutely have the camera gamma nullified (set to 1.0). Preview work will look incorrect (too dark) at that setting, however.

Jason Huang said...

I think in Floze's tutorial, he removes the camera's gamma because he is using the framebuffer to achieve the linear workflow. In that workflow, one doesn't need a lens shader to "lift" the output image to the gamma 2.2 domain for preview in Maya's Render View while test rendering in an LDR format. (If you are rendering to a float format during the test-rendering stage, you can set the camera's gamma to 2.2 for preview.) But Floze also uses MR's sun and sky system, which automatically connects a lens shader to the camera with a default gamma of 2.2, so he has to set the camera's gamma to 1, otherwise the render will look washed out. (He is test rendering in LDR format, as I recall.)

Zeth said...

Thanks. Great info guys. Appreciate it.

pixelvapour said...

Can you confirm that the frame buffer gamma corrects both colour swatches and file textures? From my tests it only seems to correct file textures. Colour swatches need a gammaCorrect node attached to linearize them.

Andrew said...

I'll confirm that you are correct, and I will update my post accordingly. Color swatches are NOT adjusted by the framebuffer setting; you will need to correct them with individual gamma nodes. This is perhaps why studios commonly use individual gamma nodes on everything. I would still limit them, by letting the framebuffer handle textures and handling swatches manually. Good stuff. Thanks for the post!

Anonymous said...

mhmm.. if you leave the gamma on the mia_exposure at 2.2, you're baking in your gamma and therefore you're not in linear space anymore... or am I missing something completely? Should be set to 1.0, me thinks..
cheers
j

Anonymous said...

nice blog tho'.. sorry +)
J.

Andrew said...

I mentioned in the post, that you should remove/negate the 2.2 lens shader to 1 when you are rendering final output. That will avoid baking in the gamma. 2.2 is only used during render previews.

Anonymous said...

ehemmm.... sorry +(

Anonymous said...

So if I had simply a lambert with the colour swatch set to 128,128,128... where would I plug in the gamma node???

Andrew said...

You would use a utility node to specify the color, like blend colors, then a gamma node, and hook that into the color slot of the lambert. Inserting gamma nodes can be sped up with the script I posted.

Jordan said...

"Andrew said...
you would use a utility node to specify color... Like blend colors. Then a gamma node then hook that into the color slot of the lambert. Inserting gamma can be sped up with the script I posted."

There's no need for an extra node; just put your colour into the 'value' channel of the gammaCorrect node. Alternatively, if you want to sidestep the gamma issue with swatches, you can select your colour in the primary shader (in HSV as usual), change the setting to RGB 0 to 1, then take your calculator, find the square roots of each of the RGB channels, and input those instead. This approximates what the gammaCorrect node does while cutting down on node counts.
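
Jordan's calculator trick works because a square root is a gamma-2.0 encode, which sits close to the exact 1/2.2 curve. A quick sketch of how close the approximation is:

```python
import math

# Sketch of the calculator trick: sqrt of each RGB channel is a gamma-2.0
# encode, a rough stand-in for the exact 1/2.2 ( ~0.455 ) curve.

value = 0.25                 # a channel value, 0..1
approx = math.sqrt(value)    # what you'd type in after the calculator -> 0.5
exact = value ** (1 / 2.2)   # what a gamma node would actually produce

print(approx)                # 0.5
print(round(exact, 3))       # ~0.533 -- close, but not identical
```

So it's a fair eyeball shortcut for swatches, with a small error that grows toward the dark end of the range.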

Wounded Fox said...

All the solutions offered here seem very bothersome to me. Maya is probably the most widely used CG package to date and it has no decent gamma correction system.

Being primarily an XSI user, I find this very taxing. In XSI, every image can have its colour profile specified without adding any additional node. Also from the global settings, you can set a default profile, such as sRGB, to all new images that are added. Later on you can simply manually change the HDR images and displacement maps to the linear profile.

Since XSI, just like other 'modernized' CG packages, has viewport and material library gamma correction, people do not have to set mia_exposure_photographic to temporarily lift the gamma for preview renders. It can always be left to 1 and viewport gamma can be set to 2.2, which is a much more sensible way to work.

I wonder why people spend so much more money to stay on a so much more primitive system!

Andrew said...

Somewhat agreed. Maya is seriously behind in this respect, but it is such a ubiquitous tool that a few issues have to be stomached. That said, I'd like to post an update soon that simplifies the process: basically, you simply use linear images without gamma baked in. My recent move to vray also simplifies things. The addition of color profile adjustments right in the file node is a no-brainer though; I don't know why they haven't done that. Even 3dsmax has a better linear workflow. XSI sounds great, but if I'm going to learn a brand new package, I might avoid Autodesk altogether and go with Houdini.

mag round said...

Agreed, there has to be a perfect balance for CG compositing between file size and dynamic range; that makes for good results with cgi backplates.