Unified Sampling – Visually for the Artist

(originally posted by elementalray.wordpress.com)

Unified Sampling Visually Explained and Practices

As a primer for using Unified Sampling, look here: Unified Sampling

Unified Sampling is QMC (Quasi-Monte Carlo) image synthesis.

Basically: taking samples based on a QMC pattern and a decision-making process.

How can this benefit your work and how can you take advantage of it?

Comparing stochastic (random) and QMC sampling patterns, you can see the benefit in how QMC avoids clumping and spreads samples out across the image to catch details. (image) The algorithm can also control how these samples are scattered (stratification).
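To build intuition, here is a minimal MEL sketch contrasting purely random sample positions with stratified, jittered-grid positions inside a single pixel. This is not mental ray's actual sampler; the procedure names and layout are illustrative only.

proc printRandomSamples(int $n)
{
    // Uncorrelated random points: free to clump and leave gaps.
    for ($i = 0; $i < $n; $i++) {
        print (rand(1.0) + " " + rand(1.0) + "\n");
    }
}

proc printStratifiedSamples(int $cells)
{
    // One jittered point per grid cell: coverage is guaranteed,
    // similar in spirit to how QMC patterns avoid clumping.
    float $step = 1.0 / $cells;
    for ($i = 0; $i < $cells; $i++) {
        for ($j = 0; $j < $cells; $j++) {
            print ((($i + rand(1.0)) * $step) + " " + (($j + rand(1.0)) * $step) + "\n");
        }
    }
}

printRandomSamples(16);    // 16 random points, possibly clumped
printStratifiedSamples(4); // also 16 points, but spread evenly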

The rendering equation and the complications of a real scene pose a multi-dimensional problem that Unified Sampling helps resolve through a single control called “Quality”. This process operates not only in the space dimension (raster space) but also in time (motion blur).

So how do you use Quality? What changes occur when you increase Quality?

Quality increases will cause the sampler to concentrate more samples where it perceives the most error. So here I will introduce you to the Diagnostic Framebuffers. For this first explanation we will pay attention to two layers in the diagnostic: Error and Samples. You can invoke this feature the way you used to: check the “Diagnostic” box in the Render Settings. (image) Except now mental ray will generate a separate EXR file in the following directory on Windows: [User Directory]\projects\project_name\renderData\mentalray\diagnostic.exr
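If you prefer script, the same checkbox can be toggled in MEL. The attribute name below is an assumption carried over from the older sample-diagnostic workflow, so verify it on the miDefaultOptions node in your version:

// Assumption: the Render Settings "Diagnostic" checkbox maps to this
// attribute on the miDefaultOptions node (verify in your Maya version).
setAttr "miDefaultOptions.diagnoseSamples" 1;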

Open the .exr in imf_disp.exe. Under the Layer menu select the mr_diagnostic_buffer_error layer. (Alt-2)

Several things to note:

  • Error is per channel (RGB)
  • More error is a higher pixel value
  • Mousing over a pixel will provide you with the error value for the pixel (bottom left of the imf_disp window shows values)

You will notice the perceived error centers around noise and contrast changes as well as areas where geometry might meet in raster space (on the screen).

Now what would happen if you increased your Quality? (image)

A further increase? (image)

Notice that the areas with the most perceived (or calculated) error are eroded first. This makes sense: you want to resolve those areas without wasting time on areas that are relatively noiseless. The buffer also gets progressively darker as error values decrease.

Now look at the mr_diagnostic_buffer_samples layer. (Alt-3)

It’s white/blank!?

This is an EXR, and the values for samples are integers (whole numbers). If your samples min is 1.0, then your values will begin at 1.0 (white) for the buffer value ‘S’, which is samples. So lower the exposure in the top right-hand corner. (image)

I find that -6.0 is a good place to start. Now you should be able to see a grayscale representation of your image. Mouse over these pixels for the value of ‘S’. You can drag the mouse pointer while holding “alt” (Windows) to change zoom levels on the fly in imf_disp. (The versions on the blog are .png or .jpeg for obvious reasons; these values don’t exist in the web examples.)

Notice these things:

  • Your samples should increase around the areas where the error buffer eroded the most error in the frame.
  • With a samples max of 100 you might not have any pixel with a sample rate of 100 if your Quality did not dictate it (Quality might not have found it necessary at the current level).
  • Your sample values are linear. Unlike other implementations of QMC sampling, they are not exponential (4, 16, 64, 256). This means more efficiency: instead of jumping from 16 samples to 64, maybe the pixel only needs 23 samples. You avoid over-sampling in large jumps.

What does this mean for tuning a scene?

This means that all you really need to tune an image is the Quality control. With a wide sample range you can push Quality around without sacrificing efficiency.

This has an added bonus: since your sampling is not a rigid function of samples, you can be assured that frame to frame changes in an animation will have a consistent level of quality to them. Even if a new complex object enters the frame, Unified Sampling will compensate accordingly without you needing to change sample rates.

You now have a consistent level of image quality for shading and anti-aliasing. Once you have chosen your desired Quality you can render unattended and return to correct results. (Less tweaking for quality issues and more time sleeping, my favorite hobby.)

So why do I need Samples at all?

Great question, and honestly there may come a day you won’t see a samples control at all.

Samples gives you the opportunity to fine-tune a particularly difficult scene or object. You can indeed control individual objects with a sample override. Keep in mind that these values are now literal and linear, not an exponential power function like before. These overrides can also lie outside the sample limits of your scene settings for extra flexibility. (image)

For scenes with a complex or noisy effect, this can give you some added control.
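As a sketch only: in mental ray for Maya, such overrides are exposed per shape node. The attribute names below are hypothetical placeholders for illustration; check the mental ray section of your shape node for the actual override attributes.

// Hypothetical attribute names, for illustration only.
setAttr "noisyObjectShape.miOverrideSamples" 1;  // enable the per-object override
setAttr "noisyObjectShape.miMinSamples" 4;       // literal, linear values...
setAttr "noisyObjectShape.miMaxSamples" 200;     // ...which may exceed the scene's samples max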

How will this help me with motion blur or depth of field (DOF)?

Motion blur and DOF are really just noise problems, and Unified Sampling will concentrate samples in the areas where it finds they are most needed. What does this mean? Well, with motion blur or DOF there may be areas that are extremely blurry. (image)

That loss of detail means fewer samples are needed. Looking at a diagnostic, you’ll see that very blurry areas may in fact receive very few samples. So the efficiency now extends to all types of problems and dimensions.

So now you understand how Unified Sampling will resolve a lot of problems more easily in your images using a simple, single control.

Using standard workflows you can generally begin with samples min 1.0 and samples max 100.0. These are scalar numbers because samples min < 1.0 will undersample the image: samples min 0.1, for example, will sample at minimum once every 10 pixels. Quality 1.0 to 1.5 is generally a decent range for higher-quality renders.
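For reference, here is how those starting values might be set from MEL. In Maya 2012-era mental ray, Unified Sampling was driven by string options on the miDefaultOptions node; the option names below follow the primer linked above, but the names, types, and array indices can vary per version, so treat them as assumptions:

// Assumes free slots at indices 0-3 of miDefaultOptions.stringOptions.
setAttr -type "string" miDefaultOptions.stringOptions[0].name  "unified sampling";
setAttr -type "string" miDefaultOptions.stringOptions[0].type  "boolean";
setAttr -type "string" miDefaultOptions.stringOptions[0].value "on";
setAttr -type "string" miDefaultOptions.stringOptions[1].name  "samples quality";
setAttr -type "string" miDefaultOptions.stringOptions[1].type  "scalar";
setAttr -type "string" miDefaultOptions.stringOptions[1].value "1.0";
setAttr -type "string" miDefaultOptions.stringOptions[2].name  "samples min";
setAttr -type "string" miDefaultOptions.stringOptions[2].type  "scalar";
setAttr -type "string" miDefaultOptions.stringOptions[2].value "1.0";
setAttr -type "string" miDefaultOptions.stringOptions[3].name  "samples max";
setAttr -type "string" miDefaultOptions.stringOptions[3].type  "scalar";
setAttr -type "string" miDefaultOptions.stringOptions[3].value "100.0";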

What about a non-standard workflow? Is there a way to take better advantage of Unified Sampling in how my scene is setup?

Yes! In fact, this may be the best way to use Unified Sampling for complex scenes:

Unified Sampling will “pick away” at your scene: samples are measured against one another and more are generated when necessary, one at a time. You can make your shader settings do the same thing.

Last Example Scene (image)

Note the glossiness: the foil on the walls, the leather, and the glossy floor. Usually for glossiness we would help the image sampler out by giving the shader additional rays to shoot for each sample called by an eye ray (from the camera). The same goes for area lights and other effects where local control can be done inside the shader. So imagine an eye ray striking an object and sending 64 rays for a glossy reflection. In a pixel with 16 samples you can expect up to 1024 reflection rays. These rays might strike yet another object and run shaders... 1024 times. If your ray depths are sufficiently high, you can expect a ray explosion.

Let’s take a look at another Diagnostic Buffer for Time per pixel in this image. It is labeled mr_diagnostic_buffer_time (image)

Where shaders force more work from the sampler, they take longer to generate, and this is multiplied by the number of samples taken inside that pixel. In the old implementation, where sample counts jumped by large amounts, your time per pixel could grow in leaps and bounds. Each value ‘S’ for a pixel in this buffer is render time in seconds.

What if we decided to let Unified Sampling control the sampling? As an overall control for the frame, Unified Sampling can be used in a more “brute force” way: lower the local samples on the shaders to 1. In this scenario you might strike a pixel 400 times! But in that case only 400 rays are sent. That’s less than the 1024 we might have seen before with just 16 samples! (This includes lights/shadows. For instance, I used 9 portal lights in a scene where I left their samples at ‘1’; the resulting frame was still under an hour at 720 HD.)

Crazy!
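In MEL, that change might look like the sketch below for an architectural material. “mia_material_x1” is a placeholder node name, and refl_gloss_samples / refr_gloss_samples are the mia_material glossy-sample parameters as best I recall them, so verify against your shader:

// Let Unified Sampling do the work: one local glossy sample per eye ray.
setAttr "mia_material_x1.refl_gloss_samples" 1;
// Same idea for glossy refraction, if the material uses it:
setAttr "mia_material_x1.refr_gloss_samples" 1;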

Here’s the result. (image)

Something here is wrong.

Originally we were shooting more samples per eye ray; in some cases this may have been overkill. But now our image doesn’t look so great despite being faster (3 minutes is pretty fast). Think about it: if my reflection ray count was 64, then a pixel with 5 samples could spawn 320 rays. My samples max of 100 is certainly lower than the 320 rays I had before (remember, I’m shooting 1 at a time now).

How do I fix this?

You can first increase your Quality: 2.0, 3.0, more, etc. Keep an eye on your image as well as your Samples diagnostic. We have found a Samples Quality of 6.0 to 10.0 works in most cases. (This has been greatly reduced in mental ray 3.10; look here: Unified Sampling in 3.10)

This is also where you will need to increase your samples max. Just like the scenario above where we might need 320+ rays, we need to raise the limit so Unified can make that decision.

But now you may notice something else: areas without much contrast might gain samples for no visible reason (look at the black areas). How do you fix that?

There is a rarely used control called Error Cutoff.

This control tells Unified Sampling to stop taking additional samples once the error calculation reaches a certain limit: anything beneath the cutoff is no longer considered for additional samples. You may recognize this type of control from iray, which has a similar Error Threshold.

This control is very sensitive, and I find that most tweaking happens in the hundredths. So I begin with 0.01. In this example, 0.03 was a good stopping point; that is triple 0.01, but overall still a tiny change in the control. Be careful when tuning this or you may erode Unified Sampling’s ability to sample areas that need it. In many cases it is an optional extra rather than a requirement, but it is important in difficult scenes.
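Continuing the hedged string-option sketch from earlier (with the same caveats about names, types, and indices), the cutoff used here might be set like so:

// Assumption: the error cutoff is exposed as a string option like the
// others; some versions may take a color triple, others a single scalar.
setAttr -type "string" miDefaultOptions.stringOptions[4].name  "samples error cutoff";
setAttr -type "string" miDefaultOptions.stringOptions[4].type  "color";
setAttr -type "string" miDefaultOptions.stringOptions[4].value "0.03 0.03 0.03";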

Will this benefit motion blur and depth of field?

Yes, a lot in most cases.

Now you might be sampling hundreds of times per pixel, so adding motion blur and/or depth of field becomes much less expensive. Unified Sampling jitters these samples in time and space for these effects automatically.

Why is it less expensive?

The samples you’re already taking also cover the temporal sampling of motion blur and the ray direction change (circle of confusion) for depth of field, so achieving these effects adds much less overhead: you’re already sending lots of rays, all while maintaining QMC efficiency. Where a sample strikes a shader in a blurry area of motion blur or DOF, it also generates just a single sample for each type of effect, lowering the cost of that sample on the edge of blurry detail.

So now you have an idea of how to use Unified Sampling based on visual examples. You should hopefully find that keeping your samples settings wide and using Quality will simplify your tuning and scene rendering as well as making it faster.

The image below uses Motion Blur and Depth of Field: Samples Quality 8.0 with Samples Max 600. Render time: 44 minutes at 1280 by 720.

Additional Notes:

  • Using Progressive “on” with the Unified controls may help you resolve your settings faster, but for now I find I need to increase my Quality more than with Progressive “off” when using ‘1’ local shader sample. (This issue has been reported.) For look dev, though, you can generate a single pass very quickly to check lighting, materials, etc. all at once, and your Progressive refinements will be freakishly fast: the above image would refine a pass every 9 seconds, so in about 18 seconds I could tell if something was wrong and change it.
  • Using ‘1’ local shader sample is more like a BSDF shader, where the scene is sampled in a single operation. Current shader models try to collect everything from their environment, so one sample at a time is possible but not as good as a true BSDF.
  • Combining features that sample your scene smartly will increase the speed of your render and result in higher-quality images. Unified Sampling is the basis that can be improved on through BSDF shaders, the builtin IBL, importons, and other modern techniques that work together, both present and future.
  • What about lighting? Take a look here for some ideas on area lights and Brute Force: Area Lights 101
  • Unified Sampling performance is logarithmic, like many brute-force techniques. This means increases in Quality result in smaller and smaller render-time increases as you get to higher numbers. Brute-force rendering tests have shown a gain in speed for similar quality of about 10-15%; we encourage more tests with this workflow. Others are testing this as well, including studio projects where motion blur is key.
  • Consider using your render output and mr_diagnostic_buffer_time to find areas in your image that might benefit from changes for a faster render (visually insignificant areas that take too long due to expensive effects, lots of shadow rays, etc.). I find the biggest offender for render time is shadow rays in most cases.

Brute force wide gloss reflection, 2 bounces (glossy) with the Grace Cathedral HDR. 11 minutes a frame at Quality 12.

Below: Brute Force Portal lights and Ambient Occlusion. The frosted windows are the most difficult to resolve. 57 minutes for this frame (15 of that is indirect illumination calculation). Model can be found here: ronenbekerman Render Challenge One  You can find recommendations on Area Lights for Brute Force here: Area Lights 101

Samples: actually still low, averaging less than 100 per pixel; many pixels are in the teens.

For further information, please visit the elementalray.wordpress.com blog.

Relation, A Look Into Maya’s Connections

In Maya, as in almost every other 3D application, there has always been a need for true, smooth, and quick connections between attributes. In Maya these needs are well met, thanks to the individual and collective minds behind the software rather than the brand (Autodesk) itself, and one can move smoothly among its related tools. In this article I’m going to explore some of the features Maya offers for making these connections and speeding up the process.

 

A. Introduction

B. Methods and Instruments

1. Expressions

2. Maya Embedded Language

3. Connection Editor

4. Hyper Shade

5. Constraints

6. Key Frames

C. Unwrap The Engine

 

Maya, to relate this to its history (of course I don’t mean the Maya civilization, which was invaded centuries ago), has three main categories in its connection workflow.

The first, to name it broadly, is linguistic: the scripting approach you have through Maya’s options. Two widely used tools are active within this single field: Expressions and MEL. Expressions are an offspring of MEL, not exactly the same but following its rules, while MEL itself stands for Maya Embedded Language. The second category belongs to the GUI; the Connection Editor and Hypershade are in this group. The third group includes Constraints and Set Driven Keyframes; simple keyframing also exists in this group, but I’m not going to discuss it separately, since it can be explained within Driven Keyframes. This third category also relates to the GUI, but it differs when viewed in terms of the tasks it performs.

Expressions are highly satisfying once you use them. According to Maya’s own Help documentation, Expressions are:

You can use an expression to animate any keyable, unlocked object attribute for any frame range. You can also use an expression to control per particle or per object attributes. Per particle attributes control each particle of a particle object individually. Per object attributes control all particles of a particle object collectively. ( User Guide > Scripting > MEL and Expressions > Animation expressions > )

 

This definition alone may not be adequate for a full understanding, so let me give you an example. Suppose you have a sphere called “ball” and a cube called “cube”, and you want the cube to move along the X axis whenever the ball moves along the Y axis. With the following line, added as an expression on “cube.tx”, you can achieve this effect.

cube.tx = ball.ty;

Expressions are also of high importance because, as you saw in the previous paragraph, you refer to the attribute by name, which means you have direct and immediate access to it without needing any specific command. The following definition tells you how MEL relates to Maya:

The entire Maya graphical user interface (GUI) is written and controlled using MEL, the Maya Embedded Language. The creation, editing, and deletion of all the graphical user interface elements is done using the MEL language. It then follows that you too can control the Maya interface by using MEL. In fact you can entirely replace the standard Maya interface by using your own MEL scripts. There is often a need for specialized customization of parts of Maya’s interface. For instance, you may want to develop a particular interface that allows animators to set keys without their having to learn the Channel Box or other Graph Editor. You can also hide or remove a lot of the Maya interface elements to reduce the complexity of the interface for certain users (Gould, 2).

 

It’s very easy to work with MEL, and it allows you to connect many attributes to many other attributes at once. To give you an instance, assume there are fifty objects and you need to connect their scale to fifty locators. How much time would be consumed using the regular process of connecting them one by one? Connecting fifty objects to fifty others, one hundred objects hungry for connections in all, is not a quickly done task. MEL, being handy and capable, decreases the time you need to spend on it, as the sketch below shows.
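A minimal sketch of that idea, assuming the objects are named object1 through object50 and the locators locator1 through locator50 (hypothetical names for illustration), with each locator driving its matching object:

// Connect each locator's scale to its matching object's scale.
for ($i = 1; $i <= 50; $i++) {
    string $loc = ("locator" + $i);
    string $obj = ("object" + $i);
    connectAttr ($loc + ".scale") ($obj + ".scale");
}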

In MEL, you need to call a command to work with an attribute, a step you do not need in Expressions. For instance, if you want to connect sphere.scaleZ to locator.translateY, this script is what you need:

//

connectAttr sphere.scaleZ locator.translateY;

//

However, if you want to read an attribute’s value, for whatever your purpose might be, you need:

//

getAttr sphere.scaleZ;

//

As I have already mentioned, MEL is the most reliable friend in Maya’s environment, one you can always trust. The fifty-objects example should remind you how fast it can be.

The Connection Editor is a window where you can connect attributes using a GUI. According to Maya’s documentation, the Connection Editor is:

The Connection Editor provides node network information in a side-by-side layout where you can view two connected nodes in a node network. This editor is useful for fine-tuning a shading network.  You can quickly and easily traverse from node to node and show a node’s outputs or inputs to facilitate connections, meaning you can make connections in either direction in a node network.

The Connection Editor is also useful for animation and the rigging process. It allows you to connect attributes easily, and its advantage is that you can select among the many different attributes you have. You can access this window under Window > General Editors > Connection Editor. But as the documentation has already mentioned, it’s great for shading networks in Maya.

Constraints are an animation tool that, most notably, help you connect objects’ position, rotation, and scale. There are other constraint options, such as the Pole Vector constraint, which is used in cooperation with an IK handle. I don’t want to go into detail about them, but rather give an overall explanation. For instance, the Point Constraint deals with positional attributes. If you constrain object B to object A, the movement of object A forces object B to move with it. Constraints have options for offset, and each target node has a weight attribute. If you enter the value 0.5 into the weight field, then for every unit object A moves, object B will move half a unit; change it to 0.25 and object B will move four times slower than object A. Constraints are inevitably used in every serious workflow; they are great. At the same time, point constraining object B to A with a weight of 0.25 can be expressed equivalently with an Expression:

objectB.translateX = objectA.translateX * .25;

Here we can see the interconnection of Maya’s various abilities, such as reproducing a constraint using an Expression.

The last part deals with keyframing, which connects an attribute to its change over time within the Time Line. Set Driven Key is another type of keyframing, but driven by another attribute rather than by time. For instance, you connect objectA.tx to objectB.ty and key it with both attributes set to 0. Then you set objectA.tx to 10 and objectB.ty to 50, and key it again. This makes objectB move 5 units for every unit objectA moves. Again, programming this with both MEL and Expressions is possible; a MEL sketch follows below.
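The same setup can be scripted with Maya’s setDrivenKeyframe command, using the example names from above:

// Key 1: driver objectA.tx = 0  ->  driven objectB.ty = 0
setDrivenKeyframe -currentDriver objectA.translateX -driverValue 0 -value 0 objectB.translateY;
// Key 2: driver objectA.tx = 10 -> driven objectB.ty = 50
setDrivenKeyframe -currentDriver objectA.translateX -driverValue 10 -value 50 objectB.translateY;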

In what I have written, the attempt was to uncover a very small part of Maya without resorting to clichéd explanations. Pictures are not used, since I aimed for a theoretical presentation of the content.

 

Maya’s Documentation

Gould, David. Complete Maya Programming: An Extensive Guide to MEL and C++

Mostafa Talebi

mostafatalebi@rocketmail.com