In this tutorial I walk through how to design and create charts in Maya. I use a script to accomplish the goal, and afterwards survey the rendering settings needed for a polished look. This is not a just-watch-the-result video but a true explanation video that gives you enough information to build the same chart yourself. I assume you know the Maya basics, but even if you don't, following along shouldn't be too difficult. One last point: the video aims for a commercial-friendly presentation quality.
I took these textures about a year and a half ago. They should be very useful for CG people to employ in their own work. I have not applied even an ounce of modification or enhancement to the images, in order to keep them completely raw and leave them to your own purposes. In other words, I have given up on any modification prior to posting.
Here you can download the brick textures, stored in a RAR file,
and here is a snapshot of the three textures.
I have rendered an animation of a chart made using my script MT Chart Design; you can watch it here if you want to see a result before trying it yourself.
A tutorial is ready to be uploaded.
Here is a snapshot.
Download MT Chart Designer Version 188.8.131.52(4shared.com)
MT Chart Design Version 184.108.40.206
or you can…
Download MT Chart Designer Version 220.127.116.11(creativecrash.com)
MT Chart Design Version 18.104.22.168
Animating and designing charts for various purposes is now possible with one click and a few simple settings. You can render them with high-quality shading to boost the beauty of your final production.
Column creates a column for each data point, and Cube does the same with a cube instead of a column. A curve type may be enabled in the future if I receive comments requesting it, which would assure me that updates are welcome.
Name: a name field that lets users assign a name to all of their created nodes.
Frame Range: since one of the principal uses of this script is to make the data grow over time, frame limits are provided for the user to choose.
Number Of Data Points: the number of statistical columns/cubes you need.
Up Limit: as the name suggests, this is the vertical limit of the chart.
Unit: how large a step to use between the numbers along the vertical range. Try to use numbers greater than 10, although any number is possible.
Offset: numbers are placed with markers next to the vertical bar; this value sets the space between the numbers and the vertical bar.
Data Points: enter a value for each respective data point. The value entered must be less than the Up Limit number, otherwise the process might malfunction. For each data point there is also a name field, used to create the appropriate label below it.
Under the Structure Tab there are several options:
Range Value Text Scale: Scale of the number next to the vertical bars.
Spacing: the space between data points; changing this option affects the whole chart. Use this option to shape the chart to your liking.
Base Final Offset: The space from the last data point to the end of base platform.
Range Bar Thickness: The thickness of the vertical bar.
Base Platform Thickness: The thickness for the base platform.
Data Point Scale: Scale for the columns and Cubes.
Data Text Scale: Scale of the texts below each individual data point, retrieved from UI.
Data Text Z Offset: The Z translation of each created Data Text.
Now I have completed my gravestone base mesh. I had a little dilemma when choosing what to model. A guy at CGTalk mentioned the incongruity of choosing a [Christian] gravestone to depict the slaughter of the Indians; I kept my original direction, however, since this irony adds a striking quality to the work while keeping its saddening nature.
After a while, I now have time to plan my new project. This project is environment-based and is titled “The Murdered Chief”; it will probably be an integration of Maya and Mudbox.
After The Green Redemption, I still want to keep my head in environments and further improve my texturing and environment-sculpting abilities. I learned so many things in the previous project that it counts as my most fertile one. But these are only technical explanations, while my deep sympathy for the Indians always remains; I will explain it in the appropriate sections.
I will keep updating the blog and have allocated a specific category to the project, to be updated whenever I have time. Next week I will travel to the north (of Iran), where I can shoot many photos of nature and natural elements to bring into the project.
This script is useful for everyone working in Maya, since you can use it for anything from clusters and deformers to plant and grass distribution, and sometimes even for emitter population.
-You must first select your curve and then your node/object.
-There is an “Instance” check box; if checked, the distributed objects are only instances of the original object. A great time saver.
-In the UI, select “Get Selection” to insert your selection into the relevant fields.
-There are randomization options you can use to make your distribution look much more natural.
-At the bottom there is a Twist control that gives you control over the rotational properties.
Twist has two methods:
Increment Based (recommended): uses a value (from the slider in the UI) that is added to each successive instance, up to the last one.
360Based: uses the number of distributed nodes; for instance, if you have ten objects distributed, it divides 360 by ten and uses the result, 36, as the increment each time. This method is recommended for high-quantity distributions.
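The twist arithmetic is simple enough to sketch in a few lines of Python. This is only an illustration of the two methods described above; the function name and defaults are my own, not part of the script:

```python
def twist_angles(count, method="increment", step=15.0):
    """Return a rotation angle for each distributed instance.

    "increment": add a fixed step (the UI slider value) per instance.
    "360":       divide 360 by the instance count and use that as the step,
                 so the rotation is spread evenly over a full turn.
    """
    if method == "360":
        step = 360.0 / count
    return [i * step for i in range(count)]

# Ten instances with the 360-based method: 360/10 = 36 degrees between each.
print(twist_angles(10, method="360"))
```

Note that with the 360-based method the step shrinks as the count grows, which is why it suits high-quantity distributions.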
You can Download it from here:
After three weeks of direct and indirect work, I eventually finished the project.
I used Autodesk Maya, Mental Ray, Autodesk Mudbox, Adobe Photoshop, Adobe After Effects
I used Maya PaintFX to populate plants throughout the scene and integrated them into Mental Ray. If enough people ask for it, a making-of will be available soon…
Unified Sampling – Visually for the Artist
Posted by David
Unified Sampling Visually Explained and Practices
As a primer for using Unified Sampling, look here: Unified Sampling
Unified Sampling is QMC (Quasi-Monte Carlo) image synthesis.
Basically: taking samples based on a QMC pattern and decision making process.
How can this benefit your work and how can you take advantage of it?
Comparing stochastic (random) and QMC sampling patterns you can see benefit in how QMC avoids clumping and spreads samples out across the image to catch details. (image) One can also control how these samples are scattered inside the algorithm (stratification).
The rendering equation and problems with a complex scene introduce a multi-dimensional problem that Unified Sampling helps resolve through a single control called “Quality”. This process is not only in the space dimension (raster space) but in time (motion blur).
So how do you use Quality? What changes occur when you increase Quality?
Quality increases will cause the sampler to concentrate more samples where it perceives the most error. So here I will introduce you to the Diagnostic Framebuffers. For this first explanation we will pay attention to two layers in the Diagnostic: Error and Samples. You can invoke this feature the usual way: check the “Diagnostic” box in the Render Settings. (image) Except now mental ray will generate a separate EXR file in the following directory on Windows: [User Directory]\projects\project_name\renderData\mentalray\diagnostic.exr
Open the .exr in imf_disp.exe. Under the Layer menu, select the mr_diagnostic_buffer_error layer. (Alt-2)
Several things to note:
- Error is per channel (RGB)
- More error is a higher pixel value
- Mousing over a pixel will provide you with the error value for the pixel (bottom left of the imf_disp window shows values)
You will notice the perceived error centers around noise and contrast changes as well as areas where geometry might meet in raster space (on the screen).
Now what would happen if you increased your Quality? (image)
A further increase? (image)
Notice that the areas with the most perceived (or calculated) error are eroded first. This makes sense, you want to resolve those areas without wasting time on areas that are relatively noiseless. It also gets progressively darker as error values decrease.
Now look at the mr_diagnostic_buffer_samples layer. (Alt-3)
This is an EXR file, and the sample values are integers (whole numbers). If your samples min is 1.0, then your values will begin at 1.0 (white) for the buffer value ‘S’, which is the sample count. You can lower the exposure in the top right-hand corner. (image)
I find that -6.0 is a good place to start. Now you should be able to see some sort of grayscale representation of your image. Mouse over these pixels for a value of ‘S’. You can drag the mouse pointer and hold “alt” (Windows) to change zooming levels on the fly in imf_disp. (The versions on the blog are .png or .jpeg for obvious reasons. These values don’t exist in these web examples.)
Notice these things:
- Your samples should increase around areas where the error buffer eroded the most error in the frame.
- With a max samples of 100 you might not have any pixel with a sample rate of 100 if your Quality did not dictate it. (Quality might not have found it necessary at the current level)
- Your sample values are linear. Unlike other implementations of QMC sampling, they are not exponential (4, 16, 64, 256). This means more efficiency: instead of jumping from 16 samples to 64, maybe the pixel only needs 23 samples. You avoid over-sampling in large jumps.
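A toy comparison makes the efficiency point concrete. The function names here are illustrative only, not part of mental ray:

```python
def samples_taken_exponential(needed, levels=(1, 4, 16, 64, 256)):
    # Old-style QMC sampling: forced to jump to the next power-of-4 level.
    return next(s for s in levels if s >= needed)

def samples_taken_linear(needed):
    # Unified Sampling: one sample at a time, stopping exactly when resolved.
    return needed

# A pixel that actually needs 23 samples: the exponential scheme overshoots
# to 64, while the linear scheme takes exactly 23.
print(samples_taken_exponential(23), samples_taken_linear(23))
```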
What does this mean for tuning a scene?
This means that all you really need to tune an image is the Quality control. With a wide sample range you can push Quality around without sacrificing efficiency.
This has an added bonus: since your sampling is not a rigid function of samples, you can be assured that frame to frame changes in an animation will have a consistent level of quality to them. Even if a new complex object enters the frame, Unified Sampling will compensate accordingly without you needing to change sample rates.
You now have a consistent level of image quality for shading and anti-aliasing. Once you have chosen your desired Quality you can render unattended and return to correct results. (Less tweaking for quality issues and more time sleeping, my favorite hobby.)
So why do I need Samples at all?
Great question, and honestly there may come a day you won’t see a samples control at all.
Samples gives you the opportunity to fine tune a particularly difficult scene or object. You can indeed control individual objects with a sample override. Keep in mind that these values are now literal and linear in fashion, not an exponential power function like before. These overrides can also be outside the sample limits of your scene settings for extra flexibility. (image)
For scenes with a complex or noisy effect, this can give you some added control.
How will this help me with motion blur or depth of field (DOF)?
Motion Blur and DOF are really just noise problems. Unified Sampling will sample these areas where it finds it needs the most samples. What does this mean? Well, in motion blur or DOF there may be areas that are extremely blurry. (image)
This means that a loss of detail would result in needing fewer samples. Looking at a diagnostic you’ll see that very blurry areas may in fact receive very few samples. So the efficiency now extends to all types of problems and dimensions.
So now you understand how Unified Sampling will resolve a lot of problems more easily in your images using a simple, single control.
Using standard workflows you can generally begin with samples min 1.0 and samples max 100.0. These are scalar numbers because a samples min below 1.0 will undersample the image: samples min 0.1, for example, samples at minimum once every 10 pixels. Quality 1.0 to 1.5 is generally a decent range for higher-quality renders.
What about a non-standard workflow? Is there a way to take better advantage of Unified Sampling in how my scene is setup?
Yes! In fact, this may be the best way to use Unified Sampling for complex scenes:
Unified Sampling will “pick away” at your scene. Samples are measured against one another and more are generated when necessary, one at a time. You can make your shader settings do the same thing.
Last Example Scene (image)
Note the glossiness. The foil on the walls, leather, and glossy floor. Usually for glossiness we would help the image sampler out by giving the shader additional rays to shoot for a sample called by an eye ray (from the camera). This is also the same for area lights and other effects where local control can be done inside the shader. So imagine an eye ray striking an object and sending 64 rays for a glossy reflection. In a pixel with 16 samples you can expect up to 1024 reflection rays. These rays might strike yet another object and run shaders. . .1024 times. If your ray depths are sufficiently high, you can expect a ray explosion.
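The arithmetic behind that ray explosion is worth spelling out, using the numbers from the example above:

```python
def first_bounce_rays(eye_samples, shader_rays):
    # Rays spawned at the first bounce: every eye sample hitting the surface
    # triggers the shader, which fires its own batch of glossy rays.
    return eye_samples * shader_rays

# Local shader sampling: 16 eye samples per pixel, 64 glossy rays each.
print(first_bounce_rays(16, 64))   # 1024 reflection rays from one pixel

# Brute force: let Unified take 400 eye samples, 1 glossy ray per shader call.
print(first_bounce_rays(400, 1))   # 400 rays, and each deeper bounce
                                   # multiplies by 1 instead of 64
```

This is why lowering local shader samples to 1 tames the explosion: at every additional bounce the old scheme multiplies the ray count again, while the brute-force scheme does not.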
Let’s take a look at another Diagnostic Buffer for Time per pixel in this image. It is labeled mr_diagnostic_buffer_time (image)
Where shaders force more work from the sampler, they take longer to generate, and this is multiplied by the number of samples taken inside the pixel. In the old scheme, where sample counts jumped by large amounts, your time per pixel could grow in leaps and bounds. Each value ‘S’ for a pixel is the render time in seconds.
What if we let Unified Sampling control the sampling? As an overall, per-frame control, Unified Sampling can work in a more “brute force” way: lower the local samples on the shaders to 1. In this scenario a pixel might be struck maybe 400 times, but then only 400 rays are sent. That’s less than the 1024 we might have seen before with just 16 samples! (This includes lights/shadows. For instance, I used 9 portal lights in a scene where I left their samples at ‘1’, and the resulting frame was still under an hour at 720 HD.)
Here’s the result. (image)
Something here is wrong.
Originally we were shooting more samples per eye ray. In some cases this may have been overkill, but now our image doesn’t look so great despite being faster (3 minutes is pretty fast). Think about it: if my reflection ray count was 64, then a pixel with 5 samples could spawn 320 rays. My samples max of 100 is certainly lower than the 320 rays I had before (remember, I’m now shooting one ray at a time).
How do I fix this?
You can first increase your quality. 2.0, 3.0, more, etc. Keep an eye on your image as well as your Samples Diagnostic. We have found Samples Quality of 6.0 to 10.0 works in most cases. (This has been greatly reduced in mental ray 3.10, look here: Unified Sampling in 3.10 )
This is also where you will need to increase your samples max. Just like the scenario above where we might need 320+ rays, we need to raise the limit so Unified can make that decision.
But now you may notice something else. Areas without much contrast might gain samples for no visible reason. (Look at the black areas.) How do you fix that?
There is a rarely used control called Error Cutoff.
This control tells Unified Sampling to stop taking additional samples once the error calculation falls below a certain limit; anything beneath it is no longer considered for additional samples. You may recognize this type of control from iray, where it is called Error Threshold.
This control is very sensitive and I find that most tweaking happens in the hundredths of a measurement. So I begin with 0.01. In this example 0.03 was a good stopping point. 0.03 is triple the amount of 0.01 but overall just a tiny change in the control. So be careful when tuning this or you may erode Unified Sampling’s ability to sample areas that need it. In many cases it is an additional control and not a requirement, but its inclusion is important in difficult scenes.
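To make the interaction of Quality, samples max, and Error Cutoff concrete, here is a toy Python model of the per-pixel decision loop. This is a sketch of the idea only, not mental ray's actual algorithm; in particular, the falloff parameter is an invented stand-in for how each extra sample reduces the estimated error:

```python
def samples_for_pixel(initial_error, quality, samples_min, samples_max,
                      error_cutoff=0.0, falloff=0.9):
    """Toy model of Unified Sampling's per-pixel loop (illustration only).

    Keep taking samples while the estimated error, weighted by Quality,
    says the pixel is unresolved; stop early once the error drops below
    error_cutoff, and never exceed samples_max.
    """
    error = initial_error
    samples = samples_min
    while samples < samples_max:
        if error <= error_cutoff:    # Error Cutoff: pixel no longer considered
            break
        if error * quality <= 1.0:   # Quality target reached for this pixel
            break
        error *= falloff             # each sample refines the error estimate
        samples += 1                 # linear: one sample at a time
    return samples

# A noisy pixel takes more samples than a flat one at the same Quality:
print(samples_for_pixel(5.0, quality=1.5, samples_min=1, samples_max=100))
print(samples_for_pixel(0.2, quality=1.5, samples_min=1, samples_max=100))
```

Raising error_cutoff in this model makes noisy-but-dark areas give up sooner, which mirrors why the real control must be tuned in hundredths: a small change moves the stopping point a lot.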
Will this benefit motion blur and depth of field?
Yes, a lot in most cases.
Now you might be sampling hundreds of times per pixel. Add motion blur and/or depth of field and the effect is much less expensive now. Unified Sampling jitters these samples in time and space for these effects automatically.
Why is it less expensive?
The extra samples you’re already taking will take into account the temporal sampling of motion blur and the ray direction change (circle of confusion again) for depth of field. So achieving these effects is much less overhead here. You’re already sending lots of rays. All while maintaining QMC efficiency. Areas of blur in motion blur or DOF where a sample strikes a shader will also generate a single sample for each type of effect, lowering the cost of that sample on the edge of blurry detail.
So now you have an idea of how to use Unified Sampling based on visual examples. You should hopefully find that keeping your samples settings wide and using Quality will simplify your tuning and scene rendering as well as making it faster.
The image below uses Motion Blur and Depth of Field: Samples Quality 8.0 with Samples Max 600. Render time: 44 minutes at 1280 by 720.
- Using Progressive “on” with the Unified controls may help you resolve your settings faster, but for now I find that I need to increase Quality more than with Progressive “off” when using ‘1’ local shader sample. For look dev, though, you can generate a single pass very quickly to check lighting, materials, etc. all at once. (This behavior has been reported.) In the meantime your Progressive refinements will be freakishly fast: the image above refined a pass every 9 seconds, so in about 18 seconds I could tell if something was wrong and change it.
- Using ‘1’ local shader sample is more like a BSDF shader, which samples the scene in a single operation. Current shader models try to collect everything from their environment, so one sample at a time is possible but not as good as a true BSDF.
- Combining features that are sampling your scene smartly will increase the speed of your render and result in higher quality renders. Unified Sampling is the basis that can be improved through BSDF shaders, Builtin-IBL, Importons, and other modern techniques that work together both present and future.
- What about lighting? Take a look here for some ideas on area lights and Brute Force: Area Lights 101
- Unified Sampling performance is logarithmic, like many brute-force techniques: increases in Quality result in smaller and smaller render-time increases as you reach higher numbers. Brute-force rendering tests have shown a speed gain for similar quality of about 10-15%; we encourage more tests with this workflow. Others are testing it too, including studio projects where motion blur is key.
- Consider using your render output and mr_diagnostic_buffer_time to see areas in your image that might benefit from changes to get a faster render. (Visually insignificant areas that take too long due to expensive effects, lots of shadow rays, etc.) I find the biggest offender for render time are shadow rays in most cases.
Brute force wide gloss reflection, 2 bounces (glossy) with the Grace Cathedral HDR. 11 minutes a frame at Quality 12.
Below: Brute Force Portal lights and Ambient Occlusion. The frosted windows are the most difficult to resolve. 57 minutes for this frame (15 of that is indirect illumination calculation). Model can be found here: ronenbekerman Render Challenge One You can find recommendations on Area Lights for Brute Force here: Area Lights 101
Samples: actually still low, averaging less than 100 a pixel, many are in the teens.
For further information on related subjects, please visit
(originally posted by http://www.elementalray.wordpress.com)
You can find the presentation here: nVidia Advanced Rendering and Ray tracing
You can also find a short blurb about coming features here: ARC Forum SIGGRAPH recap
Mentioned are features such as:
- Ambient Occlusion on the GPU with mental ray
- Layered shaders (layered components) that include the SSS shader without lightmapping. They have their own framebuffer mechanism with passes for added flexibility in OEM packages.
- Object lights with a new shader independent of OEM integrations
- Light Importance Sampling (IS) and Material Importance Sampling, which make using objects as area lights much easier and faster than before
- Multiple Motion Transforms (think curved paths and light streaks)
- Light Path Expressions (LPE) in iray
- Kepler support for iray 3 in mental ray
Test the fantastic new features in CINEMA 4D R14 for yourself.
To mark the start of SIGGRAPH 2012 in L.A., MAXON is announcing the immediate availability of the CINEMA 4D R14 demo version. Test incredible new features such as the completely integrated sculpting system, the camera matching functionality and new connectivity to third-party applications. Modeling, animation, rendering and workflow have also been made even better, which lets everyone from motion graphics designers to VFX artists, visualization professionals and more achieve dazzling results even faster and easier than before.
The CINEMA 4D R14 demo version can be activated with complete functionality* for 42 days, including the save function.
Download your CINEMA 4D R14 demo version today!*Limitations of the demo version:
- All save functionality is disabled (except when activated)
- Maximum rendering resolution of 640 x 400 pixels
- No network rendering
- Limited number of assets in demo libraries (libraries contain hundreds of objects in the full commercial version)
- The demo version is not upgradeable to a full commercial version
- The demo version is intended for evaluation purposes only. Any commercial use of the demo version, e.g. for training purposes, is not allowed.
Get software upgrades and up-to-date, specialized functionality as it is developed. You benefit from new capabilities, convenient implementation without disruption to ongoing projects, shorter learning time, and lower training costs. Don’t wait for annual upgrades—gain access to new time-saving features and creative functionality with Autodesk® Subscription.
Software extensions for Autodesk® Maya® 2013 and Autodesk® 3ds Max® 2013 products offer access to new and enhanced features that can help increase productivity and performance. Features vary by product, and can include:
- Enhanced creative toolsets for sophisticated particle simulation
- Next generation viewport display and shading
- Scene assembly tools for better complexity handling
Extensions for select 2013 Autodesk 3D software products are now available for download by Subscription customers. Take advantage of the latest tools and features designed to tackle the challenges you face today and can expect tomorrow.
Learn About 3D Software Extensions
Find your product below to learn more about Maya extensions and 3ds Max extensions.
|Autodesk® 3ds Max® 2013
|Autodesk® 3ds Max® Design 2013
|Autodesk® Maya® 2013
With the Ivy Generator software you can generate ivy leaves and branches around your model and then bring them right into your 3D application. I have tested it, and it works quite well for architectural presenters.
You can download it from the link below:
With this script you first select your texture and then the material (no matter whether they are already connected or not), then execute the script to add a gamma-correct node between your texture and material. No matter how elaborate your shader is, the script is safe: it adds the gamma node without any damage to your existing connections.
Use at your own risk 😀
It is very useful for projects where you have many networks already built and realize you need a gamma node inserted into the existing connections; adding a node by hand between many already-made connections would be too tedious.
Note: first select the texture, then the material, and ignore any connection already made to the color/diffuse channel. All Maya materials, as well as mia_material, are supported.
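For readers curious what “adding a gamma node between texture and material” means in graph terms, here is a small plain-Python model of the rewiring. This is a conceptual sketch only, not Maya API code, and the node and attribute names (gammaCorrect1.value and so on) are illustrative; the real script performs the equivalent with Maya's connection commands:

```python
def insert_gamma_node(connections, texture_attr, material_attr,
                      gamma_node="gammaCorrect1"):
    """Re-route texture -> material through a gamma node.

    connections: dict mapping a source attribute to its destination attribute.
    If the texture already feeds the material directly, that connection is
    broken first; either way the gamma node ends up wired in between.
    """
    # Break the existing direct connection, if any.
    if connections.get(texture_attr) == material_attr:
        del connections[texture_attr]
    # Wire texture -> gamma -> material.
    connections[texture_attr] = gamma_node + ".value"
    connections[gamma_node + ".outValue"] = material_attr
    return connections

net = {"file1.outColor": "blinn1.color"}
insert_gamma_node(net, "file1.outColor", "blinn1.color")
print(net)
```

The key property the script advertises is visible here: whatever was connected before still reaches the material, just with the gamma correction inserted along the way.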
(It is a personal folder on 4shared.com, so rest assured there is no virus; still, if you consider that a risk, download it from the creativecrash.com website instead.)