General Arnold rendering issues:
How can I do glossy reflections using the Arnold «Standard» shader?
In the specular section, increase the «Scale» parameter. The «Roughness» parameter controls how blurry your reflections are: the lower the value, the sharper the reflection. In the limit, a value of 0 will give you a perfectly sharp mirror reflection. You also have separate controls to adjust the intensity of the direct and indirect specular reflections. Direct specular reflections come from regular light sources (spot/point/area/distant lights), and indirect specular reflections come from other objects or an environment map.
Fireflies, why do they appear and how can they be avoided?
Certain scenes/configurations suffer from a form of sampling noise commonly referred to as «spike noise», or «fireflies»: isolated, super bright pixels that jump around from frame to frame in an animation. This is especially noticeable in recursive glossy reflections involving both diffuse and sharp glossy surfaces. This noise is very difficult to remove by just increasing the number of samples in the renderer. There are several ways to fix the noise.
• Make the objects with the sharp glossy surfaces invisible to glossy rays. This can be done by disabling the glossy flag in the object’s visibility parameters (see the sketch after this list).
• Use a ray_switch shader on those objects, with an appropriately modified shader in the «glossy» slot — for example, a shader returning black, or perhaps a shader with a higher specular_roughness value.
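As an illustration of the first option, here is a minimal sketch using the C API (it assumes a hypothetical mesh named «mirror_mesh» and that the visibility parameter is a byte mask, as in Arnold 4; in a DCC plugin you would simply untick the glossy visibility flag instead):

#include <ai.h>

void hide_from_glossy_rays()
{
   // clear only the glossy bit of the visibility mask, leaving camera,
   // shadow, reflection, refraction and diffuse visibility untouched
   AtNode *mesh = AiNodeLookUpByName("mirror_mesh");
   if (mesh)
      AiNodeSetByte(mesh, "visibility", (AtByte)(AI_RAY_ALL & ~AI_RAY_GLOSSY));
}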
Are caustics possible?
Given that Arnold uses uni-directional path tracing, «soft» caustics originating at glossy surfaces are perfectly possible, as are caustics coming from big sources of indirect light. The caustics switches in the standard shader let you tell the diffuse GI rays to «see» the mirror reflection, glossy reflection and refraction from the shader of the surfaces they hit. By default, only the direct and indirect diffuse components are seen by GI rays. On the other hand, «hard» caustics emanating from spatially small but bright direct light sources, for example caustics from a spotlight through a glass of cognac, are currently not possible. One possible workaround to render caustics would be to light the scene with light-emitting geometry where you set the emission values really high (20-100) and play with the size of the emitter. However, you would have to use really high sample settings to avoid grain. Other renderers more or less easily achieve hard caustics with the photon mapping technique. At Solid Angle we dislike photon mapping because it’s a biased, memory-hungry technique that is prone to artifacts, blurring and obscure settings, doesn’t scale well with complex scenes and doesn’t work well with interactivity/IPR.
Bump mapping is not working when connecting images to a Bump3D node
Bump3D works by evaluating the bump shader (an image in this case) at several slightly offset positions around the shading point P. Since only the position is offset, the UV coordinates are the same in every lookup, so they return the same texel in the image and result in no perturbation of the normal. You should use Bump2D for images.
How to get rid of noise
Computationally, the efficient way to get rid of noise is to go from the bottom up (from the particular to the general). We would increase the sampling of individual lights first, then the GI and glossy samples, and finally the AA samples, which act as a global multiplier of the whole sampling process. However, the only way to improve the quality and reduce the noise of motion blur and depth of field is to increase the AA samples. In this case, the AA increase allows you to decrease the other sampling rates (GI/glossy/light) to compensate. To sum up, you will get almost the same render time with AA=1 and GI=9 as with AA=9 and GI=1, but at AA=9 you will have much better motion blur and depth of field quality.
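To see why, recall that both the AA and GI settings are per-dimension counts that get squared: with AA=1 and GI_diffuse_samples=9, each pixel traces roughly 1^2 = 1 camera ray, which spawns 9^2 = 81 diffuse rays at its first hit; with AA=9 and GI_diffuse_samples=1, each pixel traces 9^2 = 81 camera rays, each spawning 1^2 = 1 diffuse ray. Either way roughly 81 diffuse rays are traced per pixel, but in the second case the 81 camera rays also antialias motion blur and depth of field.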
How to reduce flickering with Pointcloud SSS
Regarding SSS noise and flickering, there are two ways to reduce it:
- Increasing the pointcloud density of a polymesh with sss_sample_spacing.
- Leaving the number of points the same, but increasing the quality of the irradiance evaluation at each point.
This is highly scene dependent but, generally speaking, you want to use as few points as possible and instead increase their quality with sss_sample_factor. The sampling rates used to calculate SSS irradiance are multiplied by sss_sample_factor. So while AA=8 and GI=2 give you excellent diffuse sampling in a lambert shader, a GI=2 by itself gives you very noisy SSS. In this case, you need sss_sample_factor=8 to get comparable results in SSS shading. This ensures that the SSS points «see» a clean solution and that you are not compensating for the SSS noise by increasing the number of points as suggested in the first option above, which is much more expensive.
In addition, you can select the pointcloud distribution. The default is blue_noise, a random-looking distribution which is very high quality but can flicker if the mesh is deforming. If your object has been exported with Pref coordinates, then you can use the blue_noise_Pref pointcloud distribution, which is «locked» to a reference pose and therefore does not flicker even if the mesh is deforming. The complete list of available pointcloud distributions is:
$ kick -info polymesh.sss_sample_distribution
node: polymesh
param: sss_sample_distribution
type: ENUM
default: blue_noise
enum values: blue_noise blue_noise_Pref triangle_midpoint polygon_midpoint
What does `min_pixel_width` do for curves?
min_pixel_width is a threshold parameter that limits how thin curve shapes can become with respect to the pixel size. If you set min_pixel_width to a value of 1 pixel, then no matter how thin the curves are or how far they are from the camera, they will be «thickened» enough so that they appear 1 pixel wide on screen. Wider strands are easier to sample, so they tend to show far fewer aliasing artifacts.
The problem with simply thickening curves like this is that they start to take on a wiry or straw-like appearance, because they are much wider and block much more of the background than they should. A well-established method of giving a softer look to thick strands is to map their opacity along their length; the internal user data «geo_opacity» is an automatic way to do this.
The value of the «geo_opacity» user data for strands that are already thicker on screen than the min_pixel_width threshold will always be 1.0, but strands that had to be thickened to meet the min_pixel_width threshold will get lower geo_opacity values in proportion to the amount of thickening. If a shader reads this value and uses it to scale its out_opacity result, then the thickening of the curves will be properly compensated and thin curves will retain their soft appearance. The result is not exactly the same as the curves without thickening, but on average the appearance is very similar and can be much easier for the raytracer to sample.
In practice a min_pixel_width setting of 1.0 is probably too high, making the strands look too soft and the difference between using and not using the technique quite noticeable. You will usually get better results with values in the 0.1-0.5 pixel range. Also, the higher the min_pixel_width, the slower the render, because the renderer will make the hairs more transparent to compensate for their increased thickness. For example, a value of 0.25 means that the hair geometry will never be thinner than 1/4 of a pixel, so you can get good antialiasing with AA=4 samples. A value of 0.125 (or 1/8) will need at least AA=8 to get good antialiasing, and so on.
What does autobump do for polymeshes?
When autobump is enabled, Arnold makes a copy of all of the vertices of a mesh prior to displacement (let’s call that the «reference» mesh, or Pref). Prior to shading at some surface point on the displaced surface P, the equivalent Pref for that point is found on the non-displaced surface and the displacement shader is evaluated there (at Pref) to estimate what would be the equivalent normal at P if we had subdivided the polymesh at an insanely high tessellation rate.
The main difference between Arnold’s autobump and using the displacement shader for bumpmapping (say with the bump2d node) is that autobump has access to Pref whereas bump2d does not and would be executing the displacement shader on already-displaced points which could «compound» the displacement amounts (if that makes any sense).
The only extra storage is for copying P prior to displacement. There is no analysis of the displacement map; Arnold displaces vertices purely based on where they «land» in the displacement map (or procedural), regardless of whether they happen to «hit» a high-frequency spike or not.
Autobump doesn’t work
The autobump algorithm needs UV coordinates to compute surface tangents. Make sure your polymesh has a UV set applied.
How is transparency handled?
Arnold has two different ways of calculating transparency: refraction and opacity. They are different ray types and thus have different controls in the Ai Standard shader as well as in the render options. You must disable ‘opaque’ for the mesh that has been assigned the Ai Standard material.
How do I work in «Linear» colorspace with Arnold?
Set texture_gamma, light_gamma and shader_gamma to the value of the gamma you want to correct (usually 2.2).
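A minimal sketch of doing this through the C API (an .ass options block with the same three parameters works just as well; 2.2 is the usual sRGB approximation assumed here):

AtNode *options = AiUniverseGetOptions();
AiNodeSetFlt(options, "texture_gamma", 2.2f);
AiNodeSetFlt(options, "light_gamma",   2.2f);
AiNodeSetFlt(options, "shader_gamma",  2.2f);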
What are .tx files?
.tx files are just tiled+mipmapped TIFF files.
The ‘maketx’ utility, part of OpenImageIO (or the ‘txmake’ utility shipped with RenderMan), can convert any image file into a TX file. It gets slightly more confusing because it is also common to rename .exr files to .tx, since OpenEXR also supports tiling and mipmapping (which Arnold supports).
The standard libtiff library is all that’s necessary to read tiled+mipmapped TIFF, through the use of the appropriate TIFF tags and library calls.
What makes the difference, and the main reason to introduce this step into the pipeline, is that tiled, mipmapped images are much more efficient and cache-friendly for the OIIO image library.
How are the settings for textures related? Why .tx files?
The texture cache — recommended settings.
• Recommended settings for maketx and which ones are important.
• Windows batch scripts: If you copy one of these scripts to a .bat file (a simple text file with .bat as file extension) in the same folder as maketx.exe and drag a link to it to your desktop, you can throw multiple texture images on it at once and they will get converted in one go. If you don’t want the verbose output, remove the «-v», but it may help you understand what maketx does. If you don’t want the window to stay visible at the end, remove the «pause».
Default settings (mipmap and resizing):
@for %%i in (%*) do maketx.exe -v %%i
pause
No mipmapping, but automatic resizing:
@for %%i in (%*) do maketx.exe -v --nomipmap %%i
pause
No resizing and no mipmapping:
@for %%i in (%*) do maketx.exe -v --nomipmap --noresize %%i
pause
How do I adjust the falloff of an Arnold light?
You must attach a «filter» to the light (unless you just want to turn the decay off completely). There are several filters for different needs (e.g. gobo, barndoors, etc.). To get normal decay behavior you don’t need to do anything, since the light node defaults to real-world inverse-square (i.e. quadratic) decay. If you want further control, use the light decay filter. Note that the type of decay itself (real-world quadratic falloff, or no falloff) is specified on the light node itself; the light decay filter provides further control via attenuation ranges (altering the range over which the decay occurs).
Why does the standard shader compute reflections twice?
The standard shader can compute both sharp and glossy reflections.
Sharp reflections are controlled by the Kr and Kr_color parameters. This type of reflection uses one ray per shader evaluation, with a depth limited by the GI_reflection_depth global option.
Glossy reflections are controlled by the Ks, Ks_color, specular_brdf, specular_roughness and specular_anisotropy parameters. This type of reflection uses GI_glossy_samples to determine how many rays to trace per shader evaluation, and the depth is limited by the GI_glossy_depth global option. Note that using a specular_roughness of 0 will also give you a sharp reflection, but doing this is slower than using the pure mirror reflection code.
These two types of reflection can coexist and have independent Fresnel controls. They are simply added together with the other components of the standard shader, without taking into account any energy conservation. We will probably end up unifying both types of reflections in the future.
What is the re-parameterization of specular_roughness in the standard shader, and why is it non-linear?
Following testing and feedback from artists, it was found that they didn’t like the linear mapping of the radius of the specular highlight. They instead preferred a slightly curved mapping, with an equation of the form 1/r^3. This 1/r^3 mapping proved difficult for the Cook-Torrance and Ward-Duer BRDFs (requiring expensive powf() calls), and caused specular_roughness to lose some of its «physical» sense of being proportional to the specular highlight’s radius. We therefore opted to square the roughness parameter. This gives results similar to 1/r^3 while maintaining some of the «physical» sense of the parameter (instead of doubling the radius each time you double the roughness, you end up quadrupling the radius).
Can the diffuse_cache in the hair shader produce artifacts?
The caching of illumination happens right at the hair control points, so yes, you may get artifacts in certain situations:
- If the first control point of the hair is just below the scalp, the point will be occluded from all lights and environment illumination, and this occlusion will «leak» into the second control point which is above the scalp. So you’d get some shadowing in the roots.
- If the number of control points is animated, the illumination can flicker.
- The cached illumination is not motion blurred.
In any case, real hair does not have a diffuse component, this is just a hack to get some sort of bouncing in the hair. The cache also uses memory and may not reach 100% thread scalability (as multiple threads may need to write into the cache at the same time). The plan is to deprecate the hair diffuse cache, as soon as we have a more physically based model for hair with multiple scattering etc.
How do you capture a 360-degree snapshot from a scene in Latitude-Longitude panoramic format?
Lat/long maps can be rendered with Arnold’s cylindrical camera. There is an example in the SItoA trac#638. For more information, from the command-line, type kick -info cyl_camera. Make sure that the horizontal fov is set to 360, the vertical fov is set to 180, and the camera aspect ratio is set to 1.
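A minimal sketch of setting this up through the C API (parameter names are taken from the text above; run kick -info cyl_camera to confirm them in your build):

AtNode *cam = AiNode("cyl_camera");
AiNodeSetStr(cam, "name", "latlong_cam");
AiNodeSetFlt(cam, "horizontal_fov", 360.0f);
AiNodeSetFlt(cam, "vertical_fov", 180.0f);
// make this the rendering camera; also render to a 2:1 resolution
// (e.g. 2048x1024) so pixels cover equal angles in both directions
AiNodeSetPtr(AiUniverseGetOptions(), "camera", cam);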
How does bucket size affect performance?
To simplify filtering, Arnold adds a bit of «edge slack» to each bucket in each dimension. The amount of «edge slack» is exactly 2 * pixel filter width (unless you are using a box-1 filter, which has no slack at all). If the user sets the bucket size to 32×32 with a filter width of 2, the buckets are internally extended to 36×36. This is so that each bucket has enough samples in the edges to perform pixel filtering independently of each other bucket, as inter-bucket communication would greatly complicate multithreading.
Here is an example showing the number of camera rays as the filter width is increased:
- 1024×778, AA samples = 1, filter width = 1, traces a total of 796672 camera rays
- 1024×778, AA samples = 1, filter width = 2, traces a total of 900864 camera rays
The corollary is that you should not use buckets that are too small, as the percentage of «redundant» pixels grows as the bucket size shrinks. The default 64×64 is a good base setting, but 128×128 should be slightly faster.
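As a rough illustration with a filter width of 2 (4 pixels of total slack per dimension): a 32×32 bucket becomes 36×36, about 27% extra pixels; 64×64 becomes 68×68, about 13% extra; 128×128 becomes 132×132, only about 6% extra.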
Do any of the light parameters support shader networks?
At the time of writing (Arnold 4.0.12.0) shader networks are only supported in the color parameter of the quad_light and skydome_light nodes, and in the light filters (the filters parameter) of all the lights.
How does the exposure parameter work in the lights?
In Arnold, the total intensity of the light is computed with the following formula: color * intensity * 2^exposure. You can get exactly the same output by modifying either the intensity or the exposure. For example, intensity=1, exposure=4 is the same as intensity=16, exposure=0. Increasing the exposure by 1 results in double the amount of light.
The reasoning behind this apparent redundancy is that, for some people, f-stops are a much more intuitive way of describing light brightness than raw intensity values, especially when you’re directly matching values to a plate. You may be asked by the director of photography — who is used to working with camera f-stop values — to increase or decrease a certain light by «one stop». Other than that, this light parameter has nothing to do with a real camera’s f-stop control. Also, working with exposure means you won’t have to type in huge values like 10,000 in the intensity input if your lights have quadratic falloff (which they should).
If you are not used to working with exposure in the lights, you can simply leave the exposure parameter at its default value of 0 (since 2^0 = 1, the formula then simplifies to: color * intensity * 1).
How do the various subdiv_adaptive_metric modes work?
- edge_length: patches will be subdivided until the max length of the edge is below subdiv_pixel_error (regardless of curvature).
- flatness: patches will be subdivided until the distance to the limit surface is below subdiv_pixel_error (regardless of patch/edge size). This usually generates fewer polygons than edge_length and is therefore recommended.
- auto: uses the flatness mode unless there is a displacement shader attached to the mesh, in which case it uses the edge_length mode. The rationale here is that if you are going to displace the mesh you probably don’t want the subdivision engine to leave big polygons in flat areas that will then miss the displacement (which happens at the post-subdivision vertices).
Should I use a background sky shader or a skydome_light?
The skydome_light will most of the time be more efficient, in both speed and noise, than hitting the sky shader with GI rays. The only situation where using a sky shader may be faster than the skydome_light is when the environment texture is a constant color or has very low variance. There are various reasons why using skydome_light is more efficient:
- The skydome_light uses importance sampling to fire rays to bright spots in the environment, therefore automatically achieving both soft and sharp shadows; sampling the sky shader with GI rays cannot achieve hard shadows in reasonable time, as you would need huge numbers of GI samples.
- The environment map lookups for the skydome_light are cached rather than evaluated at render time. Since texture lookups via OIIO are very slow, this caching results in a nice speedup, usually 2-3x faster than uncached (if you are curious, you can switch to uncached lookups by setting options.enable_fast_importance_tables = false and measure the difference yourself).
- The skydome_light is sampled with shadow rays, which can be faster than GI rays because shadow rays only need to know that any hit blocks the light (rather than finding the first hit). This also means the sampling quality for the skydome_light is controlled via skydome_light.samples, whereas the quality for a background sky is controlled via the GI_{diffuse|glossy}_samples. This subtle distinction is very important: skydome_light is direct lighting and sampled with shadow rays, whereas the background sky shader is indirect lighting and therefore sampled with GI rays.
How does the ignore_list parameter from the options node work?
It tells the renderer to ignore nodes filtered by type. The following example will ignore, at scene creation, all of the occurrences of lambert, standard and curves nodes:
options
{
...
ignore_list 3 1 STRING lambert standard curves
}
Which coordinate space does Arnold assume for a vector displacement?
The displacement shader should output a vector in world space.
How do the different subdiv_uv_smoothing modes work?
The subdiv_uv_smoothing setting is used to decide which subset of UV vertices on the control mesh get the Catmull-Clark smoothing algorithm applied to them. Those vertices that do not get smoothing applied to them are considered «pinned», since the apparent effect is that their UV coordinates are exactly the same as the corresponding coordinates on the control mesh. The subdiv_uv_smoothing modes work as follows:
- smooth (none pinned): Catmull-Clark smoothing is applied to all vertices of the UV mesh, indiscriminately. This mode is really only useful for organic shapes with hidden UV seams whose coordinates do not have to precisely follow any particular geometric edges.
- pin_corners: Catmull-Clark smoothing is applied to all vertices of the UV mesh except for those that are connected to only two edges in UV coordinate space (valence = 2). This mode is the default in Arnold (for legacy reasons), however pin_borders is probably a more useful default setting in practice.
- pin_borders: Catmull-Clark smoothing is applied to all vertices of the UV mesh except for those that are connected to an edge that forms part of only one polygon. This mode is possibly the most useful of the four and we would suggest trying this one first. In this mode it is guaranteed that the UV coordinate seams on the subdivided mesh will exactly match the corresponding edges of the control mesh, making it much easier to place textures at these seams while still applying Catmull-Clark smoothing on all interior vertices.
- linear (all pinned): Catmull-Clark smoothing is not applied to any vertex. This mode can prove useful when rendering objects with textures generated directly on a subdivided surface, like ZBrush models exported in «linear» mode.
How are the polymesh normals computed when subdivision is enabled?
Vertex normals will be computed using the limit subdivision surface, overriding any explicit values set in the nlist parameter. The vertex normals specified in nlist are only used for non-subdivided meshes.
What is the Z value for rays that do not hit any object?
The Z value for rays that do not hit any object is controlled by the far_clip parameter on the camera. By default this camera parameter has a value of 1.0e30f (AI_INFINITE).
Why does iterating through an AiSampler after the first split in the ray tree always return one sample?
Splits are typically caused by the BRDF integrator methods, which can fire GI_diffuse_samples^2 or GI_glossy_samples^2 rays. In this situation there would be a combinatorial explosion in the number of rays as the ray depths were increased. Therefore, Arnold «splits» the integrals into multiple rays only once, at the first hit. After that first split, we only follow one single ray, in the spirit of pure path tracing. This may seem like it introduces severe noise, as intuitively the higher bounces seem undersampled compared to the first bounce. But in fact what we are doing is concentrating most of the rays where they matter the most.
Since the AiSampler API was added for users writing their own custom BRDFs, we decided it was best to automatically handle splitting versus no splitting at the renderer level. Thus the sample count is automatically reduced after the first split, which affects the sample counts for both our internal integrators as well as any custom integrators that use AiSampler.
How does the time_samples parameter from the cameras work?
time_samples is used to remap Arnold’s «absolute time» to the «relative time» of the motion keys. If the keys are evenly spaced between 0 and 1, time_samples is not needed; if they are not, you can remap time by specifying the time_samples array.
The syntax is the following:
time_samples 2 1 FLOAT 0 1
This means an array of 2 elements with 1 key (you can’t motion blur time_samples).
As an example, let’s say there is a camera with subframe matrix keys (M1, M2, M3) at times (0, 0.25, 1.0).
By default Arnold will assume those keys correspond to (0.0, 0.5, 1.0) and when absolute time t is equal to 0.5 Arnold will use M2.
If you set time_samples to (0, 0.5, 0.66, 0.83, 1.0), you define a piecewise-linear function F that will map 0 to 0, 1 to 1, 0.25 to 0.5, 0.5 to 0.66, etc. You can use as many linear segments as needed to make the remap smoother.
When absolute time t is equal to 0.25, Arnold will use M2, because the remap function gives F(0.25) = 0.5, and 0.5 in «relative» time corresponds to matrix M2.
Questions about the API and writing shaders:
What is the difference between AiNodeGet* and AiShaderEvalParam*?
The main difference between evaluating a shader parameter via AiShaderEvalParam* and evaluating via AiNodeGet* is that the former supports shading networks, whereas AiNodeGet* just returns the «static» parameter value without evaluating whatever other shader may be plugged in there. The other difference is that you can only call AiShaderEvalParam* from within a shader’s shader_evaluate method, whereas you can call AiNodeGet* anywhere.
Also, calls to AiShaderEvalParam* have much lower overhead than equivalent calls to AiNodeGet* because:
- AiNodeGet* operates on strings, which requires slow string comparisons, hashing, etc.
- AiShaderEvalParam* is optimized to work on shader nodes, whereas AiNodeGet* can operate on any node types and thus requires more internal checks and book-keeping.
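Here is a minimal, hedged sketch of a complete Arnold 4 shader illustrating the difference (the shader and its «roughness» parameter are made up for the example):

#include <ai.h>
#include <string.h>

AI_SHADER_NODE_EXPORT_METHODS(SampleShaderMtd);

enum { p_roughness };

node_parameters
{
   AiParameterFLT("roughness", 0.3f);
}

node_initialize {}
node_update {}
node_finish {}

shader_evaluate
{
   // evaluates whatever shader network is linked to "roughness" at this
   // shading point (and falls back to the static value if nothing is linked)
   float roughness = AiShaderEvalParamFlt(p_roughness);

   // AiNodeGetFlt(node, "roughness") would only ever return the static
   // value, and is far too slow to call once per shading sample
   sg->out.FLT = roughness;
}

node_loader
{
   if (i > 0)
      return false;
   node->methods     = SampleShaderMtd;
   node->output_type = AI_TYPE_FLOAT;
   node->name        = "sample_shader";
   node->node_type   = AI_NODE_SHADER;
   strcpy(node->version, AI_VERSION);
   return true;
}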
How do I retrieve user-data from a shader?
In shader_evaluate, use the AiUDataGet* API, regardless of the storage qualifier of the user-data (constant, uniform or varying). Note that, for constant user-data, it is also possible (but very slow) to retrieve user-data with AiNodeGet*. For performance reasons, you should never call AiNodeGet* from a shader’s shader_evaluate method. It’s OK to call AiNodeGet* from node_initialize and node_update, because these methods are only called once.
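A minimal sketch inside shader_evaluate, assuming a hypothetical float user parameter called «my_mask» has been declared and set on the object:

float mask;
if (AiUDataGetFlt("my_mask", &mask))
{
   // the parameter exists on this object (constant, uniform or varying);
   // fold it into the shader result as needed
}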
What is the difference between AiNodeGet* and AiUDataGet*?
The AiNodeSet/Get* API was designed for scene construction and manipulation, whereas the AiUDataGet* API was designed to be called at shading time.
Are shader parameter evaluations cached with AiShaderEvalParam*?
Arnold does not cache any parameter evaluations (though this is scheduled for a future release). Evaluating the same parameter twice will evaluate the whole shading network twice, so care must be taken in shaders to avoid redundant evaluations.
Can I use the AiNodeGetLink method inside a shader_evaluate?
Technically, yes, you can call AiNodeGetLink(), AiNodeGetFlt() and similar APIs in shader_evaluate. However, it is not recommended, because these calls are slow, involving several string comparisons, hash lookups and other potentially expensive operations that can severely affect render times. The solution is to precompute these calls in the node_initialize or node_update callbacks, which are executed once per session or per IPR pass, respectively, rather than per shading sample.
Using Anisotropic speculars
You need to pass tangent (u,v) vectors to account for anisotropy effects in several BRDF functions (e.g. WardDuer). But these BRDFs are extremely picky about the N, u and v vectors being orthonormal. You could try to use sg->dPdu and sg->dPdv to build the tangent vectors, but there is no guarantee that they will be at right angles to each other. In fact, the only time dPdu and dPdv are at a right angle to Nf is when Nf == Ngf. Secondly, the sg->dPdu and sg->dPdv vectors are constant values per face. This means that you will almost certainly see the edges between faces when using these vectors as a base for anisotropic shaders, making this solution unusable unless the faces are very small.
If anisotropy is important to you, the most common way of generating an orthonormal basis for these functions is to define a «polar» vector in your shader (for example AI_V3_Z) and deriving a u and v vector from sg->Nf and the polar vector like so:
AtVector u = AiV3Cross(polar_vector, sg->Nf);
AtVector v = AiV3Cross(sg->Nf, u);
u = AiV3Normalize(u);
v = AiV3Normalize(v);
The above technique combined with a «rotation» map is usually enough to get anisotropy oriented any way you want.
The polar solution does have its drawbacks however. You will need special treatment for faces that are oriented exactly towards the pole (the cross product gives you null vectors as a result there) and the anisotropy does not «stick» to deformed meshes. To get the tangents to «stick» to deformed meshes you could use textures to define them (similar to the normal mapping technique) or pass tangents in as varying per-vertex data that your shader reads. The problem in this case is that currently we only support one value per vertex for the tangents, so complex uv layouts with islands will give problems.
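A hedged sketch of handling the degenerate case mentioned above, to be placed before the cross products of the earlier snippet:

AtVector polar = AI_V3_Z;
float d = AiV3Dot(polar, sg->Nf);
if (d > 0.999f || d < -0.999f)
{
   // Nf is (nearly) parallel to the pole, so the cross products would
   // collapse to zero; fall back to another axis
   AtVector alt = {1.0f, 0.0f, 0.0f};
   polar = alt;
}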
How do the visibility and the sidedness params of a node work?
They are bitmasks built out of the AI_RAY_* ray type flags.
By default both visibility and sidedness are set to 65535 (0xFFFF), i.e. all ray types enabled. So, if you want a polymesh to be visible only to camera rays, its visibility value would be AI_RAY_CAMERA. And if you want it to be visible to camera and reflection rays, you would set it to AI_RAY_CAMERA + AI_RAY_REFLECTED.
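A hedged sketch of composing such a mask with the C API; the full list of AI_RAY_* flags (AI_RAY_CAMERA, AI_RAY_SHADOW, AI_RAY_REFLECTED, AI_RAY_REFRACTED, AI_RAY_DIFFUSE, AI_RAY_GLOSSY, AI_RAY_ALL, ...) and their exact bit values live in ai_ray.h, so check the header in your Arnold version:

// visible to camera and reflection rays only
AtByte vis = (AtByte)(AI_RAY_CAMERA | AI_RAY_REFLECTED);
AiNodeSetByte(mesh, "visibility", vis);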
What values does the disp_map polymesh parameter support?
If the shader connected to the parameter outputs a float value, then displacement happens along the normal to the surface. But if it outputs a 3D vector, that vector is the displacement itself, in object coordinates.
There is no native support for tangent-space displacement vectors in Arnold. However, you should be able to implement a displacement shader that interprets incoming tangent-space vectors and transforms them on-the-fly to the coordinate space defined by (N, dPdu, dPdv).
How can I attach user data to a ray that can be queried in subsequent ray hits?
You can attach arbitrary data to the shading state that’s passed down the ray tree using the message passing mechanism. For example:
To set a value:
float ior = 1.2f;
AiStateSetMsgFlt("ior", ior);
...
To get a value:
float ior;
if (AiStateGetMsgFlt("ior", &ior))
{
...
}
How does the alpha output of a shader connected to options.background affect the beauty pass?
The alpha component of the beauty pass represents the cumulative opacity of a pixel, which is determined by the out_opacity shader global and not the «A» component of an RGBA shader’s result.
How to fill the different curves parameters?
The number of varying values for a given curve is based on the following formulae:
- num_segments = (num_points - 4)/vstep + 1
- num_varying = num_segments + 1
Step size for each of the curve bases:
- bezier: 3
- b-spline: 1
- catmull-rom: 1
- hermite: 2
- power: 4
Essentially, there is one varying value at the start of each segment, with one extra value at the end of the curve. This way the varying values are interpolated smoothly from the start to the end of each segment, remaining continuous with the next and previous segments. Each curve basis defines its number of segments differently based on the total number of control points, hence the differing step sizes above.
A couple of examples:
- You have a bezier-basis curve you want to specify with 13 control points. This means there are (13-4)/3+1 = 4 bezier segments, and so you need 5 radii.
- You have a B-spline or Catmull-Rom curve you want to specify with 14 control points (something you can’t do with beziers, whose control-point count must be a multiple of 3, plus 1). This means there are (14-4)/1+1 = 11 segments, so you need 12 radii.
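A small sketch of the formulae above as code (vstep comes from the basis table: bezier=3, b-spline=1, catmull-rom=1, hermite=2, power=4):

int curve_num_varying(int num_points, int vstep)
{
   int num_segments = (num_points - 4) / vstep + 1;
   return num_segments + 1;   // one value per segment, plus one at the end
}
// curve_num_varying(13, 3) == 5 and curve_num_varying(14, 1) == 12,
// matching the two examples above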
A polymesh node in an ass file has no nsides array defined. Does Arnold default to any value?
If nsides is not defined Arnold considers the polymesh node to be a triangular mesh.
How to declare user-defined parameters in an .ass file (AiNodeDeclare)
To add a user-defined parameter in an .ass file, you need to first declare it, just as AiNodeDeclare() does in the API. This can be done for all node types and is pretty useful when creating a procedural DSO that reads global information from the options block or the current camera. It’s similar to the RenderMan RiAttribute and RiOptions calls; the only difference is that, since Arnold isn’t a state machine, you might have to duplicate data on your procedural.
The declare line takes similar parameters to AiNodeDeclare(node, name, declaration):
- node is implicit and maps to the node currently being declared.
- name is the user-parameter name (the first argument).
- declaration is a class and a type:
- class := { constant, uniform, varying }
- type := { BYTE, INT, BOOL, FLOAT, RGB, POINT, STRING, etc. }
After declaration, the parameter can be set as if it was a normal parameter.
options
{
...
declare my_user_param constant STRING
my_user_param "I want to render Rainbows with Unicorns."
...
}
Do searchpaths support environment variable expansion?
The texture_searchpath, procedural_searchpath and shader_searchpath options support expansion of environment variables delimited by square brackets. This works both using the API and inside .ass files as shown below:
AtNode *options = AiUniverseGetOptions();
...
AiNodeSetStr(options, "procedural_searchpath", "[MY_ENVAR_PATH]/to/somewhere");
options
{
AA_samples 4
GI_diffuse_samples 4
...
procedural_searchpath "[MY_ENVAR_PATH]/to/somewhere"
}
Is there a way to view the output image of a render in progress?
There is a workaround to see the progress of a render without a dedicated display driver (a small API sketch follows the steps below).
- Set your output to be a tiled EXR; zip compression is fine.
- Set the bucket scanning method to ‘top’.
- Use a viewer like imf_disp.
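A hedged sketch of that setup through the C API (driver parameter names may vary slightly between Arnold versions; the driver also has to be referenced from the options «outputs» strings as usual):

AtNode *driver = AiNode("driver_exr");
AiNodeSetStr(driver, "name", "progress_driver");
AiNodeSetStr(driver, "filename", "in_progress.exr");
AiNodeSetStr(driver, "compression", "zip");
AiNodeSetBool(driver, "tiled", true);
// render buckets top-down so the image fills in from the top of the frame
AiNodeSetStr(AiUniverseGetOptions(), "bucket_scanning", "top");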
What parameters support motion blur keys?
Geometry:
- For all shape nodes: matrix.
- For the polymesh node: vlist and nlist.
- For the curves node: control points, radius and orientations.
- For the points node: points, radius, aspect, rotation.
Cameras:
- matrix and associated positional/orientation attributes like position, look_at, up.
Lights:
- matrix and associated positional attributes: position, look_at, up, direction.
Shaders:
- Nothing is interpolated in the built-in Arnold shaders, with the exception of gobo.rotate.
Is there any difference between having subdiv_type = none and subdiv_type = linear when subdiv_iterations is set to zero in a polymesh?
On polymesh nodes with subdiv_type set to linear and subdiv_iterations set to zero, the polymesh will ignore its provided set of user-defined shading normals and will issue a warning if they exist. On polymesh nodes with subdiv_type set to none and subdiv_iterations set to zero, normals defined by the user will not be ignored. So the output could change quite a bit if the exporting application (Katana, for example) is providing normals for the polymesh it generates.
How is the Z-depth AOV stored?
The Z AOV will have non-normalized depth data in the alpha channel between the near and far camera clipping planes, using the plane distance by default.
Does min_pixel_width apply to all rays?
The curves node sets the compensating «geo_opacity» user data whenever auto-enlargement kicks in, and the hair shader is expected to read it and fold it into its opacity, for example:
AtRGB opacity = AiShaderEvalParamRGB(p_opacity);
// this piece of user-data is automatically set by the curves node when
// using auto-enlargement (min_pixel_width > 0)
float geo_opacity;
if (AiUDataGetFlt("geo_opacity", &geo_opacity))
opacity *= geo_opacity;
if (AiShaderGlobalsApplyOpacity(sg, opacity))
return;
// early out for shadow rays and totally transparent objects
if ((sg->Rt & AI_RAY_SHADOW) || AiColorIsZero(sg->out_opacity))
return;
There will be some performance hit from dealing with the opaque hairs during shadow ray casting. However, most of that time should be made up by the reduction in noise (via the reduction in aliasing from the min_pixel_width optimization itself). Your mileage may vary, so please let us know if you run into any real trouble here.