How about setting up a 10-minute survey with multiple-choice or numerical questions like "what is your industry?", "which form of antialiasing do you use the most?", "what percentage of your projects involves rendering animations as opposed to stills?", "do you frequently run out of RAM?" and so forth. You could probably get 50-100 responses at least from the people on this forum... probably many more. I know that's not nearly as good as UI statistics -- but it's better than nothing.
Absolutely, it's one of the things I plan to do, yes.
Hi all. Very nice thread. I learned a lot from it.
I have a question and can't find a proper answer searching the internet.
When I use the "fixed" image sampler with 1 subdiv, one sample is taken in the center of each pixel. If I use 2 subdivs, 4 samples are taken, but are they 1 centered and 3 randomized using the MC algorithm, or all 4 randomized?
I know the question may be irrelevant, I'm just curious about it.
Things are exacerbated by the fact that V-Ray is used for so many different things right now across many industries, each of them having their own slightly specific needs.
One option to solve this problem could be to split V-Ray in two. Like Photoshop and Lightroom, sort of, or Windows and Windows Server, or 3ds Max and 3ds Max Design (ok, forget about this last one).
Well, the UI HAS gotten a lot of attention for 3.0, and it keeps getting hammered at, even if you can't see the results yet.
That said, I am NOT fond of user-based choices, however one chooses to poll them.
There is such a thing as a logical approach (and depending on markets, more than one, of course), and I strongly feel it is the developer's duty (aided by selected, proven professionals who would approach the issue from a logical standpoint, providing clear reasoning for each suggestion) to make a clear choice on how to organize the UI components to extract the best from his software.
Take ANY competitor of V-Ray, and all of them have the exact same issues: either too few options, leading to insane render times, or overly cluttered UIs the user has to get used to.
Take the VRayMtl for example: the user should be concerned with look development, with a UI which clearly shows the parts affecting that, and with any and all of the speed/quality tweaks left hidden until the user makes a conscious decision to touch them, in full knowledge that the part being tweaked is potentially risky business.
The same goes for other parts, like the render settings.
I found it a TERRIBLE choice to have the UI default to Reinhard color mapping (albeit still linear in its default settings), PRECISELY because the dropdown does not say "Linear".
It's plain WRONG in this day and age not to be under LWF, and while one CAN make whatever choice one wishes, the software should never ENCOURAGE making the wrong ones.
For the thing that comes right after wrong color mapping settings is images and render elements saved out in 8bpc (yes, I HAVE seen this).
Yes, the highlights aren't that burned, and yadda yadda, but it's still wrong not to do post on one's LINEAR renders, and it's a concept we're dragging on from days long gone, when we all had a tenth of the technology currently available.
Rather than bending to the will of this or that market, the dev should choose what is right (by maths and logic, not opinion!) and present that as a default.
It's the user's task to go and LEARN why that is the way it is, and then step up to the game he's playing for a living (me, you, anybody).
The list of oddities is indeed quite long, and some of those are bad, bad conundrums to get out of, there's no mistake about that.
However, I feel getting out from under the growing house of cards of piled-up compatibilities and nods to old, wrong workflows is much better done sooner rather than later, and that's precisely the direction 3.0 has taken.
Most of the effort came early on, admittedly, as the major new release was a big chance to implement those changes, but work is still ongoing, and I am ever so glad Vlado is keeping that alive.
Maybe a more aggressively edited UI could be published in selected betas, along with a description of the logic behind each choice, so as to have us users somehow start afresh with our understanding of the various components and their associated UIs.
But that'd potentially be a lot of work, and surely something I wouldn't want on the already well-burdened Vlado (I love his work just fine when he goes away for a week and comes back with a BF GI that's 2.5x faster than the week before...).
Hi.
First of all I want to thank Vlado for his reply and ^Lele^ for the Fixed/MSR tip. I'm testing some interior scenes, and sometimes Fixed 4 and MSR 256 without any adaptive DMC gives me the fastest results I've ever seen using BF, and very clean ones. Better than using the other methods discussed in this thread.
Sorry about my bad English; I'll try to explain myself.
The problem I have now is this. Say I set up my MSR without any adaptive amount in order to get the worst parts of my scene clean, just to find the exact number of samples for test purposes: for example, I get perfect results using 4096/4096 samples per pixel. Now I repeat the process for the areas of my scene with less noise and get clean results using 512/512 samples per pixel. So the range of samples needed to get clean results everywhere is 512/4096. In my mind, if I now tweak the adaptive amount to get exactly 512/4096 samples per pixel and use a low enough noise threshold, I expect to get almost the same noise but in less time (because before I was oversampling some areas). But even if I use a 0.001 noise threshold I always get worse results (much more noise), so in order to reach the same noise level I was getting without any adaptive amount, I must raise the samples per pixel a lot, and the render time gets worse. Is there a way to create very specific boundaries for V-Ray to use adaptive DMC effectively?
Yes. I start with 0.0 just to calculate the exact number of secondary samples in some specific area, to establish the lower/upper limits I want. Then I enable adaptivity, something like 0.7 for example. Using adaptivity the render is faster, but it looks like V-Ray never uses the maximum number of secondary samples. I'm now testing even lower noise threshold values (0.0007), and it looks like I can gain some render time and keep the same noise level.
PS: the scene is an interior with almost even GI everywhere.
That's right: keep lowering it, until those areas get cleaner.
You will only force more specialised rays anyway, up to your set maximum.
It should be faster in the cleaner areas, and just slightly slower in the more sampled ones.
Going through the adaptivity logic isn't for free (as mentioned in a post above) so your mileage may vary.
In fact, you MAY get, in some shots, better results sampling evenly across the image plane.
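To put rough numbers on that trade-off: the best-case saving from adaptivity can be estimated with simple arithmetic. This is a purely hypothetical example (the 20% figure is an assumption, and the sample counts are just the ones quoted in this thread); real savings will be smaller because, as said above, the adaptivity logic itself isn't free.

```python
# Hypothetical cost estimate: uniform sampling at the worst-case rate
# vs. an ideal adaptive sampler that gives each region only what it needs.
hard_fraction = 0.20            # assumed share of pixels needing the full rate
uniform_cost = 4096             # samples per pixel everywhere (worst case)
adaptive_cost = hard_fraction * 4096 + (1 - hard_fraction) * 512

print(adaptive_cost / uniform_cost)  # ~0.3, i.e. roughly a 3x ideal saving
```

If most of the frame is hard (say 80% needs the max), the ideal saving shrinks to almost nothing, which is exactly the case where even sampling across the image plane can win.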
*) Set your AA to the minimum EDGE quality you need (fixed-4, for example, i.e. 16 rays per pixel to find objects in the scene and proceed to anti-alias their edges).
Hey Lele,
I tried to apply your recommendation but I don't understand your first line.
Do you mean that I have to set :
* the image sampler, Type : Fixed at 4?
* the image sampler, Type : adaptive 1/16?
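If it helps, the two numbers in that line are the same quantity expressed two ways: in V-Ray the sampler's "subdivs" value is squared to get the actual rays per pixel, so "fixed-4" and "16 rays per pixel" describe one setting. A minimal sketch of that arithmetic:

```python
# V-Ray squares the "subdivs" value to get the actual number of
# AA rays traced per pixel (Fixed sampler, subdivs = 4 -> 4 x 4 = 16).
def aa_rays_per_pixel(subdivs: int) -> int:
    return subdivs * subdivs

print(aa_rays_per_pixel(4))                        # Fixed-4 -> 16 rays/pixel
print(aa_rays_per_pixel(1), aa_rays_per_pixel(8))  # 1 and 64 rays/pixel
```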
I want to share my approach to get your feedback in case I'm doing something crazy. It's working great for me in all kinds of situations.
All material, lights, etc set as default 8 subdiv.
"Divide shading subdiv" active
1. First I set up AA using Fixed, focus on the most difficult part of the image and raise AA until I get clean edges. (E.g. AA=4, but it varies per scene.)
2. Disable adaptive DMC (just for the initial calculations, in order to get exact values; I will change it later): set Adaptive amount = 0.
3. Locate the noisiest part of the image (no matter where the noise comes from). Region-render it and raise "Min shading rate" until I get the level of noise I want (you usually need big numbers here; 128-256 works for me most of the time). E.g. you get clean noise using 256; now check the "Per pixel" secondary samples in the "Brute force" tab, in this example 4096/4096, and take note of this number.
4. Locate the part of the image with the least noise, and lower the "Min shading rate" as much as I can before noise appears in this region (for example, 32). In the "Brute force" tab you can take note of the new per-pixel information (using 32 it should be 512/512).
5. So now I know the minimum number of secondary samples the scene needs (512) and the maximum (4096) to get the level of noise I chose; every secondary sample above 4096 would be a waste of time.
6. To let V-Ray move between these two values, use MSR=256 and raise the Adaptive amount until the "Per pixel" readout in the "Brute force" tab says 512/4096 (in this example, 0.75).
7. Focus again on the noisiest region of the scene and lower the "noise threshold" until it's clean (you need very low values to force V-Ray to use the 4096 samples, but in the worst-case scenario the max secondary samples are still capped at 4096), or you can raise MSR a little to give V-Ray more room.
The beauty of this approach is that you don't need to focus on where the noise comes from, because you are changing all the secondary subdivs at once using MSR and giving V-Ray enough room to adapt. The problems I have using the Adaptive image sampler are the incredibly big margins of adaptivity I get, the inevitable oversampling of the heavy-AA zones of the image, plus the tedious process of optimizing every material/light individually. And in my personal tests I get better render times this way.
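The bookkeeping behind this method can be sketched as follows. This is a rough sketch under one assumption: that with "Divide shading subdivs" on, the "Per pixel" readout works out to (AA subdivs)² × Min shading rate, which is consistent with the 512/512 and 4096/4096 figures quoted in this thread. The adaptive amount itself is still found empirically by watching the readout, as described above.

```python
# Sketch of the per-pixel sample bookkeeping behind the workflow above.
# Assumption: "Per pixel" secondary samples = (AA subdivs)^2 * MSR,
# matching the 512 and 4096 figures quoted in the post.

def secondary_samples_per_pixel(aa_subdivs: int, msr: int) -> int:
    return aa_subdivs ** 2 * msr

aa = 4                                            # Fixed sampler, subdivs 4 -> 16 rays/pixel
max_spp = secondary_samples_per_pixel(aa, 256)    # noisiest region clean at MSR 256
min_spp = secondary_samples_per_pixel(aa, 32)     # cleanest region clean at MSR 32

print(min_spp, max_spp)  # 512 4096 -> the 512/4096 target range
```

With those two numbers pinned down, the only remaining knob is the adaptive amount, raised until the readout shows the min/max pair you computed.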
I tried that method, but somehow the numbers don't really work. I set my AA subdivs for the Fixed AA to 8 (because of strong DOF), the MSR for my noisiest areas to 32, and for the least noisy areas to 4. Whatever I type in for the adaptive amount afterwards, I cannot get it to the correct numbers of 128 min and 2048 max. I can only get it down to 1024/2048, which is still too much. What am I doing wrong?
Very interesting reading. I'll be sure to give this methodology a go.