Can stuck bucket syndrome finally be fixed, please?

I don't think I have used bucket rendering in two years. Why is it still a thing?
Bobby Parker
www.bobby-parker.com
e-mail: info@bobby-parker.com
phone: 2188206812
My current hardware setup:
- Ryzen 9 5900x CPU
- 128GB Vengeance RGB Pro RAM
- NVIDIA GeForce RTX 4090 x2
- Windows 11 Pro
-
Originally posted by glorybound: I don't think I have used bucket rendering in two years. Why is it still a thing?
It's an option with value, certainly not a leftover.
In the specific case of the stuck bucket described above, even with progressive rendering one would see V-Ray sit obstinately on that same group of pixels, with each pass creating more fireflies that would only be cleaned up many passes later, while new ones were born.
Unless one left it running for a long time, it would definitely look like the image wasn't converging at all in those areas.
Lele
Trouble Stirrer in RnD @ Chaos
----------------------
emanuele.lecchi@chaos.com
Disclaimer:
The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.
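As a rough aside on why those fireflies take so long to fade: below is a toy Python sketch of the statistics involved, not V-Ray code in any way (the radiance value, firefly brightness, and probability are invented for illustration). A rare, very bright path makes the running per-pixel mean jump, and since every further pass only adds one more sample to that average, the spike decays only as 1/n, lingering for many passes while new spikes keep arriving.

Code:
import random

random.seed(7)

TRUE_VALUE   = 1.0    # assumed "honest" radiance of the troublesome pixel
FIREFLY      = 500.0  # assumed brightness of a rare, spiky path
FIREFLY_PROB = 0.002  # assumed chance per pass of sampling such a path
PASSES       = 5000

running_sum = 0.0
for n in range(1, PASSES + 1):
    # One progressive pass adds one more sample to this pixel's running mean.
    is_firefly = random.random() < FIREFLY_PROB
    sample = FIREFLY if is_firefly else random.gauss(TRUE_VALUE, 0.2)
    running_sum += sample
    estimate = running_sum / n
    if is_firefly:
        print(f"pass {n:4d}: firefly sampled, estimate jumps to {estimate:.2f}")
    elif n % 1000 == 0:
        print(f"pass {n:4d}: estimate {estimate:.2f} (true value ~{TRUE_VALUE})")

Each "firefly sampled" jump is followed by a long, slow drift back towards the true value, which is exactly the "not converging" look described above.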
-
Originally posted by kosso_olli: I think we sent some scenes already. Do you need them again?
Regards,
Oliver

If it was that easy, it would have already been done.
Peter Matanov
Chaos
-
Just put a giant high-poly model with refraction glossiness and reflection glossiness in a small area of the frame and add a VRaySun. There's your test scene. It will take a VERY long time to do one tiny block.
Are you implementing something special? I'm trying to figure out why some simple test scene would not work. If you are somehow able to have the most complex blocks render first (via data from the light cache), so that they are done before the other blocks, that might be helpful. Though you can still get stuck on them.
-
Originally posted by Joelaff: Are you implementing something special? Trying to figure out why some simple test scene would not work. If you are somehow able to have the most complex blocks render first (via data from the light cache), so that they are done before the other blocks, that might be helpful.
If we knew in advance what the hard bit was, we'd already know the answer to the question.
The LC, for example, heavily under-samples the scene, and at that stage has no idea of the noise threshold to be achieved.
Given that rendering is a statistical affair, the LC would miss some of the troublesome areas simply by virtue of poor sampling of the problem domain (in other words, that glinting sixteenth of a pixel which needs to be brought down to a 0.005 noise threshold would not be picked up).

Lele
Trouble Stirrer in RnD @ Chaos
----------------------
emanuele.lecchi@chaos.com
Disclaimer:
The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.
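To put rough numbers on the under-sampling argument, here is a back-of-the-envelope Python sketch. The frame size, sample counts, and the uniform-sampling assumption are all mine for illustration; the real Light Cache does not work like this, the point is only how unlikely a coarse pre-pass is to ever see a sub-pixel glint.

Code:
# Chance that a coarse, uniformly distributed pre-pass ever lands a sample on
# a glint covering one sixteenth of a pixel. Illustrative numbers only.
width, height = 1920, 1080
frame_pixels  = width * height
glint_area    = 1.0 / 16.0                  # the glint covers 1/16th of one pixel
hit_prob      = glint_area / frame_pixels   # chance one uniform sample lands on it

for presamples in (100_000, 1_000_000, 10_000_000):
    p_miss = (1.0 - hit_prob) ** presamples
    print(f"{presamples:>10,} pre-pass samples -> P(glint never seen) = {p_miss:.3f}")

Even ten million uniformly placed pre-pass samples miss that sixteenth of a pixel roughly three times out of four, so a bucket ordering derived from such a pass could not reliably flag it as "hard" in advance.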
-
No doubt the light cache is far from a perfect method, but it’s an easy way to get at least some idea of what is going on in the scene.
No doubt it is a complicated problem. Parallel processing always is.
Eager to see what you folks come up with.
-
Originally posted by Joelaff: Eager to see what you folks come up with.
Notice the settings are kept noisy so you can watch the divisions happen in realtime within a minute or two.
Also, the bucket size is chosen big on purpose because it's generally quicker than with smaller buckets (less border to render twice) and because it's easier to see the splitting.
The algo scales just as well with longer renders, with higher sampling and lower noise threshold.
It has some internal name, but I think of it as either the "mutual aid bucket sampler" or the "game of life bucket sampler" (where samples are food).
I find it mesmerizing, my OCD is very happy with Rado's work, and I keep abusing the many-core machine to see what a given frame will subdivide into.
The algorithm is elegantly simple and effective: the crux of it is that the CPUs will stay as busy as possible for as long as possible, adaptively, regardless of screen content.
This makes it inherently change-resilient (animation, changes to lighting, geometry, textures, the presence or absence of any amount of DoF or motion blur, and so on), it wastes as few CPU cycles as possible (doing clever stuff so as not to throw away work already done), and it should nearly always result in a quicker time to a given noise threshold.
This is about the best that can be achieved without any loss of image quality (gloss on shaders, light intensities, and so on).
Work is ongoing and it's not yet ready for primetime.

Lele
Trouble Stirrer in RnD @ Chaos
----------------------
emanuele.lecchi@chaos.com
Disclaimer:
The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.
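For anyone curious what the "mutual aid" idea might look like mechanically, here is a minimal Python sketch of my reading of the description above. To be clear, this is not Rado's algorithm and nothing here comes from the V-Ray source: the Bucket class, the thresholds, and the fake "remaining work" counter are all invented. The one property it shares with the description is the scheduling behaviour: once the queue of whole buckets runs dry, idle workers carve off half of the busiest in-flight bucket instead of sitting idle, so every core stays busy until the last stubborn region converges.

Code:
import queue
import random
import threading
import time

random.seed(3)

class Bucket:
    """A screen rectangle with a made-up amount of sampling work left in it."""
    def __init__(self, x, y, w, h):
        self.x, self.y, self.w, self.h = x, y, w, h
        # Pretend some regions are far harder to converge than others.
        self.remaining = w * h * random.choice([1, 1, 1, 8])

    def steal_half(self):
        # Hand half of this bucket's area and remaining work to a helper.
        # (Called with the lock held; the original keeps refining its half.)
        if self.w >= self.h:
            half = Bucket(self.x + self.w // 2, self.y, self.w - self.w // 2, self.h)
            self.w //= 2
        else:
            half = Bucket(self.x, self.y + self.h // 2, self.w, self.h - self.h // 2)
            self.h //= 2
        half.remaining = self.remaining // 2
        self.remaining -= half.remaining
        return half

todo = queue.Queue()
for bx in range(0, 512, 128):            # a 4x4 grid of deliberately big buckets
    for by in range(0, 512, 128):
        todo.put(Bucket(bx, by, 128, 128))

active = []                              # buckets currently being refined
lock = threading.Lock()

def worker(wid):
    while True:
        try:
            bucket = todo.get_nowait()
        except queue.Empty:
            # No whole buckets left: split the busiest active bucket and help,
            # rather than idling while one stubborn region is still refined.
            with lock:
                busy = [b for b in active if b.remaining > 1024 and min(b.w, b.h) > 16]
                if not busy:
                    return               # nothing left worth helping with
                bucket = max(busy, key=lambda b: b.remaining).steal_half()
        with lock:
            active.append(bucket)
        while True:                      # "render" the bucket
            with lock:
                if bucket.remaining <= 0:
                    active.remove(bucket)
                    break
                bucket.remaining -= 512  # stand-in for one batch of samples
            time.sleep(0.001)
        print(f"worker {wid}: finished a {bucket.w}x{bucket.h} region at ({bucket.x},{bucket.y})")

threads = [threading.Thread(target=worker, args=(i,)) for i in range(8)]
for t in threads: t.start()
for t in threads: t.join()
print("frame done")

The border cost mentioned above is not modelled at all; the sketch only shows the scheduling side, where large starting buckets simply mean fewer, later splits.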
-
That's really cool looking! I just finished up on Hawkeye and we had thousands of lights and lots of those hot little glints, so this type of thing would likely make a big difference! I'd also love a few more brutal cut-off controls for lights that aren't realistic but would try to cut fireflies off at the source, while the correct version of things gets resolved in more elegant ways in the background!
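Since "cut them off at source" comes up a lot, here is a small Python sketch of what such a brutal control usually boils down to numerically. This is not an existing V-Ray light parameter; the light model and the clamp value are made up purely to show the trade-off: any single-sample contribution above a chosen threshold gets clipped, exchanging a little lost energy for far less pixel-to-pixel variance.

Code:
import random

random.seed(1)

def sample_light():
    # Mostly modest contributions, with the odd huge spiky one (a firefly).
    return 2000.0 if random.random() < 0.002 else random.uniform(0.0, 2.0)

def render_pixel(samples, clamp=None):
    total = 0.0
    for _ in range(samples):
        c = sample_light()
        if clamp is not None:
            c = min(c, clamp)    # cut the firefly off at source, losing some energy
        total += c
    return total / samples

print("unclamped :", [round(render_pixel(256), 2) for _ in range(8)])
print("clamped@8 :", [round(render_pixel(256, clamp=8.0), 2) for _ in range(8)])

The unclamped estimates typically scatter whenever a pixel happens to catch one of the spikes, while the clamped ones agree closely but lose the clipped paths' energy: not realistic, but predictable.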
-
Originally posted by joconnell: I'd love a few more brutal cut-off controls for lights that aren't realistic but would try to cut fireflies off at the source, while the correct version of things gets resolved in more elegant ways in the background!
Lele
Trouble Stirrer in RnD @ Chaos
----------------------
emanuele.lecchi@chaos.com
Disclaimer:
The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.