On Monday, I couldn't see at all.
Maxwell 1.0
-
Dmitry Vinnik
Silhouette Images Inc.
ShowReel:
https://www.youtube.com/watch?v=qxSJlvSwAhA
https://www.linkedin.com/in/dmitry-v...-identity-name
-
Originally posted by vlado
Originally posted by stochastic
I hope they don't give Metropolis light transport / bidirectional path tracing a bad name with all the inflated promises and market hype they made without living up to them, because MLT is still the highest-quality way to go.
If you think logically, for any given algorithm you can always find another algorithm that knows more about the specific situation and gives better precision in less time. For example, if an algorithm knows in advance that caustics are going to appear in a certain place and will be caused by a certain object, it can calculate them faster than an algorithm that doesn't know this and must find them by trial and error. The best possible algorithm is the one that already knows the complete final result before calculating anything - and of course, if that were the case, we wouldn't need to calculate anything at all.
It follows, then, that MLT is certainly not the best algorithm out there. Here are some other reasons why Metropolis sampling is not a very good one:
(*) It cannot use standard QMC sampling (this has been proven in theory); it needs pure random numbers. This means that for relatively easy situations (e.g. exterior daylight scenes), normal QMC sampling will be (a lot) better than MLT - less noise for the same calculation time (a small sampling sketch after this list illustrates the difference).
(*) MLT is a locally adaptive algorithm, meaning that when it finds a difficult place, it "stays" there for a while, trying to resolve the difficulty. Then it moves on, forgetting all about it until it happens to come across the situation again. This requires very little memory, but it will obviously be worse than a "global" adaptive algorithm, which doesn't forget that easily. Furthermore, MLT may get stuck in a "difficult" place, ignoring other parts of the result which are potentially more important (the toy mutation loop sketched further below shows this behaviour).
(*) The MLT algorithm has some parameters for which it is very difficult to find "good" values. These basically control how much time it spends in a difficult place and how large a step it takes as it moves through the sample space. For some scenes one set of parameters works well, while for others it gives a worse result. Finding the values that produce the best result in the shortest time for a given image requires test renders for that particular scene - which is what we are probably trying to avoid in the first place.
(*) This is probably the least important point, but strictly speaking, MLT is not exactly an unbiased algorithm. It is unbiased only in the limit, once you have taken many, many samples, so the intermediate solutions may differ somewhat from the final result. This typically shows up as the image changing its brightness as the calculation progresses.
(*) Last of all, MLT does not fit into "standard" rendering architectures - which is probably one of the reasons you don't see it used much. For most renderers, implementing MLT would require a substantial redesign (possibly with an axe) and would be at odds with the existing image rendering methods. More often than not, it is simply not worth it.
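To make the QMC point from the first bullet concrete, here is a minimal Python sketch - not taken from V-Ray, Maxwell or any other renderer; the integrand, the sample counts and the choice of a Halton sequence are assumptions made purely for illustration. It compares pure random Monte Carlo sampling with quasi-Monte Carlo sampling on a smooth, "easy" integrand, which is the kind of situation where QMC typically pulls ahead.
Code:
import random

# Radical-inverse (van der Corput) value of `index` in the given base;
# Halton points are built from these in different prime bases.
def halton(index, base):
    result, f = 0.0, 1.0 / base
    while index > 0:
        result += f * (index % base)
        index //= base
        f /= base
    return result

# A smooth, well-behaved stand-in for an easy lighting integral.
def integrand(x, y):
    return x * x + y

def estimate_random(n, seed=1):
    rng = random.Random(seed)
    return sum(integrand(rng.random(), rng.random()) for _ in range(n)) / n

def estimate_qmc(n):
    # Halton points in bases 2 and 3 cover the unit square far more evenly
    # than independent random points, so the estimate settles down faster.
    return sum(integrand(halton(i, 2), halton(i, 3)) for i in range(1, n + 1)) / n

if __name__ == "__main__":
    exact = 1.0 / 3.0 + 0.5   # analytic integral of x^2 + y over the unit square
    for n in (64, 256, 1024):
        print(n, abs(estimate_random(n) - exact), abs(estimate_qmc(n) - exact))

On this toy problem the QMC error column typically shrinks much faster than the random one as n grows, which is all the first bullet claims: when the integrand is easy, plain QMC sampling wins.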
About the only positive thing about MLT is that it is indeed a very elegant algorithm. Unfortunately, it doesn't mean that it is a practical one. As mentioned above, there are other adaptive algorithms that may be expected to perform better on average.
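To make the "locally adaptive" and "hard-to-tune parameters" bullets concrete as well, here is a toy Metropolis-style mutation loop over a one-dimensional "path space". Nothing in it comes from a real renderer: the contribution function, the two tuning knobs (large_step_prob, mutation_size) and the bucket count are invented for illustration only, and the loop is a rough sketch of the idea rather than a faithful MLT implementation.
Code:
import math
import random

# Mostly flat "lighting" with one narrow, bright spike - a caustic-like feature.
def contribution(x):
    return 0.2 + 5.0 * math.exp(-((x - 0.7) ** 2) / 0.0002)

def mlt_sketch(n_samples, large_step_prob=0.3, mutation_size=0.05, buckets=20, seed=1):
    rng = random.Random(seed)
    image = [0.0] * buckets      # the "image": a histogram of where energy was deposited
    x = rng.random()
    fx = contribution(x)
    brightness_sum, large_steps = 0.0, 0
    for _ in range(n_samples):
        # The two hard-to-tune knobs: how often to jump somewhere completely new,
        # and how far a small mutation moves through the sample space.
        is_large = rng.random() < large_step_prob
        if is_large:
            y = rng.random()                                  # large step: uniform restart
        else:
            y = (x + rng.gauss(0.0, mutation_size)) % 1.0     # small step: stay nearby
        fy = contribution(y)
        if is_large:
            brightness_sum += fy    # uniform samples double as the brightness estimate
            large_steps += 1
        # Metropolis acceptance: bright paths are revisited, dim ones abandoned -
        # the "stays there for a while" behaviour described above.
        if rng.random() < min(1.0, fy / fx):
            x, fx = y, fy
        image[min(int(x * buckets), buckets - 1)] += 1.0
    # The histogram only records *where* the energy goes; the absolute level comes
    # from the separately estimated average brightness, so intermediate results can
    # drift in overall brightness until that estimate settles (the bias bullet).
    b = brightness_sum / max(large_steps, 1)
    return [count * b * buckets / n_samples for count in image]

if __name__ == "__main__":
    print(mlt_sketch(20000))

Playing with large_step_prob and mutation_size on this toy already shows the tuning problem: a small mutation size resolves the spike nicely but explores the rest of the domain slowly, while a large one does the opposite, and the best trade-off depends on the "scene".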
Best regards,
Vlado
Well, you are the man, Vlado - this is very eloquently written. Elegant but not practical: that sums it all up.
These points seem beyond argument (parts of this may also be philosophical), but one question:
Is it (in your opinion) just an elegant singularity, a curious dead-end road? Or can it be developed further? For example, could MLT path mutations be done more efficiently (for instance, along the lines of what you mentioned about an algorithm knowing things in advance)? Or with importance sampling? Or does the inherent logic of MLT give it essentially fatal limitations?
Just curious - V-Ray seems to be developing PPT, and a lot of freeware renderers have followed suit by introducing path-tracing options, but a lot of the bigger players haven't developed it in a serious way (mental ray has a path-tracing option). Obviously it can't be used in production today, but does it have a future?
-
That's Vlado's workstation.
Dmitry Vinnik
Silhouette Images Inc.
ShowReel:
https://www.youtube.com/watch?v=qxSJlvSwAhA
https://www.linkedin.com/in/dmitry-v...-identity-name
-
Heh, back when the Matrix was new, I had 3 computers and a laptop all running that screensaver.
The really cool thing was that one of them was a Linux machine. With a little bit of help from a Linux guru friend, I figured out how to use a screensaver as a background... so I had the streaming matrix running as a desktop background. That really tripped people out.
My desktop has been looking a bit like this recently.
-
Dmitry Vinnik
Silhouette Images Inc.
ShowReel:
https://www.youtube.com/watch?v=qxSJlvSwAhA
https://www.linkedin.com/in/dmitry-v...-identity-name
-
Originally posted by stochastic
Is it (in your opinion) just an elegant singularity, a curious dead-end road? Or can it be developed further? For example, could MLT path mutations be done more efficiently (for instance, along the lines of what you mentioned about an algorithm knowing things in advance)? Or with importance sampling? Or does the inherent logic of MLT give it essentially fatal limitations?
Just curious - V-Ray seems to be developing PPT, and a lot of freeware renderers have followed suit by introducing path-tracing options, but a lot of the bigger players haven't developed it in a serious way (mental ray has a path-tracing option). Obviously it can't be used in production today, but does it have a future?
Best regards,
Vlado

I only act like I know everything, Rogers.
-
yes! dual dual dual dual quadral octat core
Dmitry Vinnik
Silhouette Images Inc.
ShowReel:
https://www.youtube.com/watch?v=qxSJlvSwAhA
https://www.linkedin.com/in/dmitry-v...-identity-name
-
Originally posted by cpnichols
I saw a thread on cgarchitect where someone tried to use 40 nodes to do a still... shoot me if I ever need 40 nodes to do a rendering of 1 still.
-
Well... I did a render not long ago which took 10 hours on 22 CPUs - which works out to roughly 110 hours on one dual.
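Just as a hedged back-of-the-envelope check of those numbers, assuming perfectly linear scaling across CPUs (an idealization real farms never quite reach):
Code:
# Rough CPU-hours arithmetic for the render mentioned above, assuming
# perfectly linear scaling (real farms scale somewhat worse than this).
hours_on_farm, farm_cpus, dual_cpus = 10, 22, 2
total_cpu_hours = hours_on_farm * farm_cpus   # 220 CPU-hours of total work
print(total_cpu_hours / dual_cpus)            # -> 110.0 hours on a single dual-CPU box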
Dmitry Vinnik
Silhouette Images Inc.
ShowReel:
https://www.youtube.com/watch?v=qxSJlvSwAhA
https://www.linkedin.com/in/dmitry-v...-identity-name
-
Originally posted by Sawyer
Originally posted by cpnichols
I saw a thread on cgarchitect where someone tried to use 40 nodes to do a still... shoot me if I ever need 40 nodes to do a rendering of 1 still.
Sounds like 520 hours, or a little short of 22 days of CPU time.
And still had noise?
What was the resolution of the image?
The poor guy must have tried a lifesize, 300 dpi render of a house...
Or the renderer doesn't converge, period.
Which one?
Lele