Maxwell 1.0

  • #31
    On Monday, I couldn't see at all.
    Dmitry Vinnik
    Silhouette Images Inc.
    ShowReel:
    https://www.youtube.com/watch?v=qxSJlvSwAhA
    https://www.linkedin.com/in/dmitry-v...-identity-name



    • #32
      Originally posted by vlado
      Originally posted by stochastic
      I hope they don't give Metropolis light transport / bidirectional path tracing a bad name with all the inflated promises and marketing hype that were never lived up to, because MLT is still the highest-quality way to go.
      I've actually come to the conclusion that MLT is most definitely *not* the way to go. Sure, it can handle some specific "difficult" situations better than standard QMC sampling, but it performs worse in the majority of other situations, which are not that "difficult".

      If you think about it logically, for any given algorithm you can always find another algorithm that knows more about the specific situation and gives better precision in less time. For example, if an algorithm knows in advance that caustics are going to appear in a certain place and will be caused by a certain object, it can calculate them faster than an algorithm that doesn't know this and must find them by trial and error. The absolute best algorithm is the one that already knows the complete final result before calculating anything - and of course, if that were the case, we wouldn't need to calculate anything at all.

      It follows, then, that MLT is certainly not the best algorithm out there. Other reasons why Metropolis sampling is not a very good algorithm:

      (*) It cannot use standard QMC sampling (this has been proven theoretically); it needs pure random numbers. This means that for relatively easy situations (e.g. exterior daylight scenes), normal QMC sampling will be (a lot) better than MLT - less noise for the same calculation time. (See the first sketch after this list.)

      (*) MLT is a local adaptive algorithm - meaning that when it finds a difficult place, it "stays" there for a while, trying to resolve the difficulty. Then it moves on, forgetting all about it, until it happens to come across the situation again. This requires very little memory, but it will obviously be worse than a "global" adaptive algorithm, which doesn't forget that easily. Furthermore, MLT may get stuck in a "difficult" place, ignoring other parts of the result which are potentially more important.

      (*) The MLT algorithm has some parameters for which it is very difficult to find "good" values. These basically control how much time it spends in a difficult place, and by what step the algorithm moves around through the sample space. For some scenes one set of parameters works well, while for other scenes it gives a worse result. Finding the values which, for a given image, produce the best result in the shortest time requires some test renders of the particular scene - which is what we are probably trying to avoid in the first place. (The toy sampler below illustrates this.)

      (*) This is probably the least important point, but strictly speaking, MLT is not exactly an unbiased algorithm; it is unbiased only in the limit, after you have taken many, many samples. Therefore the intermediate solutions may be somewhat different from the final result. This typically manifests as the image changing its brightness as the calculation progresses.

      (*) Last of all, MLT does not fit into "standard" rendering architectures - which is probably one of the reasons you don't see it used much. For most renderers, implementing MLT would require a substantial redesign (possibly with an axe) and would be at odds with the existing image rendering methods. More often than not, it is simply not worth it.
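
      A minimal illustrative sketch of the first point, in Python (not code from V-Ray, Maxwell, or any real renderer): on a smooth, "easy" integrand, a low-discrepancy (QMC) sequence such as van der Corput converges much faster than the pure random numbers that Metropolis sampling is restricted to.

          # Sketch: pure-random Monte Carlo vs. a QMC (low-discrepancy)
          # sequence on a smooth, "easy" integrand.
          import random

          def f(x):
              return x * x  # exact integral over [0, 1] is 1/3

          def van_der_corput(i, base=2):
              # Radical inverse of i: a standard 1-D low-discrepancy sequence.
              v, denom = 0.0, 1.0
              while i > 0:
                  denom *= base
                  v += (i % base) / denom
                  i //= base
              return v

          N = 4096
          mc = sum(f(random.random()) for _ in range(N)) / N
          qmc = sum(f(van_der_corput(i)) for i in range(1, N + 1)) / N
          exact = 1.0 / 3.0
          print("random MC error:", abs(mc - exact))   # on the order of 1e-3
          print("QMC error:      ", abs(qmc - exact))  # typically far smaller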

      About the only positive thing about MLT is that it is indeed a very elegant algorithm. Unfortunately, it doesn't mean that it is a practical one. As mentioned above, there are other adaptive algorithms that may be expected to perform better on average.
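
      The local-exploration, parameter-tuning and intermediate-bias points can all be seen in a toy 1-D Metropolis sampler - again an illustrative sketch with made-up names; a real MLT implementation mutates complete light paths, not scalars.

          # Toy Metropolis sampler over [0, 1): the histogram it builds is
          # proportional to f only in the limit of many samples.
          import random

          def f(x):
              # Contrived "image": mostly dim, one narrow caustic-like spike.
              return 0.1 + (10.0 if 0.70 < x < 0.72 else 0.0)

          def metropolis_histogram(n_samples, mutation_size, n_bins=10):
              hist = [0] * n_bins
              x = random.random()
              fx = f(x)
              for _ in range(n_samples):
                  # Mutate the current sample by a small local step (wrapped).
                  y = (x + random.uniform(-mutation_size, mutation_size)) % 1.0
                  fy = f(y)
                  # Standard Metropolis acceptance for a symmetric proposal.
                  if random.random() < min(1.0, fy / fx):
                      x, fx = y, fy
                  hist[min(int(x * n_bins), n_bins - 1)] += 1
              return hist

          # Early histograms are dominated by wherever the chain started (the
          # "changing brightness" effect), and mutation_size is exactly the
          # scene-dependent knob that is hard to set: too small and the chain
          # camps in the spike, too large and it degenerates into plain
          # random sampling.
          print(metropolis_histogram(100000, mutation_size=0.05))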

      Best regards,
      Vlado


      Well, you are the man, Vlado; this is very eloquently written. Elegant but not practical - that sums it all up.

      These points seem beyond argument (it might also be philosophical), but one question:

      Is it (in your opinion) just an elegant singularity, a curious dead-end road? Or can it be developed? For example, could MLT path mutations be done more efficiently (for example, what you mentioned about an algorithm knowing things in advance)? Or importance sampling? Or does the inherent logic of MLT essentially give it fatal limitations?

      Just curious - V-Ray seems to be developing PPT, and a lot of freeware renderers have followed suit by introducing path tracing options, but a lot of the bigger players haven't developed it in a serious way (mental ray has a path tracing option). Obviously it can't be used in production today, but does it have a future?



      • #33
        That's Vlado's workstation.

        Dmitry Vinnik
        Silhouette Images Inc.
        ShowReel:
        https://www.youtube.com/watch?v=qxSJlvSwAhA
        https://www.linkedin.com/in/dmitry-v...-identity-name



        • #34
          Heh, back when The Matrix was new, I had three computers and a laptop all running that screensaver.

          The really cool thing was that one of them was a Linux machine. With a little help from a Linux guru friend, I figured out how to use a screensaver as a background... so I had the streaming Matrix running as a desktop background. That really tripped people out.


          My desktop has been looking a bit like this recently.



          • #35
            Dmitry Vinnik
            Silhouette Images Inc.
            ShowReel:
            https://www.youtube.com/watch?v=qxSJlvSwAhA
            https://www.linkedin.com/in/dmitry-v...-identity-name



            • #36
              Ha, did it just get nerdier in here?



              • #37
                Originally posted by stochastic
                Is it (in your opinion) just an elegant singularity, a curious dead-end road? Or can it be developed? For example, could MLT path mutations be done more efficiently (for example, what you mentioned about an algorithm knowing things in advance)? Or importance sampling? Or does the inherent logic of MLT essentially give it fatal limitations?
                No, I don't think it's a dead end. Although little can be done to the MLT algorithm itself, research on similar algorithms (e.g. energy redistribution path tracing) is continuing. Nevertheless, I still think that a global adaptive algorithm would in general perform better and would be a lot simpler to implement and support. (See the sketch below.)
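
                As an illustration of the "global adaptive" idea, here is a minimal sketch with invented names (not V-Ray's actual sampler): keep per-pixel error statistics for the whole image and always spend the next sample wherever the estimated error is currently largest, so a difficult pixel is never forgotten.

                    # Sketch: globally adaptive sampling driven by per-pixel
                    # error estimates over the whole image.
                    import random

                    def sample_pixel(i):
                        # Stand-in for tracing one ray; pixel 7 is "difficult".
                        return 1.0 + random.gauss(0.0, 5.0 if i == 7 else 0.5)

                    n_pixels = 16
                    sums = [0.0] * n_pixels
                    sums2 = [0.0] * n_pixels
                    counts = [0] * n_pixels

                    def add_sample(i):
                        v = sample_pixel(i)
                        sums[i] += v
                        sums2[i] += v * v
                        counts[i] += 1

                    def est_error(i):
                        # Standard error of the mean for pixel i.
                        n = counts[i]
                        mean = sums[i] / n
                        var = max(sums2[i] / n - mean * mean, 0.0)
                        return (var / n) ** 0.5

                    for i in range(n_pixels):   # seed so errors are defined
                        for _ in range(4):
                            add_sample(i)

                    for _ in range(2000):       # always refine the worst pixel
                        add_sample(max(range(n_pixels), key=est_error))

                    print(counts)  # the difficult pixel gets far more samples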

                Just curious - V-Ray seems to be developing PPT, and a lot of freeware renderers have followed suit by introducing path tracing options, but a lot of the bigger players haven't developed it in a serious way (mental ray has a path tracing option). Obviously it can't be used in production today, but does it have a future?
                I've answered this before - it is the future. It's obvious that direct calculation methods give results of the highest detail and quality; the only issue is to make them practical. When V-Ray started its development six years ago, it would have been unthinkable to rely on PPT, because we would not have been able to compete with other renderers. We had to rely on biased methods, since they were the only practical ones. Things have changed since then.

                Best regards,
                Vlado
                I only act like I know everything, Rogers.



                • #38
                  Thanks, Vlado.



                  • #39
                    Yes! Dual dual dual dual quad octa core
                    Dmitry Vinnik
                    Silhouette Images Inc.
                    ShowReel:
                    https://www.youtube.com/watch?v=qxSJlvSwAhA
                    https://www.linkedin.com/in/dmitry-v...-identity-name



                    • #40
                      Yeah. And several small nuclear plants nearby.
                      I just can't seem to trust myself
                      So what chance does that leave, for anyone else?
                      ---------------------------------------------------------
                      CG Artist



                      • #41
                        I saw a thread on cgarchitect where someone tried to use 40 nodes to do a still... shoot me if I ever need 40 nodes to do a rendering of one still.



                        • #42
                          Originally posted by cpnichols
                          I saw a thread on cgarchitect where someone tried to use 40 nodes to do a still... shoot me if I ever need 40 nodes to do a rendering of one still.
                          Yeah, and it still had noise. 40 computers overnight (13 hours each, I think). This wasn't Gollum; it was a simple room.



                          • #43


                            Well... I did a render not long ago which took 10 hours on 22 CPUs. That's 220 CPU-hours - about 110 hours on one dual.
                            Dmitry Vinnik
                            Silhouette Images Inc.
                            ShowReel:
                            https://www.youtube.com/watch?v=qxSJlvSwAhA
                            https://www.linkedin.com/in/dmitry-v...-identity-name



                            • #44
                              Originally posted by Sawyer
                              Originally posted by cpnichols
                              I saw a thread on cgarchitect where someone tried to use 40 nodes to do a still... shoot me if I ever need 40 nodes to do a rendering of one still.
                              Yeah, and it still had noise. 40 computers overnight (13 hours each, I think). This wasn't Gollum; it was a simple room.
                              That makes 13 hours x 40 CPUs (God forbid they were dual-core/dual-proc).
                              Sounds like 520 hours, or a little short of 22 days of CPU time.
                              And it still had noise?
                              What was the resolution of the image?
                              The poor guy must have tried a life-size, 300 dpi render of a house...
                              Or the renderer doesn't converge, period.

                              Which one?

                              Lele



                              • #45
                                Originally posted by studioDIM
                                Originally posted by Sawyer
                                Originally posted by cpnichols
                                I saw a thread on cgarchitect where someone tried to use 40 nodes to do a still... shoot me if I ever need 40 nodes to do a rendering of one still.
                                Yeah, and it still had noise. 40 computers overnight (13 hours each, I think). This wasn't Gollum; it was a simple room.
                                That makes 13 hours x 40 CPUs (God forbid they were dual-core/dual-proc).
                                Sounds like 520 hours, or a little short of 22 days of CPU time.
                                And it still had noise?
                                What was the resolution of the image?
                                The poor guy must have tried a life-size, 300 dpi render of a house...
                                Or the renderer doesn't converge, period.

                                Which one?

                                Lele
                                Let's just hope this guy's not trying to make a living from it...
