Has anyone tested Arnold vs V-Ray?


  • #31
    Documentation is one part of it, but I still think the setup could be simplified.

    Best regards,
    Vlado
    I only act like I know everything, Rogers.



    • #32
True. I've written a tonne of documentation for in-house tools and processes, and fairly often people just won't bother reading it. More intuitive, easier controls reduce the need for docs in that respect.



      • #33
Originally posted by tylerART
I have a bit of experience using Arnold and I wanted to put in my 2 cents on some of the previous comments in this thread. Arnold is not an adaptive renderer, or so says their dev team. Every pixel is sampled at the same rate, and that rate is squared: 2 samples = 4, 4 = 16, 8 = 64, etc.

I'm a huge fan of V-Ray, but in all my tests, Arnold's MC GI is clearly MUCH faster than V-Ray's. I've done some side-by-side comparisons in the past that I can dig up if anyone is interested. Arnold doesn't have any caching options for GI, so in a side-by-side test V-Ray often comes out on top speed-wise when using irradiance mapping, but this isn't a fair comparison: in V-Ray you have to fix flicker, and cached GI isn't accurate, as you often have to smooth it. This takes time to do, and I would take the slightly higher render times over having to do this step. The problem is, MC GI is much slower in V-Ray even when you add in the time spent working with the cached GI.



        Vlado, if you can really optimize brute force MC rendering, I would LOVE to see this in a future release. I'm so tired of cached GI options, and I think this is the one leg up Arnold has over VRay.
Just a pointer/question:
When you say you did a side-by-side comparison, do you mean you set V-Ray to trace without adaptivity?
By skipping the sample rejection scheme entirely, you can gain a humongous amount of speed in V-Ray too.
It's just very risky, as it can get out of hand very quickly if the user isn't extremely careful, hence noise thresholding and early termination being the default approach.
But I have used the concept (not the exact setup) in production with very measurable results.
And as far as I know (which is not to say it's the reality of things), it's not a method many people would venture using with V-Ray.
Incidentally, the idea was sparked exactly after studying Arnold a bit, and asking Vlado a couple of questions about some of the perceived (!) differences in performance in a couple of areas.

Attached are two images with a purely diffuse material (18% grey, to be precise), sun, sky, default exposure, 2.2 gamma sampling, and 3 bounces of brute-force GI.
default has every render setting at startup defaults (in Max), except that anti-aliasing filtering is turned off (as we're comparing noise); the GI samples are 128, and the AA mode is aDMC 1-4.
It traces 193M rays, split into 469K camera rays, 4M shadow rays (the sun has 8 subdivisions), and 191M GI rays. The region rendering takes 85.9 seconds.
no-thresholding, by contrast, has 0.0 adaptive amount, 0.0 noise threshold, and 16 GI subdivs. AA is set to fixed 4 (aDMC has more thresholding still).
It traces 169M rays, split into 491K camera rays, 4.7M shadow rays, and 149M GI rays.
The region rendering takes 76.8 seconds, as expected given that fewer rays are traced.

There's an 11% gain right there, for pretty much the same visual appearance.
As the geometric complexity (nooks and crannies, leading to multiple bounces to trace) and shading complexity (throw in any glossy effect, DoF, motion blur, the lot) rise, this pattern grows, making the gains even more relevant.
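For reference, the gain quoted above can be reproduced directly from the figures in this post (a quick arithmetic sketch, nothing more):

```python
# Figures quoted above: times in seconds, total rays traced.
default_time, nothresh_time = 85.9, 76.8
default_rays, nothresh_rays = 193e6, 169e6

time_gain = 1 - nothresh_time / default_time   # fraction of render time saved
ray_gain = 1 - nothresh_rays / default_rays    # fraction of rays saved

print(f"time gain: {time_gain:.1%}")  # ~10.6%, the "11% gain" above
print(f"ray gain:  {ray_gain:.1%}")   # ~12.4%
```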
        Of course, it's by no means child's play to convert old scenes just by changing render settings.
A glossy reflection with 64 subdivs and the default 5 bounces would trace all the required rays for the fifth bounce with no adaptivity, the pixel getting a quintillion (1,152,921,504,606,846,976) rays.
I'd rather not try that, and leave it to the thresholding to stop when it's right to do so... ^^
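The quintillion figure follows straight from the squared-subdivs rule: with no adaptivity, each 64-subdiv glossy bounce spawns 64² = 4,096 rays, and five bounces multiply together (a sketch of the arithmetic, not of actual V-Ray behaviour):

```python
# Without adaptivity, a glossy bounce with n subdivs traces n*n rays,
# and each of those rays spawns the same number again at the next bounce.
subdivs = 64
bounces = 5
rays_per_bounce = subdivs ** 2        # 4096 rays per shaded point
total = rays_per_bounce ** bounces    # rays needed to finish the fifth bounce

print(total)  # 1152921504606846976 -- the quintillion quoted above (2**60)
```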

I attached the straight-up difference of (default) - (no-thresholding), and the same difference gained up 100 times.
The zip file contains the 16-bit EXRs for those wanting to play with them.
There is also a fairly noticeable difference in noise distribution, which I suspect is largely due to the different AA methods, although some may also come from the adaptive tracing of the effects.

I do not have any production experience with Arnold, although I heard about it from a few colleagues, so I'd rather not talk about it, nor get into a benchmark war, far from it: I am just very curious about different approaches, as they often come to my aid in tight situations at work, regardless of the engine I have to use.
It seemed to me that this type of setup for V-Ray resembles what I've heard of Arnold's workings closely enough.
I'm all ears for corrections, from either Vlado (is adaptivity really off with the settings I used?) or anyone with direct knowledge of the Arnold engine.
        Attached Files
        Lele
        Trouble Stirrer in RnD @ Chaos
        ----------------------
        emanuele.lecchi@chaos.com

        Disclaimer:
        The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



        • #34
          Hey guys,

So I finally had the time to do some tests. See the images below for a few renders. I also confirmed my settings with one other very smart individual from this forum who also knows Arnold very well. It appears in my tests that Arnold is not faster than V-Ray in brute-force sampling. My tests are purely for research purposes; I do not wish to state that one renderer is better than the other.

The test below is lit by a single dome light using an HDR. Important factors come into play: both Arnold and V-Ray are configured as closely as possible settings-wise. GI depth is 10 in both (for V-Ray both bounces are brute force), and reflection/refraction depth is set to 5.

Both Arnold renders are about 5 minutes, while the V-Ray renders are almost the same: 4 min with motion blur, and 3 min without.

While the non-motion-blurred render from Arnold is close in both time and quality to V-Ray's, the motion-blurred one is quite a different story. I think this is a current limitation of Arnold, for motion blur as well as glossy surfaces: no matter how high the sampling was set, there was no way to resolve the noise in either.








Now the second test is a pure brute-force test. There are two invisible walls added to the cube, with one window opening, creating an extremely difficult case for importance sampling with brute force without the use of portals. I had to raise samples for both V-Ray and Arnold until I got a cleaner result: Arnold in 43 minutes, V-Ray in 33 minutes.





Now to the issue of Arnold not being adaptive. I've actually inquired about that. Arnold is adaptive, and uses adaptive DMC the same as V-Ray; however, in my understanding the adaptive properties are not exposed to the user. What is drastically different is how you control the sampling. You have a general sample value called AA, and everything else is multiplied against it. So let's say you have a render that's grainy and you wish to raise the samples just a little bit: you cannot. All you can do is multiply by the integer values in the secondary multipliers.
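The coarseness of that control is easy to see on paper. A small sketch, assuming the commonly described Arnold-style scheme where effective rays per pixel go as the square of (AA × per-effect multiplier) — my reading of it, not authoritative:

```python
# Sketch of squared sample counts as described in this thread (my reading,
# not official Arnold behaviour): effective rays = (AA * multiplier) ** 2.
def effective_samples(aa, mult=1):
    """Rays per pixel for one effect, both factors being small integers."""
    return (aa * mult) ** 2

for aa in (2, 4, 8):
    print(aa, "->", effective_samples(aa))  # 2 -> 4, 4 -> 16, 8 -> 64

# Why you can't raise samples "a little bit": the next step after AA=4
# (16 rays) is AA=5 (25 rays), already a 56% jump.
print(effective_samples(5))  # 25
```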

As a conclusion to my quick research, Arnold is really similar to V-Ray but uses different logic to apply sampling, which may not be as flexible as V-Ray's. That said, certain areas of Arnold, like motion blur or glossy reflections, do not have custom sampling and therefore cannot be resolved efficiently. Also notice that V-Ray has nice refractive caustics in the scene, something Arnold cannot do from a dome light (even though refractive caustics were enabled).
          Attached Files
          Dmitry Vinnik
          Silhouette Images Inc.
          ShowReel:
          https://www.youtube.com/watch?v=qxSJlvSwAhA
          https://www.linkedin.com/in/dmitry-v...-identity-name



          • #35
Importance sampling was always something that marcoss put on the long finger; he always wanted to improve the base mathematics first. In some ways, while Sony has been great at bringing Arnold along to a really high production level, maybe for smaller operations having a feature set driven by a company with massive computing resources can skew things a tiny bit?



            • #36
Yeah, "tiny". That is exactly the logic, in my opinion. I actually use similar logic in my renders too. I recently developed a script which basically converges the settings for all V-Ray attributes to their optimal values and just fires off to the farm. But with V-Ray you can open the hood and do some tuning... with Arnold it seems not as simple.
              Dmitry Vinnik
              Silhouette Images Inc.
              ShowReel:
              https://www.youtube.com/watch?v=qxSJlvSwAhA
              https://www.linkedin.com/in/dmitry-v...-identity-name



              • #37
I'd imagine that unless there's some kind of statistics available, you'd still have to look at some test renders to isolate the noise and then make choices based on that - there's no intelligent method that works for everything, just good first guesses and tweaks afterwards?



                • #38
You are correct. For every project and every type of CG there are different settings. For what we do, we use mostly brute force on everything: I try to establish the most optimal settings and then go under the hood to tweak the ones which create the most noise. Though lately it's been quite good for that - just throw a ton of samples at the problem and it goes away.
                  Dmitry Vinnik
                  Silhouette Images Inc.
                  ShowReel:
                  https://www.youtube.com/watch?v=qxSJlvSwAhA
                  https://www.linkedin.com/in/dmitry-v...-identity-name



                  • #39
Figuring out if Arnold really is adaptive shouldn't be too big of a deal.
Do as you would in V-Ray, and see whether noise grows or not under those conditions.
I think we experienced enough of those situations together in the five months of FD5...
                    Lele
                    Trouble Stirrer in RnD @ Chaos
                    ----------------------
                    emanuele.lecchi@chaos.com

                    Disclaimer:
                    The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



                    • #40
Here's the scene I did a while ago, where I matched Arnold and V-Ray the best I could in terms of time vs noise. V-Ray for Maya and Arnold scenes.
                      http://rghost.ru/45178797
                      Attached Files
                      I just can't seem to trust myself
                      So what chance does that leave, for anyone else?
                      ---------------------------------------------------------
                      CG Artist



                      • #41
What might be worth doing is sending three different sampling approaches to the farm and picking whichever comes out best - one favouring low AA / high samples on everything else, another high AA and low samples, and a final one with medium AA (say 1/12) and medium samples?



                        • #42
Oh, I was thinking of something a lot simpler, John.
Ignore the render settings completely.
Adaptivity becomes prominent, and often undesired, at extremely low lighting levels (within, or around, the noise threshold, to be precise).
So rendering an image with an arbitrary setting, lit to produce colours around 0.5 float, and then the same image lit to produce colours in or around the 0.01 float range (or lower, although it wouldn't really make much sense for it to be too low in Arnold, or other renderers), SHOULD lead to a faster, grainier result.
If it takes the same amount of time, and it's just as noisy when normalised to the previous image, then you have no adaptivity to speak of.
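To make that probe concrete, here's a toy Monte Carlo version of it (pure illustration, not renderer code; the threshold and sample bounds are made-up values): with a fixed absolute noise threshold, a dim pixel terminates after far fewer samples than a bright one, which is exactly the time/noise signature the test looks for.

```python
import math
import random

def adaptive_estimate(mean, noise_threshold=0.005, min_samples=8,
                      max_samples=100_000):
    """Average noisy samples until the standard error drops below an
    absolute noise threshold -- a toy version of early termination."""
    random.seed(1)  # deterministic for the sake of the demo
    total = total_sq = 0.0
    n = 0
    while n < max_samples:
        x = random.uniform(0.0, 2.0 * mean)  # unbiased, noisy "radiance" sample
        total += x
        total_sq += x * x
        n += 1
        if n >= min_samples:
            var = max(total_sq / n - (total / n) ** 2, 0.0)
            if math.sqrt(var / n) < noise_threshold:  # standard error test
                break
    return total / n, n

bright, n_bright = adaptive_estimate(0.5)   # pixel lit around 0.5 float
dim, n_dim = adaptive_estimate(0.01)        # pixel lit around 0.01 float

# The dim pixel bails out almost immediately: faster, but far noisier
# relative to its own brightness -- the adaptivity signature.
print(n_bright, n_dim)
```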
                          Lele
                          Trouble Stirrer in RnD @ Chaos
                          ----------------------
                          emanuele.lecchi@chaos.com

                          Disclaimer:
                          The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



                          • #43
Sorry Lele, I should have quoted Dmitry there; it was more about a scripted approach to getting good settings. Must resume our email chats on quality and theories though!

