FStorm render


  • #91
    @Ivan - I would suggest testing the other GPU ray tracers and using the one that suits you best. If something doesn't work, we usually try to fix it, and every piece of feedback is precious.
    Just so as not to leave people with the wrong impression about RT GPU, keep in mind that some (or all) of the others also don't have directionality, but also things like triplanar mapping, car paint, rounded edges, light select with GI, cached GI, hair, particles, unlimited AOVs (you have to render multiple times), real-time active shade without needing to restart the engine, non-pre-baked SSS (which uses a ton of memory), layered material, physical camera, physically correct materials like GGX (not a faked and wrong one), any shader language support, DR, a denoiser, OpenCL (which works great on the new RX 480); some even don't have motion blur, and many more. And RT GPU does, while at the same time being on par or, more often, a lot faster. I could go on with the list, but I am not sure there is a point in doing that.
    We have a lot of work to do, for sure. We added procedural bump map support in 3.4, but I agree we need better bump in general, a matte material, fluids, and we will get there. RT GPU is very stable, works very well for many, and there are a lot of production videos and stills made with it. I also agree that the most frustrating thing is that we share the UI with the V-Ray Adv version. This is why we have the * in the docs. Changing this will take more time. Usually it is not that something doesn't work in RT GPU but works everywhere else; it is that it doesn't work in RT GPU and it works in V-Ray Adv. Making a V-Ray-level raytracer is not the same thing as making a raytracer.
    I would not agree with the excuses part. Comparing 5 bounces versus 100 is not an excuse; it is like measuring a plane's speed by how long it takes to get from NY to Chicago versus from NY to LA. It is a different thing.
    @Donfarese - The GTX 1080 is fully supported in V-Ray 3.4 and even in the 7-month-old V-Ray 3.3. It is a bit faster than the Titan X, and that will be the case for most scenes with the other raytracers as well. Keep in mind that it is the successor to the 980, not to the Titan X, so it is a great result. CUDA 8 will change nothing specifically for the GTX 1080, since the GP104 is very close to the Maxwell architecture, and whatever CUDA 8 changes will affect the Maxwell GPUs as well.

    Best,
    Blago.
    V-Ray fan.
    Looking busy around GPUs ...
    RTX ON



    • #92
      Originally posted by Donfarese View Post
      Here is the scene in 2015, 2016, 2017.
      Here is my test. I am comparing V-Ray CPU (regular V-Ray renderer, not V-Ray RT) on a super old i7-4771 @ 3.50 GHz against Redshift on a single GTX 980. V-Ray rendered in 10m 20s, whereas Redshift in 11m 45s. V-Ray is more or less default settings, BF+LC, bucket sampler, noise threshold 0.04; Redshift is BF+IC, 15 bounces, 64 GI rays, 4/4096 AA samples with adaptive error 0.01, max secondary ray intensity at 20.0.
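To make the comparison above easier to scan, here is the quoted test summarized as plain data. The keys are descriptive labels invented for this summary, not actual V-Ray or Redshift parameter names:

```python
# Settings and times quoted in the post above, collected for comparison.
# Key names are descriptive labels only, not real renderer parameters.
vray_cpu = {
    "engine": "BF+LC",              # brute force primary, light cache secondary
    "sampler": "bucket",
    "noise_threshold": 0.04,
    "render_time_s": 10 * 60 + 20,  # 10m 20s on an i7-4771 @ 3.50 GHz
}
redshift_gpu = {
    "engine": "BF+IC",              # brute force primary, irradiance cache secondary
    "gi_bounces": 15,
    "gi_rays": 64,
    "aa_samples": (4, 4096),        # min/max AA samples
    "adaptive_error": 0.01,
    "max_secondary_ray_intensity": 20.0,
    "render_time_s": 11 * 60 + 45,  # 11m 45s on a single GTX 980
}

# In this particular test, V-Ray CPU finished faster by:
delta = redshift_gpu["render_time_s"] - vray_cpu["render_time_s"]
print(delta)  # 85 seconds
```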

      There are several notes to be made:
      *) The noise in the Redshift image is very uneven - some areas are very clean, others are quite noisy. To clean up the noisy areas, you'll need to fiddle with the settings, leading to potentially higher render times;
      *) Redshift has artifacts on the curtains;
      *) The GTX 980 is not the fastest card out there, but neither is the CPU; there are CPU configurations that are easily 4-5 times faster. Of course, you can certainly stuff more and faster GPUs into the machine too.

      I've tested Redshift on full-blown interior scenes too with similar results. Overall, it seems to me that it has gotten slower in this latest release. I've been following the development for quite some time and it seemed much faster a while back. Don't get me wrong, Redshift is a good renderer and if it works for you, then go for it. But V-Ray is not too shabby either, especially considering that it will get even faster in the next builds.

      Best regards,
      Vlado
      Attached Files
      Last edited by vlado; 20-07-2016, 01:18 PM.
      I only act like I know everything, Rogers.



      • #93
        Vlado, those are not really good settings for Redshift; that's why the times are horrible. You shouldn't be using 4/4096 AA; you should be upping the BF num rays to 2000, setting the IC screen radius to 4, and AA should be 4/64 with 0.01 error. Also don't forget to add 3000 to the ray reserve memory. I don't know why you changed my settings to get a much worse time and image. The changes you made actually add less quality to the render; going from 5 to 15 bounces you wouldn't even notice the difference. You saw the time I got at 3 min and it was really clean.

        Your major issue is way too high AA settings and a way too low GI rays setting, the worst settings you can use for Redshift.
        Last edited by Donfarese; 20-07-2016, 01:27 PM.
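For anyone trying to reproduce this, the changes suggested above can be collected in one place as (old, new) pairs. These are descriptive labels based on the post, not Redshift's actual parameter names, and the "old" values are the ones quoted from the earlier test:

```python
# Suggested Redshift changes from the post above as (old, new) pairs.
# Labels are descriptive only; None means the old value wasn't quoted.
suggested_changes = {
    "bf_num_gi_rays":     (64, 2000),           # raise brute-force GI rays a lot
    "ic_screen_radius":   (None, 4),            # set irradiance cache screen radius
    "aa_samples":         ((4, 4096), (4, 64)), # lower the max AA samples
    "adaptive_error":     (0.01, 0.01),         # unchanged
    "ray_reserve_mem_mb": (None, 3000),         # add 3000 to ray reserve memory
    "gi_bounces":         (15, 5),              # 5 vs 15 bounces: barely visible here
}

for name, (old, new) in suggested_changes.items():
    print(f"{name}: {old} -> {new}")
```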



        • #94
          Ok; rerendering now...

          Best regards,
          Vlado
          I only act like I know everything, Rogers.



          • #95
            Originally posted by vlado View Post
            Ok; rerendering now...

            Best regards,
            Vlado
            I'm not in front of Redshift right now, so I'm just throwing out basically how it should be set up. I would assume you will see the render come in at around 4-5 min and cleaner on a 980. From there we can probably tweak it a bit to get it under 4 min.



            • #96
              Here are the renders with your suggested settings; I also played with the V-Ray settings a bit (min shading rate at 32 and LC retrace at 1.0). For Redshift this brought the render time down to 4m 50s. For V-Ray, it brought it down to 6m 10s. Just for fun, I also rendered the scene on one of our old dual Xeon machines, which came in at about 2m 25s. I then lowered the noise threshold to 0.02 so that the render time was about the same as Redshift, at 4m 35s.

              As you said, the V-Ray settings could be further tweaked slightly for even lower render times.

              Best regards,
              Vlado
              Attached Files
              I only act like I know everything, Rogers.



              • #97
                Here is a slightly tweaked render (256 min shading rate and 8 AA subdivs to better match your Redshift settings) on the Xeon which came in at 4m 56s, compared to Redshift on the GTX 980 at 4m 50s.
                Best regards,
                Vlado
                Attached Files
                Last edited by vlado; 20-07-2016, 02:34 PM.
                I only act like I know everything, Rogers.



                • #98
                  Hi Savage,

                  I get what you are saying; this is not "blackmail", as in "If you do not implement this...".
                  I am not leaving anyone with the wrong impression, and I am NOT saying everyone should change engines just like that. I am using V-Ray (and I will probably continue using it for now); as a customer I am saying what I am missing, the same way I have found other people here on the forum expressing their opinions.
                  You guys are communicating with users, comparing multiple render engines, trying to understand what "realistic" means; very commendable. You can talk about passes, bounces and who knows what; your knowledge of your engine surpasses mine, and that is something you do not need to prove!

                  I am just asking: why would you wait?
                  Why can't you focus on your product and deliver those crucial things instead of telling us how everything is great? You are right, there are other engines that do not have what GPU RT has; it is faster than some of the engines, and everything you said is true to some degree. But again, you are not FStorm; your company has been here 10+ years. Am I wrong to expect you not to fail? I am expecting V-Ray to evolve with current technology. I know this can't be instant; I just do not want to witness again how you wait for someone to rock your chair before making more fixes in one year than in the 5 years before that.
                  That is not a fair deal for customers. A similar story happened with the CPU version: finally, when everything was ironed out, it became almost obsolete (how many years are left for the CPU??)

                  Even people that don't know anything about 3D are aware of V-Ray, MAX, Maya... currently there are only a few companies out there that have had such an impact on CGI history.
                  Take the example of how Autodesk is giving us BS and everyone sees it, yet it is hard to tell them that; if you asked someone at Autodesk what they think about why things are like that, they would say there is nothing wrong.
                  They have been the industry standard for some time now. You are not able to use MAX on its own (scary); it serves more like a container for all those plugins you need to make it work, but they have made MAX kind of necessary. That is a tricky situation for customers. It doesn't give them immunity from critique, though, but they use that opportunity to shaft us at every corner.

                  Those _____ engines that people compare to V-Ray maybe do not compare directly right now in some things, and in some they are surpassing V-Ray; but maybe if it weren't for them we would still have RT as an experimental option, who knows. So everything is intertwined, and we have accepted it.
                  The thing is that people have invested in V-Ray over the years (same as in MAX): model databases, materials, knowledge... Your company is already at an advantage knowing that, and I realize I am not the only one feeling a bit frustrated about certain developments in the industry.
                  Maybe that is why your answer is "I would suggest testing the other GPU ray tracers and use the one that suits you best", but it is not that simple.



                  • #99
                    Good job, Vlado; yeah, RT is probably around the same speed as well. With more tweaking you can get Redshift down quite a bit too. You can try something that I didn't think would work at first: set the AA of that scene to 64/64 and the adaptive error threshold to 0.0, then see what time you get. What Xeons are you running it on? I doubt I could get near that time with my OC 3930k. I've chatted with the guys at Redshift; they said they are working on core functions right now, so hopefully even more speed. I talked to them about the way V-Ray is set up; I do like having an easier method of setting up the scene, like RT's "Max Noise". They said they're working on some new auto method or something; I can't remember it all, it was late a few nights ago. Below is a quick fun test: an illuminated sofa with platinum everything. Rendered in Redshift at 3min 50sec.
                    Attached Files
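The "64/64 with adaptive error 0.0" suggestion above effectively disables adaptivity. A toy sketch of how min/max adaptive sampling typically works may make that clearer; this is a simplified illustration of the general technique, not either renderer's actual algorithm:

```python
import random

def sample_pixel(shade, min_samples, max_samples, error_threshold):
    """Toy min/max adaptive sampler: take at least min_samples, then keep
    sampling until the estimated error drops below the threshold or
    max_samples is reached. Simplified illustration only."""
    values, total = [], 0.0
    for n in range(1, max_samples + 1):
        v = shade()
        values.append(v)
        total += v
        if n >= min_samples:
            mean = total / n
            # crude error estimate: standard error of the mean
            var = sum((x - mean) ** 2 for x in values) / n
            error = (var / n) ** 0.5
            if error <= error_threshold:
                break
    return total / n, n

# With min == max (e.g. 64/64) or a 0.0 threshold, every pixel gets the
# full sample count: adaptivity is effectively switched off.
_, n = sample_pixel(random.random, 64, 64, 0.0)
print(n)  # 64
```

A smooth region (constant shading) converges immediately at min_samples, which is exactly why adaptive settings pay off on clean areas and spend samples only where the noise is.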



                    • If I were to walk into a BMW dealership, raving about the Mercedes, I would expect them to try and convince me why the BMW is superior. If I am resistant, they would probably tell me to go test drive and buy the one I like best. Software, I suppose, is pretty much the same, no?
                      Bobby Parker
                      www.bobby-parker.com
                      e-mail: info@bobby-parker.com
                      phone: 2188206812

                      My current hardware setup:
                      • Ryzen 9 5900x CPU
                      • 128gb Vengeance RGB Pro RAM
                      • NVIDIA GeForce RTX 4090
                      • ​Windows 11 Pro



                      • I personally have a hard time seeing anyone doing things in a superior fashion to Vlado and his team; they seem to be on top of everything in the industry. Some of the smartest people on the block, for sure.

                        Where I think it can sometimes differ from the competition is in their priorities, and in how things are implemented, I guess. I imagine that V-Ray has such a grip on the industry, and that there are so many specific requests from different studios, that a lot of time is spent tailoring to these needs. There seems to be a certain 'feature creep' where all sorts of options become available that I will never use, which is of course not to say that they aren't useful; but all the same, it seems that other companies that focus more on usability and clever implementation of the essentials are taking serious advantage of this.



                        • Originally posted by glorybound View Post
                          If I were to walk into a BMW dealership, raving about the Mercedes, I would expect them to try and convince me why the BMW is superior. If I am resistant, they would probably tell me to go test drive and buy the one I like best. Software, I suppose, is pretty much the same, no?
                          I guess for some, but the rest of us are just looking to better each one. Why not buy a BMW and a Mercedes?



                          • what is f stop render and why would anyone bother with it?



                            • It reminds me of my grandparents saying why do they need a smartphone when the one they have works just fine; now they are addicted to smartphones, reading newspapers etc. on them. Not to say V-Ray is the old phone, but it's just saying it's good to try new things. Before V-Ray was around, other renderers were going strong (I don't know which, because I started with V-Ray and some others at that time); I bet when it came, some were saying why use V-Ray when what they have already works, lol. Not 100 percent sure about this example, but there must have been those people; there always are. But all said, GPU rendering seems really interesting. One topic came up yesterday, just chatting with friends about how business and companies work: who knows, maybe one day Chaos Group gets bought out by one of these other guys and no more V-Ray? Vlado is off somewhere on some island roasting with a few hot ladies and leaves you guys still talking about this and that. Dip your toes! V-Ray might change to VStorm one day, hahaha; now I am just messing.
                              Last edited by mitviz; 20-07-2016, 08:31 PM.
                              Architectural and Product Visualization at MITVIZ
                              http://www.mitviz.com/
                              http://mitviz.blogspot.com/
                              http://www.flickr.com/photos/shawnmitford/

                              i7 5960@4 GHZm, 64 gigs Ram, Geforce gtx 970, Geforce RTX 2080 ti x2



                              • Originally posted by squintnic View Post
                                what is f stop render and why would anyone bother with it?
                                FStorm is a GPU renderer developed (at least in part) by some guys who used to work on Octane. You can find more information on its website here: http://www.fstormrender.com

                                Best regards,
                                Vlado
                                I only act like I know everything, Rogers.

