1.5 preview: DR max6!


  • #46
    Seeing that distributed irradiance map pass almost made me pee my pants.

    I'm still drooling.

    Best of luck with the bug fixes in this version. I am looking forward to 1.5 even more, now!



    shane.



    • #47
      Question,

      I'd assume that if one machine is rendering and another attempts to render as well, it wouldn't 'double up' on the slave? One at a time, eh?

      Wish: along with that, a 'priority' flag that could be set? Maybe an INI text file listing a set of users (machine names?) with their priority, set on each render slave? That way you could split up the load in the background without having to worry about who's who.

      Reasoning: Artist A has a priority of 50 on machines 1 through 5 and 30 on the rest; Artist B has a 50 on machines 6 through 10... you get the picture - all listed in a little file that can be tossed wherever.

      Artist A starts a test render and it begins to use all ten of the available slaves. Halfway through, Artist B starts a render; since he's got 'dibs' on the latter half of the farm, once those buckets free up from A's render, they begin on his...

      Probably a bit messy on the setup side, but it might make sense for multi-user environments?
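The per-slave priority file described here could be sketched roughly like this. Everything below (the INI format, section and user names, and the scheduler function) is invented purely for illustration; none of it is an actual VRay or Backburner feature:

```python
# Hypothetical sketch of the per-slave priority file idea from the post above.
# The INI format, names, and scheduler are invented; not a real VRay feature.
import configparser

PRIORITIES_INI = """
[slave01]
artistA = 50
artistB = 30

[slave06]
artistA = 30
artistB = 50
"""

cfg = configparser.ConfigParser()
cfg.read_string(PRIORITIES_INI)

def pick_next_user(slave, waiting_users, cfg):
    """When a bucket frees up on `slave`, hand it to whichever waiting
    user has the highest configured priority on that machine."""
    ranked = [(cfg[slave].getint(user, fallback=0), user) for user in waiting_users]
    return max(ranked)[1] if ranked else None

# Artist A is rendering on the whole farm; halfway through, B submits a job.
print(pick_next_user("slave01", ["artistA", "artistB"], cfg))  # artistA keeps it
print(pick_next_user("slave06", ["artistA", "artistB"], cfg))  # artistB has 'dibs'
```

With a file like this dropped on each slave, the "who's who" question reduces to one lookup per freed bucket.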

      Cheers,

      Edit: or, thinking about it, maybe it's easier to just have a priority in the local UI that you can set per job, if the local UI can also show whether a job is currently in progress and who owns it... Backburner-esque.
      Dave Buchhofer. // Vsaiwrk



      • #48
        Please excuse my ignorance, but I've never had the chance to render on a network before. I have always had only one computer. Soon I will be getting a second workstation to make my work more efficient....

        What does distributed render do, and what are its advantages? Do you need multiple licenses of Max? Its obviously a huge deal to a lot of you, so I figure I'd better get myself educated before I actually have a chance to use it.

        Sounds great though!......whatever it is!
        Tim Nelson
        timnelson3d.com



        • #49
          Originally posted by Dave Buchhofer (#47), suggesting per-slave priority flags, or per-job priorities in the local UI, for multi-user DR.
          This is a very good point. Multi-user situations like you describe are something that lots of us will run into. Geddart and I, among others, are currently thinking about some suggestions for how multi-user situations should ideally be handled. It's important that a DR render isn't simply added to a queue, since DR renders are usually needed immediately, while you are tweaking a scene. I'm sure Chaos also already have something planned that will take care of this.
          Torgeir Holm | www.netronfilm.com



          • #50
            Originally posted by Tim Nelson (#48), asking what distributed rendering does, what its advantages are, and whether it needs multiple Max licenses.
            You don't need any extra Max or VRay licences to use VRay DR; neither Max nor VRay needs authorization to contribute to a DR or Backburner render. The benefit of a DR render is that all the machines in your network work on the same image. For example, when I use DR in 1.45.78 on three computers on our network, my render times per image are almost cut in half (the slaves I test with each run at about half the MHz of my main workstation). Keep in mind that DR is still best for stills and Backburner is still best for animation, though maybe DR can be used for precalculating an irradiance map now; I still haven't had time to test this because of deadlines and real work.
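The "almost cut in half" figure checks out with back-of-the-envelope arithmetic: DR throughput is roughly the sum of the machines' relative speeds, ignoring network and bucket-scheduling overhead. The machine speeds below are illustrative, matching the rough description above:

```python
# Sanity check of the "almost cut in half" figure: DR throughput is roughly
# the sum of the machines' relative speeds (overhead ignored).
workstation = 1.0        # main machine, normalized speed
slaves = [0.5, 0.5]      # two slaves at about half the clock speed

solo_time = 1.0 / workstation
dr_time = 1.0 / (workstation + sum(slaves))
speedup = solo_time / dr_time

print(f"{speedup:.1f}x")  # 2.0x, i.e. render times cut in half
```

In practice the real speedup lands a bit under this ideal number, which is why the times are "almost" halved.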
            Torgeir Holm | www.netronfilm.com



            • #51
              I love hearing about new features, especially the ones Torgeir has described. I think private testing is essential in the early stages of a build, really until the build is fairly stable. I am looking forward to the 1.5 release and hope to be able to use it soon!

              However...

              A more prudent approach to informing the buying public of progress and previewing what is to come might be for the information (and answers to questions) to come directly from Chaos Group or a designated spokesperson (like Torgeir, for example). I don't know if it is good practice for it to be publicly known who is privately testing, or for everyone who is privately testing to be able to divulge information about and publicly discuss the current build.

              There are good reasons for the public at large to be left out of the testing at this stage. There are also good reasons for the private testing to remain, for the most part, private. Perfectly nice people end up getting called names if they voice any disappointment (serious or otherwise) at not being privy to the latest and greatest build. That could be avoided in a number of ways.
              You make some good points, Fran. Regarding the practice of divulging information about what is in the new builds, I suppose we are not supposed to say anything without clearing it with the Chaos guys. In this case they gave me the go-ahead to post images and details about what was new in the latest private beta. I would think that the Chaos guys are too busy to be posting previews and answering many questions right now because they are focusing on finishing 1.5. It's hard to find the right balance between giving out information and leaving people out. The beta would do perfectly fine with no information published, but then others would miss the updates...

              As for who is actually testing the latest builds... I'm not sure who all of them are, but I know that a few of the long term customers who have previously had a good dialogue with Chaos Group have the latest build.
              Torgeir Holm | www.netronfilm.com



              • #52
                Torgeir - has there been any discussion about a Backburner-type network rendering manager with the ability to modify any VRay settings? It would be a great asset to have.

                Regards
                Chris Jackson
                cj@arcad.co.nz
                www.arcad.co.nz



                • #53
                  Thanks for the explanation Torgeir. Now I am really looking forward to it!
                  Tim Nelson
                  timnelson3d.com



                  • #54
                    I'm curious about the SubD thing. First, I am very excited, because it is critical to people in film as well as product design and characters.

                    My question is this: how is it subdividing? There are several ways. First, is it even making polygons, or is it treating the object as a "primitive"? If it is creating polygons, is it creating micropolys for the whole scene or based on the camera? If so, does the user control the detail on a per-pixel basis? Basically, does it use the same system that the displacement stuff uses?

                    What about UVW mapping? How will it "smooth" the mapping?



                    • #55
                      Basically, does it use the same system that the displacement stuff uses?
                      Yep, it's based on the same dynamic geometry system as the displacement and the fur; in fact currently this is a part of the displacement modifier.

                      What about UVW mapping? What method will it "smooth" the mapping?
                      The Loop subdivision creates a patch for each triangle of the original surface, and the mapping of each patch matches exactly the mapping of the original triangle. I may try to do an example later on.
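One way to picture the mapping rule described here: however finely a patch is tessellated, each new point can take its UVs straight from the parent triangle's corner UVs via its barycentric coordinates, so the mapping matches the original triangle exactly. A hypothetical sketch, not VRay's actual code:

```python
# Illustrative sketch of per-patch mapping: every point on a Loop patch gets
# its UVs by barycentric interpolation of the parent triangle's corner UVs.
def patch_uv(bary, tri_uvs):
    """bary: (w0, w1, w2) barycentric weights summing to 1;
    tri_uvs: UV pairs at the three corners of the original triangle."""
    u = sum(w * uv[0] for w, uv in zip(bary, tri_uvs))
    v = sum(w * uv[1] for w, uv in zip(bary, tri_uvs))
    return (u, v)

corner_uvs = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]  # the triangle's mapping
print(patch_uv((0.5, 0.5, 0.0), corner_uvs))  # edge midpoint -> (0.5, 0.0)
print(patch_uv((1.0, 0.0, 0.0), corner_uvs))  # a corner keeps its own UV
```

Because the interpolation is linear over the original triangle, the subdivided surface samples the texture exactly where the unsubdivided triangle would have.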

                      Best regards,
                      Vlado
                      I only act like I know everything, Rogers.



                      • #56
                        Concerning render-time subdivisions...

                        Creating subdivisions at render time has many advantages over the Max MeshSmooth method. Max's solution is actually very "dumb": it creates a static subdivision grid before the actual rendering starts, no matter whether we actually see the object (maybe it's hidden) or not. The backside gets the same resolution as the front, and the size of the subdivision polygons always stays the same, so if the camera is far away there are often more than enough polygons, while for close-ups it sometimes isn't detailed enough. Another disadvantage is the memory needed to store all this subdivision data.

                        Render-time subdivision can solve all those problems. Right now VRay's Loop SubDs are part of the VRayDisplacement modifier, with exactly the same tessellation options. (We testers would like to see a dedicated modifier.) The geometry VRay creates is, like displacement, "dynamic", which means small triangles get created per bucket and based on the visibility of the object (or a part of it). So when no ray hits a SubD surface (camera, GI, reflection, or refraction rays, that is), the geometry is not even created! The edge length of the micropolygons is specified in pixels. (All this is the same as for VRayDisplacement.) So VRay's way of handling it is just smart: it takes far less memory and renders faster most of the time. The heavier the underlying geometry becomes, the faster VRay SubDs are compared to MeshSmooth. Some scenes should even be renderable with VRay SubDs that aren't with MeshSmooth due to memory problems.
                        Render-time SubDs behave very nicely in animation too. When the camera approaches an object you can actually see the primitive count go up, but you don't see any "popping" or flipping once the edge length is small enough. (The default 4 pixels works fine.)
                        I will show some examples of render-time MeshSmooth soon.
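The pixel-based edge-length rule can be sketched as follows: keep halving an edge until its projected on-screen length drops to the threshold (4 pixels by default). The pinhole projection model and all numbers here are illustrative assumptions, not VRay's implementation:

```python
# Sketch of view-dependent tessellation: split an edge until its on-screen
# length is at or below the pixel threshold. Crude pinhole model, made-up numbers.
import math

def subdivision_depth(edge_world, distance, focal_px, max_edge_px=4.0):
    """Number of halvings needed before the edge is <= max_edge_px on screen."""
    projected = edge_world * focal_px / distance  # pinhole projection, in pixels
    if projected <= max_edge_px:
        return 0
    return math.ceil(math.log2(projected / max_edge_px))

# The closer the camera gets, the higher the primitive count climbs:
for dist in (100.0, 10.0, 1.0):
    print(dist, subdivision_depth(edge_world=1.0, distance=dist, focal_px=800.0))
```

This is also why distant or unseen geometry costs next to nothing: the depth drops to zero, and in the dynamic scheme the triangles for unhit surfaces are never created at all.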
                        Sascha Geddert
                        www.geddart.de



                        • #57
                          OK, I won't flame, but I want to explain - I know 145.xx is buggy, I know that it shouldn't be used in production (actually, it can be)... etc.!

                          But I just want to use this new piece of software, just for testing purposes... I don't have much free time to thoroughly test this build, but it's just veeery exciting. So - maybe you should make this build available to the public? To excite people, to make them think that VRay is getting closer and closer to 1.5, to prove it.

                          P.S. I know your answers, I realise your reasons... BUT I STILL WANT THIS BUILD TO BE AVAILABLE TO THE PUBLIC

                          Over and Out
                          I just can't seem to trust myself
                          So what chance does that leave, for anyone else?
                          ---------------------------------------------------------
                          CG Artist



                          • #58
                            Quoting #54: "...is the user controlling the detail on a per-pixel basis? Basically, does it use the same system that the displacement stuff uses?"

                            Here's an example of moving the camera closer to a Loop Subdivision object. Look at the primitives (triangles) count in the frame stamp.

                            Torgeir Holm | www.netronfilm.com



                            • #59
                              Originally posted by Vlado (#55): "The Loop subdivision creates a patch for each triangle of the original surface, and the mapping of each patch matches exactly the mapping of the original triangle."
                              The mapping stuff can be tricky with SubDs, as anyone who has used them knows. I look forward to seeing some examples.

                              Also, in RenderMan the artist has the ability to embed crease information in edges and vertices when using SubDs, which gets evaluated at render time. This can be a major pain to control but is still useful. Will we be able to add crease information, or will it rely on adding extra geometry to create a crease? Or will it be able to read crease information from the SubD node in MAX (or Maya... just dreaming here)?

                              Personally I think that sometimes it is better to just add the extra geometry, but exact control would be cool too.
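For reference, the RenderMan-style crease idea mentioned here can be sketched with Loop's rules: the smooth edge-point rule is blended toward the sharp midpoint rule by a per-edge crease weight. This is a generic illustration of the technique; whether VRay will expose anything like it is exactly the open question:

```python
# Generic illustration of a creased Loop edge rule: blend the smooth
# edge-point formula toward the plain midpoint by a crease weight in [0, 1].
def edge_point(v0, v1, v2, v3, crease=0.0):
    """New edge vertex. v0, v1 are the edge endpoints; v2, v3 are the
    opposite vertices of the two adjacent triangles."""
    smooth = tuple(0.375 * (a + b) + 0.125 * (c + d)
                   for a, b, c, d in zip(v0, v1, v2, v3))
    sharp = tuple(0.5 * (a + b) for a, b in zip(v0, v1))
    return tuple((1 - crease) * s + crease * h for s, h in zip(smooth, sharp))

v0, v1 = (0.0, 0.0, 0.0), (2.0, 0.0, 0.0)
v2, v3 = (1.0, 2.0, 0.0), (1.0, -1.0, 0.0)
print(edge_point(v0, v1, v2, v3, crease=0.0))  # smooth: pulled toward neighbours
print(edge_point(v0, v1, v2, v3, crease=1.0))  # fully sharp: plain midpoint
```

With crease=1 the edge stays where the control cage put it, which is the "exact control" alternative to modelling in extra geometry.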



                              • #60
                                Hey thanks for all the great info guys.

                                Chris, maybe I'm not understanding what you're saying, but wouldn't crease information just translate to camera-dependent displacement?
                                PS. Do you have popups blocked? I sent you a PM a while back.

                                --Jon

