
A guide to reducing RAM usage in V-Ray


  • #31
    Originally posted by super gnu:
    Can I suggest this, and any other similar guides, are added to the help files?
    +1

    I'd love to see such threads more often. Very helpful stuff.
    German guy, sorry for my English.



    • #32
      Originally posted by Vizioen:

      So it could be an option that Vray at the start of a render enforces the max FB to 1x1 and when render is completed restores the previous resolution?
      Mmmh, could be, but it could also have unintended consequences when, e.g., one renders over a farm.
      It'd have to be tested; it's a good idea, I think.
      I can very likely come up with a pre/post render script set that one could use to do so, and that could work as a proof of concept.

      Thank you for the idea, I'll take a look!
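      In the meantime, here's a rough, untested MaxScript sketch of the kind of pre/post render script set meant above, strictly as a proof of concept: the V-Ray renderer property names used below (output_getsetsfrommax, output_width, output_height) are assumptions to be verified against the V-Ray MaxScript documentation, and whether the callbacks fire early enough to affect the render being started would need testing.

          -- Proof-of-concept only: keep the 3ds Max frame buffer tiny while V-Ray
          -- renders at its own resolution, then restore everything afterwards.
          -- The V-Ray property names are assumptions; verify them before use.
          callbacks.removeScripts id:#tinyMaxFB
          callbacks.addScript #preRender "
              global savedRW = renderWidth
              global savedRH = renderHeight
              vr = renderers.current
              vr.output_getsetsfrommax = false   -- let V-Ray keep the real resolution
              vr.output_width  = renderWidth
              vr.output_height = renderHeight
              renderWidth  = 8                   -- shrink the Max frame buffer
              renderHeight = 8
          " id:#tinyMaxFB
          callbacks.addScript #postRender "
              renderWidth  = savedRW
              renderHeight = savedRH
              renderers.current.output_getsetsfrommax = true
          " id:#tinyMaxFB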
      Lele
      Trouble Stirrer in RnD @ Chaos
      ----------------------
      emanuele.lecchi@chaos.com

      Disclaimer:
      The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



      • #33
        Originally posted by ^Lele^:

        Mmmh, could be, but it could also have unintended consequences when, e.g., one renders over a farm.
        It'd have to be tested; it's a good idea, I think.
        I can very likely come up with a pre/post render script set that one could use to do so, and that could work as a proof of concept.

        Thank you for the idea, I'll take a look!
        Great to hear, Lele. Thanks for listening to (reading) our ideas.
        A.

        ---------------------
        www.digitaltwins.be



        • #34
          Oh, what a nice thread - I've already asked about the math behind instancing and I'm still kinda lost.

          Can somebody scatter 16 million instances (I used the Stanford bunny) along a plane with random scale and rotation (one axis)? I just did, and here are the results:

          Forest Pro - 40 GB
          VRayScatter - 22 GB
          Clarisse - 3.1 GB

          Though, Clarisse renders about twice as slow (I'm kind of a noob there).

          The attached file is for Forest Pro 6.1.1 and the latest V-Ray 4. How do I optimize those scattering plugins to beat Clarisse?
          I just can't seem to trust myself
          So what chance does that leave, for anyone else?
          ---------------------------------------------------------
          CG Artist



          • #35
            Paul, I wonder how the native VRayInstancer object would do as well? There's definitely some room for improvement in Forest! Clarisse is an utter beast of an engine, but the pictures that come out of it suck at the minute.



            • #36
              Originally posted by Paul Oblomov:
              Oh, what a nice thread - I've already asked about the math behind instancing and I'm still kinda lost.

              Can somebody scatter 16 million instances (I used the Stanford bunny) along a plane with random scale and rotation (one axis)? I just did, and here are the results:

              Forest Pro - 40 GB
              VRayScatter - 22 GB
              Clarisse - 3.1 GB

              Though, Clarisse renders about twice as slow (I'm kind of a noob there).

              The attached file is for Forest Pro 6.1.1 and the latest V-Ray 4. How do I optimize those scattering plugins to beat Clarisse?
              Assume they are optimised as best they can be, living inside Max.
              Forest may need the added RAM to be able to create variants; VRayScatter may not.
              Either way, scattering performance isn't a topic in this thread, so it'd be nice for everyone else if you started a specific one.
              Last edited by ^Lele^; 01-08-2018, 12:27 AM.
              Lele
              Trouble Stirrer in RnD @ Chaos
              ----------------------
              emanuele.lecchi@chaos.com

              Disclaimer:
              The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



              • #37
                Thanks Lele!
                Really informative subject.

                -------------------------------------------------------------
                Simply, I love to put pixels together! Sounds easy right : ))
                Sketchbook-1 /Sketchbook-2 / Behance / Facebook



                • #38
                  Hi Lele,

                  Thank you for a great article. It certainly did clarify many things regarding how voxels and tiled textures work. I just have a question related to the V-Ray mesh export and the "Export point cloud" option.
                  When it appeared for the first time, I thought it was going to be one of the best V-Ray features ever for people who work with big scenes.
                  Unfortunately, I could never make it work. As I understand it, it is supposed to simplify geometry based on proximity to the camera, but in my case the jump was always so sudden that it was useless in production. I experimented with hundreds of point sizes but could never make it work. Is it unit or screen-size based?

                  Can you explain how it works and how to fully use its benefits?

                  (BTW I'm on V-Ray 3.60.03, Max 2014)


                  regards

                  Zoran3d



                  • #39
                    The Point-Cloud option is an additional mesh abstraction, where the average properties of a cluster of faces will be written onto an oriented disk (the "point" in the name).
                    The structure is written directly into the proxy and, as such, isn't aware of any camera parameter: the point size is expressed as a percentage of the object's bounding box size.
                    Because of this, it's not viable for automated use: the user needs to know what is being proxied, and how small the smallest detail to be represented is compared to the size of the whole object.
                    The same applies when the proxy + point cloud representation gets loaded: turning the PC display on won't do anyone much good without tweaking the "Level multiplier" option to properly suit that proxy's distance from the observer.
                    So, say you have a very detailed hero tree in the many-millions-of-polys range.
                    If you wanted to reuse it as an instance in a distant forest as well, the point cloud option would help: you'd make a copy of the hero tree proxy, and for the distant forest ones you'd turn on the point representation and set a suitably low LoD (by raising the "Level multiplier" parameter).

                    What one cannot do is expect a point representation to smoothly interpolate across LoDs (do not, in other words, animate the "Level multiplier": results won't be pretty, but they will be funky), and much less expect it to closely match the original geometry.
                    While it could well be made identical to the source geometry (with a suitably tiny disc size), the exercise would defeat its own purpose by generating tons of new geometry as oriented disks instead of triangles.
                    The proxy size on disk, its loading time, and the memory occupation wouldn't then be optimal.
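                    To put rough numbers on the percent-of-bounding-box relation above (the figures below are made up purely for illustration), here's a quick MaxScript back-of-the-envelope calculation:

                        -- Illustrative numbers only: how the saved lowest point size and the
                        -- "Level multiplier" combine into the disc size used at render time.
                        bboxSize        = 2000.0  -- hero tree bounding box, e.g. ~20 m in mm units
                        lowestPointSize = 0.5     -- export setting: smallest disc is 0.5% of the bbox
                        levelMultiplier = 10.0    -- LoD control on the loaded proxy

                        -- At a multiplier of 1.0 the discs are as saved; at 10.0 they are ten
                        -- times bigger, with correspondingly fewer of them.
                        discSize = bboxSize * (lowestPointSize / 100.0) * levelMultiplier
                        format "Disc size at render time: % scene units\n" discSize  -- -> 100.0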

                    Lele
                    Trouble Stirrer in RnD @ Chaos
                    ----------------------
                    emanuele.lecchi@chaos.com

                    Disclaimer:
                    The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



                    • #40
                      Lele,
                      Thanks for the quick reply. Nicely explained. Now things are more clear (I think, hehe...).

                      - So, when exporting, the Lowest Level Point Size is the smallest particle/disc diameter, expressed as a percentage of the bounding box.
                      - The Level multiplier is basically a trigger to switch to the PC, based on distance from the camera.
                      - It's an iterative process where one would need to export/import/test-render until the "sweet spot" is found?
                      - If that is the case, would it be possible to code some sort of multiple lowest level point sizes (similar to mip-mapping), which would give more flexibility when using the PC? Something like: I want to export this mesh, 4 levels, from 0.02-2.0?

                      Cheers
                      Zoran3D



                      • #41
                        -) Yes.
                        -) No, the switch is above the spinner. The Level multiplier changes the point size, so that at 1.0 it is as it was saved, at 10.0 there are ten times fewer points, each ten times bigger, and so on.
                        -) More or less, but one needs just one save - to test - at a low enough point scale, and can then multiply the level up until it suits the scene. The one drawback is maybe wanting to resave at a higher point size/lower density, so as not to burden the files on disk with needless detail.
                        -) This can already be done by exporting at 0.2 and then multiplying (Level multiplier) up to 10.
                        Lele
                        Trouble Stirrer in RnD @ Chaos
                        ----------------------
                        emanuele.lecchi@chaos.com

                        Disclaimer:
                        The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



                        • #42
                          Thanks Lele



                          • #43
                            This topic is very helpful; it should be stickied alongside the top posts in yellow.

                            Questions on VRayHDRI loader and tiled textures (and using V-Ray 3.6 if there is any difference vs. V-Ray Next):

                            1) OK, V-Ray chooses the best size resolution to use with mipmap-sized Tx or EXRs. But when using a jpg, tif, png, or other format, is there any RAM savings at all between using a mipmap-sized texture (such as 1024 x 1024) versus a non-mipmap size texture (such as 1503 x 734)?

                            2) If V-Ray does not reduce mipmap-sized jpg, tif, or png textures to save RAM like it does for tiled Tx or EXR textures, are there any RAM negatives with reducing the blur value to 0 with jpg/tif/pngs as there are with tiled EXRs? Is a blur of 0.01 still a no-no?

                            3) What are people's recommended workflows to get a mipmap-sized texture when your original texture is some odd rectangular size? If you are constantly in Photoshop tweaking and adjusting the texture, I would naturally want to view and adjust it at true aspect ratio rather than squishing or stretching an odd rectangular size into a square mipmap resolution. Do people simply bite the bullet, work in PSD with an incorrect aspect ratio that matches a mipmap, and then stretch the proportions back within the HDRI Loader, or instead work in PSD at true aspect ratio, then constantly have to stretch the image before saving out of PSD every time?

                            Thanks,
                            Matt



                            • #44
                              Originally posted by FlynnAD:
                              This topic is very helpful; it should be stickied alongside the top posts in yellow.
                              Will ask.

                              Questions on VRayHDRI loader and tiled textures (and using V-Ray 3.6 if there is any difference vs. V-Ray Next):
                              There's no substantial difference.

                              1) OK, V-Ray chooses the best size resolution to use with mipmap-sized Tx or EXRs. But when using a jpg, tif, png, or other format, is there any RAM savings at all between using a mipmap-sized texture (such as 1024 x 1024) versus a non-mipmap size texture (such as 1503 x 734)?
                              The mipmap process doesn't necessarily need a square image. V-Ray takes care of it for you.
                              There should be a memory saving, as such, if the texture resolution to be loaded isn't the highest one.
                              Even if the highest resolution, or a non-mipmapped image, were loaded, the filtering process in VRayHDRI maps is optimised for speed, quality and memory usage, so it should still provide a better overall experience.

                              2) If V-Ray does not reduce mipmap-sized jpg, tif, or png textures to save RAM like it does for tiled Tx or EXR textures, are there any RAM negatives with reducing the blur value to 0 with jpg/tif/pngs as there are with tiled EXRs? Is a blur of 0.01 still a no-no?
                              Let me quote myself (horrible habit):
                              https://forums.chaosgroup.com/forum/...38#post1005238

                              Notice that setting Blur to 0.01 will force only the highest map resolution to ever load, and deactivate filtering, negating the memory saving, degrading image quality, and increasing render times.
                              Should it not be clear yet: DON'T EVER set Blur to 0.
                              Notice that 0.01 and 0.0 are conceptually identical (in Max, at least, where inputting 0.0 in Blur clamps it to 0.01).

                              3) What are people's recommended workflows to get a mipmap-sized texture when your original texture is some odd rectangular size? If you are constantly in Photoshop tweaking and adjusting the texture, I would naturally want to view and adjust it at true aspect ratio rather than squishing or stretching an odd rectangular size into a square mipmap resolution. Do people simply bite the bullet, work in PSD with an incorrect aspect ratio that matches a mipmap, and then stretch the proportions back within the HDRI Loader, or instead work in PSD at true aspect ratio, then constantly have to stretch the image before saving out of PSD every time?
                              See point 1. There's no need to worry about a thing.
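                              As an aside, for anyone producing tiled, mipmapped textures from arbitrarily sized originals, the conversion can be scripted around maketx (the command-line tool linked on page 1). Here's a hedged MaxScript sketch: the folder path is hypothetical, and only the basic "maketx -o out.tx in.jpg" invocation is assumed - check maketx --help for the exact options of your build.

                                  -- Sketch: batch-convert a folder of JPGs into tiled, mipmapped .tx
                                  -- files by shelling out to maketx. The folder is hypothetical and
                                  -- the plain "-o" usage is assumed; see maketx --help for details.
                                  inputDir = "C:\\textures\\"
                                  for f in (getFiles (inputDir + "*.jpg")) do
                                  (
                                      txName = (getFilenamePath f) + (getFilenameFile f) + ".tx"
                                      HiddenDOSCommand ("maketx -o \"" + txName + "\" \"" + f + "\"") startpath:inputDir
                                  )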
                              Lele
                              Trouble Stirrer in RnD @ Chaos
                              ----------------------
                              emanuele.lecchi@chaos.com

                              Disclaimer:
                              The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



                              • #45
                                I downloaded the maketx_1.4.12 zip from the link on page 1. The zip file has an .exe and three .dlls. Double-clicking the exe opens a Windows command prompt window for a split second, then closes it, seemingly as if nothing happened. No dialog box or anything. Do the exe and dll files get dropped into the C:\Windows\Program...3dsmax\plugin folders or somewhere else in order to use them? Dragging and dropping them into 3ds Max did not do anything. Nor did right-clicking the exe and trying to run it as an administrator.

                                Thanks,
                                Matt

