  • Vray Proxy Workflow - Very Slow Startup times, need to resolve..

    Hi there,

    We are experiencing very slow startup times with our 3ds Max file containing V-Ray Proxy objects: 26 minutes of startup time on our farm (and workstations) for the first load, and 3-4 minutes of render time after the load, so render time is not the issue, just the file startup time.

    Same for our artists loading the 3ds Max file; it's 20-25 minutes to open.

    The scene file contains 16,000 VRay Proxy objects spread across 5 Layers with only ever 1 Layer visible at once, so 3,200 Proxy Objects on screen at any one time.

    We are using 3ds Max 2023 and VRay 6 Hotfix 3.

    The Proxy workflow is quite important to us: it keeps the core chunk of data we are rendering in a single place and lets it be referenced into the lighting files. We have chosen the 'One Proxy per Object' route for maximum flexibility but didn't really expect this level of slowdown. These animations are 3-4 minutes per frame to render, but there is a serious delay at the beginning.

    Really looking for help on how to identify the cause and hopefully resolve the slow startup times.

    Thanks in advance

  • #2
    Are the proxies over the network? Is it much faster if they are local? You could use a push script or software (e.g. rsync or GoodSync) to send the resources to each node, if that turns out to help.

    It does sound like there ought to be some other solution, though.
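A push script of that sort can be quite small. A minimal sketch in Python, assuming rsync over SSH is available on the nodes; the node names, source path, and cache path are hypothetical and would need to match your farm:

```python
import subprocess

# Hypothetical farm layout -- adjust to your environment.
NODES = ["rnode01", "rnode02", "rnode03"]
SRC = "/mnt/projects/assets/proxies/"   # trailing slash: sync the contents
DEST = "/cache/proxies/"                # fast local drive on each node

def build_push_cmd(src, node, dest):
    """-a copies recursively and preserves times; --delete removes stale
    files so each node ends up mirroring the server exactly."""
    return ["rsync", "-a", "--delete", src, f"{node}:{dest}"]

def push_to_farm():
    for node in NODES:
        subprocess.run(build_push_cmd(SRC, node, DEST), check=True)

if __name__ == "__main__":
    push_to_farm()
```

Run it from the fileserver (or any machine that sees the share) before jobs start; rsync only transfers files that changed, so repeat pushes are cheap.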



    • #3
      Hi,

      Yes, they are over the 10Gbe network which is what we use across all projects.

      I have managed to speed things up by telling our render-pass submission pipeline to include only the visible objects in the file for each pass, rather than keeping all the hidden layers too. This has brought the startup times down from 25 minutes to under 5 minutes.

      It seems like even hidden VRay Proxy objects are automatically being loaded during startup?



      • #4
        Hello,

        It would help if you could send one such scene, together with the proxies, to our support so we can profile it and see what we can do to speed up opening. 25 minutes is definitely a lot.
        Other than that, as Joelaff already mentioned, could you test whether having all those proxies locally makes any difference? Even if the network is 10GbE, it could be an issue when doing many small reads.

        Best regards,
        Yavor
        Yavor Rubenov
        V-Ray for 3ds Max developer
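The many-small-reads effect mentioned above is easy to measure. A quick timing sketch (the file counts and sizes are arbitrary); point the directory at the network share to compare it against a local drive:

```python
import os
import tempfile
import time

def time_reads(paths):
    """Wall-clock time to open and fully read each file in turn."""
    t0 = time.perf_counter()
    for p in paths:
        with open(p, "rb") as f:
            f.read()
    return time.perf_counter() - t0

if __name__ == "__main__":
    # Compare 1,000 small files against one file of the same total size.
    d = tempfile.mkdtemp()
    small = []
    for i in range(1000):
        p = os.path.join(d, f"chunk_{i}.bin")
        with open(p, "wb") as f:
            f.write(os.urandom(4096))
        small.append(p)
    big = os.path.join(d, "monolith.bin")
    with open(big, "wb") as f:
        f.write(os.urandom(4096 * 1000))
    print("1000 small files:", time_reads(small))
    print("one big file:   ", time_reads([big]))
```

On a network share the per-file open/close round trips usually dominate, which is why thousands of small .vrmesh files can be so much slower than their total size suggests.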



        • #5
          Hi there,

          I can do that, what is the best method to send it?

          Working locally is not an option for us.

          Many thanks



          • #6
            You can submit a request here https://support.chaos.com/hc/en-us/requests/new and there you will have an option to upload big files if needed.
            Yavor Rubenov
            V-Ray for 3ds Max developer



            • #7
              Originally posted by brobins View Post
              Hi there,

              I can do that, what is the best method to send it?

              Working locally is not an option for us.

              Many thanks
              Regardless of the outcome of any dev investigation and any further optimisation, you should consider local caching on the rendering machines.
              Adding a single quick drive to each machine can shave years of wasted network comms time.
              Note that this is a known problem over networks (many small reads, as opposed to monolithic, big ones) that has no practical solution besides caching.
              It happens with XMeshes and with tiled textures when those are being cherry-picked for their smallest mips.

              The caching can be awfully simple, and it's something also included in (free) render farm managers.
              Lele
              Trouble Stirrer in RnD @ Chaos
              ----------------------
              emanuele.lecchi@chaos.com

              Disclaimer:
              The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.
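A cache of that kind can indeed be simple: resolve each network path to a local copy, re-copying only when the size or modification time no longer matches. A minimal sketch; the cache root is hypothetical and the path mapping assumes POSIX-style paths:

```python
import os
import shutil

CACHE_ROOT = "/cache"  # hypothetical fast local drive on the render node

def cached_path(network_path, cache_root=CACHE_ROOT):
    """Return a local copy of network_path, copying over the network
    only when the cached file is missing or its size/mtime differ."""
    local = os.path.join(cache_root, network_path.lstrip("/"))
    src = os.stat(network_path)
    try:
        dst = os.stat(local)
        if dst.st_size == src.st_size and int(dst.st_mtime) == int(src.st_mtime):
            return local  # cache hit: no network copy needed
    except FileNotFoundError:
        pass  # not cached yet
    os.makedirs(os.path.dirname(local), exist_ok=True)
    shutil.copy2(network_path, local)  # copy2 preserves the mtime
    return local
```

Subsequent frames (and other jobs reusing the same assets) then read from the local drive; only the first touch of each file pays the network cost.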



              • #8
                Hi there,

                We use Deadline to manage our farm, so I am sure it's possible. Will have a look into it, thank you.



                • #9
                  Originally posted by ^Lele^ View Post
                  Regardless of the outcome of any dev investigation and any further optimisation, you should consider local caching to the rendering machines.
                  Adding a single quick drive to each machine can shave years of wasted network comms time.
                  Notice this is a known problem over networks (that of many small reads as opposed to monolithic, big ones.) that has no practical solution, besides caching.
                  It happens with xMeshes and with tiled textures, when those are being cherrypicked for their smallest mips.

                  The caching can be awfully simple, and it's something also included in (free) render farm managers.
                  Hi there,

                  After a quick look I have not been able to find anything regarding local asset caching for 3ds Max jobs. Do you happen to have any information on this?

                  Thank you



                  • #10
                    Umh, I am not current; I *assumed* it was present in Deadline 10 given this thread on their forums.
                    It seems it's implemented for Maya alone, so you'd have to write your own pre-render process to distribute assets.
                    • Use Local Asset Caching: Enabling local asset caching can reduce network overhead. When enabled, Deadline stores a local copy of all assets used during Maya renders on the Worker. The renderer then uses the assets from the local machine, rather than having to access them from their network location. The assets are stored in the Worker’s cache and can be used by subsequent frames or scenes submitted with the same assets without having to reach out to the network.
                    Which is likely for the best, anyway, as only you will know the precise requirements for a file overwrite or a skip (something that can very much impact overall efficiency, as many assets end up being reused across shots and jobs).
                    There may already be something, an app or a script, doing it.
                    Otherwise, you could theoretically try using DR while submitting frames, maybe with two clients each, as it does have an asset transfer and reuse mechanism.
                    But it'd have to be tried; I've no idea how you are configured, and how this would work, or not, in your case, I'm afraid.
                    Last edited by ^Lele^; 06-06-2023, 01:24 PM.
                    Lele
                    Trouble Stirrer in RnD @ Chaos
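Such a pre-render process needs very little: given the list of assets a job uses, pull just those into the local cache before the render starts. A sketch using rsync's --files-from option (which reads relative paths, one per line); the server name and root paths are hypothetical:

```python
import tempfile

def build_pull_cmd(assets, server, remote_root, local_root):
    """Build an rsync pull that fetches only the given assets.
    'assets' are paths relative to remote_root (e.g. from a job manifest)."""
    listing = tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False)
    listing.write("\n".join(assets) + "\n")
    listing.close()
    return ["rsync", "-a", f"--files-from={listing.name}",
            f"{server}:{remote_root}", local_root]
```

Run as a pre-task on each Worker with the manifest extracted from the job; because rsync skips files that are already up to date, assets reused across shots and jobs transfer only once, which addresses the overwrite-or-skip question above.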



                    • #11
                      Originally posted by yavor.rubenov View Post
                      You can submit a request here https://support.chaos.com/hc/en-us/requests/new and there you will have an option to upload big files if needed.
                      Hi there,

                      This was submitted yesterday, I believe.

                      Should you require anything further please let me know.

                      Thanks



                      • #12
                        This is me just thinking out loud, but could you try caching this out to a V-Ray scene (.vrscene), perhaps, and see whether that is better to load on the farm?

                        You do have quite the case for the proxies, though; Max, and other software generally, is not great at handling thousands of anything. Are those unique objects or instances? Do you have the proxies all loaded directly in the scene, or are they XRefs or some other type of reference into a main scene?
                        Dmitry Vinnik
                        Silhouette Images Inc.
                        ShowReel:
                        https://www.youtube.com/watch?v=qxSJlvSwAhA
                        https://www.linkedin.com/in/dmitry-v...-identity-name



                        • #13
                          Originally posted by Morbid Angel View Post
                          This is me just thinking out loud, but could you try caching this out to vray scene perhaps and seeing if this is better to load that on the farm.

                          You do have quite the case though for the proxies, max generally and other software is not great at handling thousands of anything, are those unique objects or instances? do you have the proxies all loaded directly in the scene or are they xref or other type of refs into a main scene?
                          The vrscene idea might be worth a test; I will try to do that this week.

                          All the proxies are loaded directly into the main Max scene and spread over 5 layers, approx. 3,000 on each layer, and only one layer is ever visible at a time for rendering. Think of each layer as its own render pass.

                          There are no instances; they are all individual objects.

                          Saved with only one layer, the Max file loads in approx. 5 minutes. Saved with all 5 layers, it loads in approx. 25 minutes, so it does seem to be a matter of 'the more you have in the file, the slower it is to open'.



                          • #14
                            15,000 files is also a number where putting these files into multiple directories (if you haven't already -- something like 1,000 items per directory at most) will likely speed up the file accesses.
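Splitting a flat directory into buckets of about 1,000 files can be scripted, though note that moving the .vrmesh files changes the paths the proxies reference, so the scene would need repathing afterwards. A minimal sketch with a hypothetical bucket naming scheme:

```python
import os
import shutil

def shard_into_subdirs(src_dir, per_dir=1000):
    """Move files from one flat directory into numbered subdirectories
    (bucket_000, bucket_001, ...) holding per_dir files each."""
    names = sorted(f for f in os.listdir(src_dir)
                   if os.path.isfile(os.path.join(src_dir, f)))
    for i, name in enumerate(names):
        bucket = os.path.join(src_dir, f"bucket_{i // per_dir:03d}")
        os.makedirs(bucket, exist_ok=True)
        shutil.move(os.path.join(src_dir, name), os.path.join(bucket, name))
    return len(names)
```

The file list is snapshotted before any moves, so the newly created bucket directories are never re-scanned mid-run.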

                            Also, consider configuring your fileserver (assuming *nix) to be case sensitive, which helps with many files in a directory, versus being case insensitive as, e.g., NTFS normally is.

                            You could also try using bonding/link aggregation (LACP) on your fileserver to be able to use multiple 10Gbit links, or upgrade it to 25, 40, or 100 Gbit if your switch supports any of those. Obviously, this is a much bigger tweak, but one with many benefits for the future.


                            Also, Deadline has a function that will remove non-rendering assets from the .max file that it saves for rendering. I would give that a shot as well if you have not already. (I think that may be what you refer to in post #3.)

                            I bet you will see a significant speedup from Morbid Angel's suggestion, though.

                            Please keep us updated; would love to hear how things work out for such a scene.



                            • #15
                              Originally posted by Joelaff View Post
                              15,000 files is also a number where putting these files into multiple directories (if you haven't already-- something like 1000 items per directory max) will likely speed up the file accesses.

                              Also, consider configuring your fileserver (assuming *nix) to be case sensitive, which helps with many files in a directory vs. being case insensitive as, e.g. NTFS, normally is.

                              You could also try to using bonding/link aggregation (LACP) for your fileserver to be able to use multiple 10Gbit links, or upgrade it to 25 or 40 or 100 Gbit if your switch supports any of those. Obviously, this is a much bigger tweak, but with many benefits for the future.


                              Also, Deadline has a function that will remove non-rendering assets from the .Max file that is saves for rendering. I would give that a shot as well if you have not already. (I think that may be what you refer to in post #3)

                              I bet you will see a significant speedup from Morbid Angel's suggestion, though.

                              Please keep us updated; would love to hear how things work out for such a scene.
                              Hi there, thanks for your input.

                              The vrmesh files are currently stored across 5 directories, with approx. 3,000 in each directory. I could try breaking that down further.

                              Regarding Deadline, yes, that is the feature I am using and relying on at the moment; it helps massively and, to be honest, makes sense. Why should we keep data in the render job that is not required for that particular job? It has some drawbacks, which are minor: for example, if the shot camera is hidden it will not be saved with the job file and the job will fail. As I say, this is a minor consideration.

