How is image data transfer handled during rendering?

  • #1

    I am having trouble finding any documentation on the flow of data at render time for images used in materials.

    In case the subject is too broad, here are the specific questions I am looking to answer:

    1. Is the entirety of the image data used in a scene transferred to a local cache (RAM/pagefile) at render start, is it read and used directly from its original location as needed, or is it transferred to the local cache only as needed?

    2. If the image data is stored in a local cache, is that data kept and referenced until the render ends, or is it flushed and re-transferred as needed?

    3. Does this behavior change based on the rendering method (local, DR, Backburner) and settings (such as when "Include Maps" is checked in a Backburner submission)?

    4. Do different render engines use different approaches to this, or are they all pretty much the same?
    Ben Steinert
    pb2ae.com

  • #2
    For textures, if that's what you mean, the scene itself stores just a file name. It is up to the user to ensure that all render servers can read that file.
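
    Since the scene stores only file names, a simple pre-flight step is to verify from each node that every path is readable. A minimal Python sketch of that check, assuming the texture paths have already been collected from the scene (the UNC paths below are made up):

    ```python
    import os

    # Hypothetical texture paths as a render node would resolve them; in
    # practice these would be collected from the scene's bitmap nodes.
    texture_paths = [
        r"\\fileserver\assets\textures\brick_diffuse.jpg",
        r"\\fileserver\assets\textures\brick_normal.png",
    ]

    # Report every texture file this machine cannot read.
    missing = [p for p in texture_paths if not os.path.isfile(p)]
    for p in missing:
        print("UNREADABLE:", p)
    if not missing:
        print("All texture paths are readable from this machine.")
    ```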

    There are other ways in which this could be handled. For V-Ray 3.0, for example, we coded automatic transfer of the textures as they are needed by the render servers, and each render server can cache the textures locally. There are various options for when to clear the local cache: at the end of the rendering, when the cache exceeds a certain disk size, or after a certain number of days.
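
    As a rough illustration of such a scheme (not V-Ray's actual implementation), a per-node disk cache with the two eviction knobs mentioned, a size limit and a maximum age, could look like this; the cache location and limits are placeholder values:

    ```python
    import os
    import shutil
    import time

    CACHE_DIR = "/var/tmp/texcache"   # hypothetical per-node cache location
    MAX_BYTES = 20 * 1024**3          # example limit: 20 GB of cached textures
    MAX_AGE_DAYS = 7                  # example limit: evict anything older than a week
    os.makedirs(CACHE_DIR, exist_ok=True)

    def fetch_texture(remote_path):
        """Copy a texture into the local cache on first use; reuse it afterwards."""
        local_path = os.path.join(CACHE_DIR, os.path.basename(remote_path))
        if not os.path.exists(local_path):
            shutil.copy2(remote_path, local_path)
        os.utime(local_path)          # touch: mark the entry as recently used
        return local_path

    def evict_cache():
        """Drop entries older than MAX_AGE_DAYS, then oldest-first until under MAX_BYTES."""
        now = time.time()
        entries = sorted(
            (os.path.getmtime(p), os.path.getsize(p), p)
            for p in (os.path.join(CACHE_DIR, f) for f in os.listdir(CACHE_DIR))
        )
        total = sum(size for _, size, _ in entries)
        for mtime, size, path in entries:   # oldest entries first
            if (now - mtime) > MAX_AGE_DAYS * 86400 or total > MAX_BYTES:
                os.remove(path)
                total -= size
    ```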

    For V-Ray for Rhino, we used to pack the scene along with all the textures before sending it to the render servers; however, we recently switched to the method above, as it works more reliably.

    How other render engines handle this, I don't know; you will have to ask.

    Best regards,
    Vlado
    I only act like I know everything, Rogers.



    • #3
      Thank you for the response!

      My reason for asking was to determine how effective it is to store assets necessary for rendering on a local SSD as opposed to a network location. There seem to be far fewer opportunities for latency if these assets are stored locally on an SSD, but for all I knew, fetching assets could have been a drop in the bucket compared to what really affects rendering speed. If I understand your response correctly, though, those assets are loaded on demand from the path stored with the scene once the buckets hit them.

      I have set up my render farm with SSDs and a synchronized assets folder, and I never stopped to consider that this might be overkill or unnecessary, but a recent discussion made me think that utilizing those SSDs elsewhere might be of more benefit. The slaves were quite an upgrade over what we had previously, so they would have been considerably faster with or without the SSDs.

      After your response I feel that it is still a significant benefit to leave the SSDs in the render nodes.

      Thanks again
      Ben Steinert
      pb2ae.com



      • #4
        1. If you use conventional texture formats like PNG, TGA, or JPG, the texture is read from its network or disk location when the render executes. When the first ray intersects geometry that carries the texture, the file is read from that location and loaded into RAM. Of course, if you run out of RAM, for example because you have a lot of textures, the data spills over to the swap disk, which takes a long time to process.

        2. In 3ds Max, the bitmaps are kept in RAM after your first render, so subsequent renders within a single 3ds Max session reuse the textures from RAM. Once Max is restarted, that RAM is released as well.

        3. When you include maps in a Backburner submission, the textures are copied to the render machine as part of the job package; if you have a slow network, this may be a good option. Ideally, though, you would keep everything in one central location and have all machines read from there. Depending on your farm size you may need different configurations: 1-5 machines can read from a single server disk; 5-15 could use a proper storage server such as a RAID; for anything over 20 I would recommend a professional storage system like Isilon.

        If you always copy the textures to the local machines, then every time you need to update the maps you will have to repeat the copy for every machine, which can become a problem.

        4. Most render engines use similar logic, though there are some alternatives. For example, tiled textures load only certain parts of the map into memory and unload them once that portion of the rendering is done (see the sketch after this list). This is a more advanced method, only needed in complex situations where the RAM limit is being exceeded.
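
        To make points 1, 2 and 4 concrete, here is a toy Python sketch of on-demand, tiled texture access with a session-wide RAM cache. This is not how any particular renderer is implemented; it assumes a raw 8-bit grayscale image file and an arbitrary 64-pixel tile size:

        ```python
        import os

        TILE = 64          # tile edge in pixels; an arbitrary example value
        ram_cache = {}     # (path, tx, ty) -> tile bytes; lives for the whole session

        def load_tile(path, width, tx, ty):
            """Read one TILE x TILE window of a raw 8-bit grayscale image.
            Only this window enters RAM; the rest of the file stays on disk."""
            key = (path, tx, ty)
            if key in ram_cache:                      # point 2: later renders reuse RAM
                return ram_cache[key]
            rows = []
            with open(path, "rb") as f:
                for row in range(ty * TILE, (ty + 1) * TILE):
                    f.seek(row * width + tx * TILE)   # jump straight to the needed pixels
                    rows.append(f.read(TILE))
            ram_cache[key] = b"".join(rows)           # point 1: loaded on first access only
            return ram_cache[key]

        def sample(path, width, x, y):
            """Fetch the texel at (x, y), pulling its tile into RAM on first touch."""
            tile = load_tile(path, width, x // TILE, y // TILE)
            return tile[(y % TILE) * TILE + (x % TILE)]
        ```

        Conventional formats (point 1) follow the same pattern, just with the whole image loaded as a single tile on first access.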
        Dmitry Vinnik
        Silhouette Images Inc.
        ShowReel:
        https://www.youtube.com/watch?v=qxSJlvSwAhA
        https://www.linkedin.com/in/dmitry-v...-identity-name



        • #5
          Another option to cope with big texture data is to use the WavGen Plugin: http://www.wavgen.com/index.php?rout...ry&path=65_123
          www.hofer-krol.de
          Visualization | Animation | Compositing



          • #6
            Thanks for the further explanation, Dmitry. I thoroughly understand how it works now and am confident that it is best to utilize the SSDs in the render nodes.

            Originally posted by Morbid Angel View Post
            If you always copy the textures to the local machines, then every time you need to update the maps you will have to repeat the copy for every machine, which can become a problem.
            We use a piece of software called DSynchronize to automate these updates. Basically, we leave it up to each artist to click the update button when a file is added or changed; it then pushes only the changed data to each workstation, render node, and a backup location on the server, which is itself backed up incrementally for damage control.
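
            The core of such a push is just a per-file timestamp comparison. A minimal Python sketch, with a made-up folder layout (real tools like DSynchronize also handle deletions, locking, and retries on top of this):

            ```python
            import os
            import shutil

            def sync(src_root, dest_roots):
                """Push files that are new or newer (by mtime) from src_root to every destination."""
                for dirpath, _, filenames in os.walk(src_root):
                    rel = os.path.relpath(dirpath, src_root)
                    for name in filenames:
                        src = os.path.join(dirpath, name)
                        for dest_root in dest_roots:
                            dest = os.path.join(dest_root, rel, name)
                            if (not os.path.exists(dest)
                                    or os.path.getmtime(src) > os.path.getmtime(dest)):
                                os.makedirs(os.path.dirname(dest), exist_ok=True)
                                shutil.copy2(src, dest)   # copy2 keeps mtime for the next comparison

            # Hypothetical layout: one assets folder pushed to two nodes and a backup share.
            sync(r"D:\assets", [r"\\node01\assets", r"\\node02\assets", r"\\server\backup\assets"])
            ```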

            Thank you as well, henning, for the link. I have seen software like this before, but the scope of such solutions always seems to eclipse what I need for my purposes. WavGen seems like a much more advanced form of mip-mapping, which I dabbled in a bit when modding Halo PC. I am dealing with machines that have only 16 GB of RAM, and for my simple needs I do not come near that ceiling. I do think it is good to hang on to the knowledge of this software, though, as someday all textures may be handled this way; it seems significantly more efficient.
            Ben Steinert
            pb2ae.com



            • #7
              Originally posted by henning View Post
              Another option to cope with big texture data is to use the WavGen Plugin: http://www.wavgen.com/index.php?rout...ry&path=65_123
              Isn't that basically the same as using tiled EXRs?
              http://www.spot3d.com/vray/help/maya.../tiled_exr.htm
              Rens Heeren
              Generalist
              WEBSITE - IMDB - LINKEDIN - OSL SHADERS



              • #8
                Originally posted by Rens View Post
                Isn't that basically the same as using tiled EXRs?
                It's supposed to give much better compression, as it only stores the differences from one mip-map level to the next (or so I'm guessing).
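
                A toy sketch of that guessed scheme: store the smallest mip level whole, and store every larger level only as its residual against a prediction upsampled from the level below. Nothing here reflects WavGen's actual format; the point is that on smooth images the residuals stay small and compress well:

                ```python
                import numpy as np

                def upsample(level):
                    """Nearest-neighbour upsample: each texel becomes a 2x2 block."""
                    return level.repeat(2, axis=0).repeat(2, axis=1)

                def encode(mips):
                    """mips[0] is the smallest level; store it whole, then store every
                    larger level as its difference from the upsampled level below."""
                    residuals = [mips[0]]
                    for lo, hi in zip(mips, mips[1:]):
                        residuals.append(hi.astype(np.int16) - upsample(lo).astype(np.int16))
                    return residuals

                def decode(residuals):
                    """Rebuild the mip chain by adding each residual back onto its prediction."""
                    mips = [residuals[0]]
                    for r in residuals[1:]:
                        mips.append((upsample(mips[-1]).astype(np.int16) + r).astype(np.uint8))
                    return mips

                # Round-trip check on a toy mip chain (4x4 base, 8x8 top level).
                base = np.arange(16, dtype=np.uint8).reshape(4, 4)
                top = upsample(base) + 1          # smooth image: residuals are all 1
                assert np.array_equal(decode(encode([base, top]))[1], top)
                ```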

                Best regards,
                Vlado
                I only act like I know everything, Rogers.



                • #9
                  Thanks, good to know!
                  Rens Heeren
                  Generalist
                  WEBSITE - IMDB - LINKEDIN - OSL SHADERS
