The 64-bit version also freezes at large output resolutions.


  • The 64-bit version also freezes at large output resolutions.

    With the V-Ray for Rhino 5 1.50.216 64-bit version, if the rendering
    output resolution exceeds 15000 x 12000 pixels, Rhino 5 freezes.
    There is 12GB of system memory.

    17000 x 13000 = freeze
    15000 x 12000 = freeze
    14000 x 13000 = freeze

    14500 x 12500 = ok
    14000 x 12000 = ok
    13000 x 12000 = ok

    If saved as a raw image, there is no problem,
    but that is not the issue in question here.

    Please check and fix.

    OakCorp Japan - Yuji Yamauchi
    oakcorp.net
    v-ray.jp

  • #2
    The VFB is displaying a raw image with no compression at that resolution for each channel. So an image at 14,000 x 13,000 would have 182,000,000 pixels. Each pixel uses one byte per RGB channel I believe, so that would be 546,000,000 bytes (approx 520MB) per VFB channel (RGB, Alpha, Z-Depth, etc.) that you are rendering (someone please correct my understanding of raw images if I'm mistaken). So that's just the memory that is being used to display the image.

    On top of that, there is quite a bit of processing going on in the background which can take up a lot of memory (depending upon the scene), and then the host application itself is going to use some memory to run. Your operating system likely takes up about 1GB of memory to run without any apps open, and any other apps you have open are going to take up more memory too. So you could easily gobble up about 5GB of RAM with just your OS and your render going.

    Have you watched your task monitor to see how much available memory there is (performance tab of the task manager) when the freeze occurs? If it's close to 12GB, then the freeze is occurring because you are out of memory that we are able to access and use properly. That's why the option to render directly to disk exists: to alleviate that heavy RAM hit from the VFB storing all of that raw image data in memory.

    If it's not anywhere close to using 12GB when the freeze occurs, please let us know how much is available so we can try reproducing the issue here at our office.
    Best regards,
    Devin Kendig
    Developer

    Chaos Group



    • #3
      I understand that a huge frame buffer consumes system memory in large quantities.
      But most memory is not consumed when the VRayVFB freezes. (It is around 4GB on my system.)

      "please let us know how much is available so we can try reproducing the issue here at our office."
      Please set up a simple scene. (one plane only, no raw output, no lights, no GI, no render elements)

      Maybe 16000 x 11184 can be rendered.
      Can a rendering be done at 16000 x 11185?

      OakCorp Japan - Yuji Yamauchi
      oakcorp.net
      v-ray.jp



      • #4
        Devin, could it be possible to avoid the freeze, to display a warning, or to allow the process to be stopped, since only a few hundred MB are used during the freeze?

        I did a test: ratio limited to 800x600, and then I changed the resolution. I opened a fresh Rhino file, drew a cube and rendered it.

        15000x11250 -> 4.4GB RAM

        alpha channel disabled
        15000x11250 -> 2.4GB RAM

        Then I set 15100, 15200, 15300, 15400, and at 15500 Rhino freezes. Devin, could the team check why it freezes at this slightly higher resolution, instead of the RAM usage only increasing? From 15000 to 15400 the RAM usage was only approx. 2.4GB. Something looks wrong at 15500. Maybe the limit is different on your machine, but I suppose it's there.
        www.simulacrum.de - visualization for designer and architects



        • #5
          I did my math wrong. I was assuming 1 byte per pixel for some reason... I also was forgetting that each channel needs contiguous memory. Here is a more accurate description from Joe, of what's going on:

          Frame buffer number of pixels = 16,000 * 16,000 = 256,000,000
          Each pixel has 3 components ( RGB ) where each component is 4 bytes = 3 * 4 = 12
          Total number of bytes for a single RGB vfb channel = 256,000,000 * 12 = 3,072,000,000

          Turning that from bytes into megabytes we get 3,072,000,000 / 1024 / 1024 = 2,929.68 MB

          So the formula for calculating the amount of ram for a given render channel would be
          memoryMegabytes = width * height * numComponents * 4 / 1024 / 1024
          where numComponents will be 3 for an rgb channel, or 1 for something like alpha


          So that means that each RGB VFB channel would require about 3 gigabytes of RAM. The alpha channel would only be about 1GB since there is only 1 component. So RGB + Alpha would require about 4GB of ram just to store the pixels alone. This does not count other intermediate data structures that are required to handle color mapping and sampling such an enormous amount of pixels.
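
          Joe's formula above can be sketched as a quick Python check (the function name and defaults are my own, assuming 4 bytes per component as described):

          ```python
          def channel_memory_mb(width, height, num_components, bytes_per_component=4):
              """RAM needed to store one raw VFB render channel, in megabytes."""
              return width * height * num_components * bytes_per_component / 1024 / 1024

          # 16000 x 16000 RGB channel (3 components), as computed above
          print(channel_memory_mb(16000, 16000, 3))  # ~2929.69 MB
          # Alpha has a single component
          print(channel_memory_mb(16000, 16000, 1))  # ~976.56 MB
          ```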

          On top of that you have all other V-Ray scene information, all other loaded applications, etc. What makes it even harder is that I believe each VFB render channel is expected to be contiguous in RAM (meaning a giant 3GB block of free RAM that is not broken up would be required for the RGB channel to be allocated). If a user had significant amounts of RAM they might be able to do this, but the contiguous block of RAM is the real killer, because finding a contiguous 3GB block is not likely unless you have maybe 20GB of RAM (this is just a guess, but I know allocating an unused block of several GB is generally going to fail unless you have massive amounts of RAM).

          The vrimage functionality was designed for just this reason - it offloads all that space to the hard-drive and it only loads portions of the overall image into ram when they are being rendered. This allows us to do a piecemeal render without requiring everything loaded into memory all at the same time.
          I suppose we should do some additional error checking to verify that the render will be able to be viewed in the VFB prior to allowing the user to begin the render, but we currently don't do that. I hope that helps.

          [Edit]
          Sorry for missing your post Micha, I should have refreshed before posting a response.
          Best regards,
          Devin Kendig
          Developer

          Chaos Group



          • #6
            I did a test render, or attempted to, at 14000 x 13000. I started the render about 2 hours ago, and it's still trying to open the VFB. I have been keeping an eye on the memory usage of the process during this time, and I have seen it slowly climb from a couple hundred mb, all the way up to 1.5gb. It's still trying to do something, but I need to free up my machine for development and can not wait for this any longer. You are spot on though Micha, we should do some sort of error checking to prevent users from sitting there waiting for V-Ray to find enough contiguous memory for displaying the various channels in the VFB.
            Best regards,
            Devin Kendig
            Developer

            Chaos Group



            • #7
              The slow start over 2 hours is the "freeze" we mean.

              Try a lower value and increase it in small steps. There is a limit above which the frame buffer "freezes", but the limit is reached at quite low RAM usage if the alpha channel is disabled.
              At the moment I'm happy that we can test it without a hard-coded limit; maybe we have found a bug.
              www.simulacrum.de - visualization for designer and architects



              • #8
                Micha - You have already reported this as a bug in case 7244. The issue was deemed to be related to not having enough contiguous memory on 4/12. We can reopen the case and look into it further.

                I think that the warning message is a good idea for whatever the limit on your system actually may be, if we can find a reliable way to do the pre-check in a platform-independent way (in theory this issue should also affect V-Ray for SketchUp, which is available in both Windows and OSX flavors). Another obstacle is that we may have to estimate how much memory will be needed for the calculation of the scene prior to rendering, as this will affect the amount of available contiguous memory that the VFB can use. Either way, I will reopen the bug report so that it can be investigated further. Thank you both for your feedback.
                Last edited by dkendig; 03-07-2012, 10:34 AM.
                Best regards,
                Devin Kendig
                Developer

                Chaos Group



                • #9
                  case 7244 ... What is interesting now is that a small increase of the image output size causes the previous, quite low 2.4GB RAM usage not to be reached again; the start process is "freezing". I'm curious what the team will find. Thank you.
                  www.simulacrum.de - visualization for designer and architects



                  • #10
                    I posted to the 3ds Max forum.
                    There was an answer from Vlado.

                    http://www.chaosgroup.com/forums/vbu...043#post555043

                    Currently we assume that one render element in the VFB (be it RGB, or alpha, or something else) can take up to 2^31 bytes in RAM (2147483648 bytes). 16000x11185 x 12 bytes per pixel gives 2147520000 bytes, which is above that limit (whereas 16000x11184 is 2147328000, which is below). As Svetlozar pointed out, we will look into it, but for the moment, you can simply render directly to a .vrimg file.

                    Best regards,
                    Vlado
                    Maybe it will be solved in the future.
                    Thank you!

                    OakCorp Japan - Yuji Yamauchi
                    oakcorp.net
                    v-ray.jp
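
                    Vlado's per-element limit can be expressed as a small sketch (the function name is my own; it assumes the 12 bytes per RGB pixel quoted above):

                    ```python
                    VFB_CHANNEL_LIMIT = 2**31  # 2147483648 bytes per VFB render element

                    def fits_in_vfb(width, height, bytes_per_pixel=12):
                        """True if one RGB render element stays within the 2^31-byte limit."""
                        return width * height * bytes_per_pixel <= VFB_CHANNEL_LIMIT

                    print(fits_in_vfb(16000, 11184))  # True  (2,147,328,000 bytes, below the limit)
                    print(fits_in_vfb(16000, 11185))  # False (2,147,520,000 bytes, above the limit)
                    ```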



                    • #11
                      Hi VfR team,

                      at the other thread I got the suggestion:

                      "a 300 MPx image, likely at 32bpc, is always going to be a pain to work with.
                      If you had max or maya, you could split the alpha out directly.
                      Can you not force the .exr extension in vray for rhino? That would at least save you the file size and disk access pains..."

                      I'm not sure what it means, so my question is: would it be a solution for VfR?
                      www.simulacrum.de - visualization for designer and architects



                      • #12
                        You can render directly to an EXR or vrimage. If you want multiple channels, use the vrimage.
                        Best regards,
                        Devin Kendig
                        Developer

                        Chaos Group



                        • #13
                          I tested it, and now I ask myself: could it be possible to optionally disable the frame buffer if the rendering is written to the EXR file? If the frame buffer is opened, the size limit remains, and the EXR output doesn't help.
                          www.simulacrum.de - visualization for designer and architects

