Just curious: how big is your largest render ever? (In resolution, not number of frames.)


    I was just curious what the largest render anyone here has ever attempted in V-Ray. I'm doing a test render at about 16Kx4K right now, but if it goes well I will be attempting a 33600x8550 pixel render for the final mural.

    So what was your biggest render? How many machines/cores did you use, and how long did it take?

  • #2
    You might find the following thread useful:

    http://forums.chaosgroup.com/showthr...mum+resolution

    Originally posted by vlado View Post
    Currently we assume that one render element in the VFB (be it RGB, or alpha, or something else) can take up to 2^31 bytes in RAM (2147483648 bytes). 16000x11185 x 12 bytes per pixel gives 2147520000 bytes, which is above that limit (whereas 16000x11184 is 2147328000, which is below). As Svetlozar pointed out, we will look into it, but for the moment, you can simply render directly to a .vrimg file.

    Best regards,
    Vlado
    Svetlozar Draganov | Senior Manager 3D Support | contact us
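The arithmetic in vlado's post checks out; here is a minimal Python sketch of the per-element limit, using the 12-bytes-per-pixel figure from his post:

```python
# Quick check of the VFB per-element limit described above: each render
# element (RGB, alpha, etc.) may take at most 2**31 bytes in RAM, at
# 12 bytes per pixel (three 32-bit floats for an RGB element).
BYTES_PER_PIXEL = 12
LIMIT = 2 ** 31  # 2147483648 bytes per render element

def element_bytes(width, height, bpp=BYTES_PER_PIXEL):
    """Memory one render element needs at the given resolution."""
    return width * height * bpp

print(element_bytes(16000, 11184))  # 2147328000 -> just under the cap
print(element_bytes(16000, 11185))  # 2147520000 -> just over the cap
```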



    • #3
      17000x15000, about 4 years ago.
      CGI - Freelancer - Available for work

      www.dariuszmakowski.com - come and look



      • #4
        22k x 15k. I tiled it, so I don't think it counts since it wasn't a single render; it was split into 4 and was relatively quick.

        Retouching is more of a problem than rendering when working at this resolution.
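The tiling approach above can be sketched with a small region-splitting helper. This is a hypothetical illustration of the bookkeeping only; V-Ray's actual region-render settings are not shown:

```python
def tile_regions(width, height, nx, ny):
    """Split a full-resolution frame into nx * ny render regions,
    given as (left, top, right, bottom) pixel bounds, to be rendered
    separately and reassembled in post."""
    regions = []
    for j in range(ny):
        for i in range(nx):
            left = i * width // nx
            right = (i + 1) * width // nx
            top = j * height // ny
            bottom = (j + 1) * height // ny
            regions.append((left, top, right, bottom))
    return regions

# Four tiles for a 22000 x 15000 frame, as in the post above:
for r in tile_regions(22000, 15000, 2, 2):
    print(r)
```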



        • #5
          In the thread you directed me to, you say:

          Originally posted by svetlozar_draganov View Post
          Hello,

          It is tested and reproduced here:
          16000x11184 - was rendered fine
          16000x11185 - crash

          We have added this issue to our to do list.
          For rendering bigger resolution it is better to use "Render to V-ray raw image file" (vrimg) with Generate Preview ON and Render to memory frame buffer OFF.

          Although in V-Ray for Maya these appear to be different settings. Are you referring to the memory frame buffer feature, and should it be set to None, Full, or Preview? And should I just try to render the .vrimg via batch mode?



          • #6
            It would be nice to have some way to see the progress.

            Also, spec-wise, all systems are 32-core with 60 GB of RAM, and there are 10 slaves plus a master of the same spec that is not rendering.



            • #7
              You have to use "Preview" or "None" for larger images; the 16000x11184 limit is a limitation of the VFB.
              V-Ray developer
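To see why the memory frame buffer has to be set to "Preview" or "None" here, compare the per-element memory of the two resolutions from this thread against the 2^31-byte cap (a minimal Python sketch; 12 bytes per pixel is the figure vlado quoted earlier):

```python
# Per-element VFB memory check, using the 2**31-byte cap and the
# 12-bytes-per-pixel figure quoted earlier in this thread.
LIMIT = 2 ** 31  # bytes per render element

def fits_in_vfb(width, height, bytes_per_pixel=12):
    """True if one render element at this resolution fits under the cap."""
    return width * height * bytes_per_pixel <= LIMIT

print(fits_in_vfb(16000, 4000))   # the ~16Kx4K test render fits
print(fits_in_vfb(33600, 8550))   # the final mural does not
```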



              • #8
                OK, exactly what is the point of the buffer size parameter in the vrimg2exr GUI? It doesn't seem to have any effect. On a machine with 60 GB of RAM, every time I start the program it runs right up to 57 GB, runs for a bit, then crashes. I have tried several values, starting at 20 GB and working my way down to 2 GB, and every time memory usage goes to 57 GB, runs for a bit, then crashes. Also, how is this thing not multithreaded by now?!
                Last edited by 3Dmotif; 26-03-2014, 11:39 AM.



                • #9
                  Hm, it should be limiting the memory it uses. The default is 10 MB. Does it work if you use the default?

                  Can you tell us a few things:
                  Which build are you using?
                  How large is your image? How large are your buckets?
                  What command are you running?
                  Are there simple steps we can use to reproduce the problem?
                  V-Ray developer



                  • #10
                    I am using the GUI that pops up now.

                    The build is V-ray_adv_24501_maya2014_x64_24274.

                    The .vrimg file is a little over 6 GB. The buckets were rendered at 64 and gave no error on completion in batch mode, but when I tried to run it in Maya with the VFB set to Preview it crashed, and the file was the same size, so perhaps there is an issue with the saved .vrimg file. Re-rendering now. I also knocked out a bunch of buffers that I didn't really need.



                    • #11
                      Just a thought, but if vrimg2exr is going to be such a crucial part of the V-Ray workflow, it seriously needs some attention. It never really uses more than 1% of the CPU and never peaks more than one core at a time. A utility this important should be seriously optimized for multithreading.



                      • #12
                        As far as I know, the bottleneck for this tool is I/O performance, not the CPU, which is why it has not been made multithreaded.
                        I'll talk to a colleague who is working on it to see what he thinks about performance improvements.
                        V-Ray developer



                        • #13
                          I misled you in my previous post: my colleague is doing major improvements to vrimg2exr at the moment, which will be available in a future version. We'll notify you when you can test it.

                          A note about the buffer option: it controls the memory per channel, not the total memory, so if you have many channels, the total memory used for buffering will be the value you've passed multiplied by the number of channels.
                          V-Ray developer
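The per-channel semantics above multiply out quickly. A small illustration (the 4096 MB value appears later in this thread; the channel counts are hypothetical examples):

```python
# The buffer option is per channel, so total buffering memory is roughly
# the buffer size multiplied by the number of channels being converted.
def total_buffer_mb(bufsize_mb, num_channels):
    """Approximate total buffering memory in MB."""
    return bufsize_mb * num_channels

print(total_buffer_mb(4096, 3))   # 12288 MB for three channels
print(total_buffer_mb(2048, 10))  # 20480 MB with ten render elements
```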



                          • #14
                            I re-rendered it with only 3 channels: RGB, Specular, and Self-Illumination. It took less than an hour to render, and I began the conversion at 4:30 PM yesterday. It is now 10:12 AM the following day, and it is only 77% done with the RGB channel. At this rate, if I render the 33K image it is going to take over a week to convert. I don't see how this is acceptable performance under any circumstances. Right now I'm running with -bufsize 4096; would increasing that help with the speed? Honestly, translating it to a text-based format would probably be faster than this, and I highly doubt this is an I/O issue, seeing as I can push many times this amount of data per second.
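A rough back-of-the-envelope extrapolation of the figures in this post, assuming conversion time scales linearly with pixel count (an assumption, not a measurement; the ~16Kx4K test size comes from the opening post):

```python
# Extrapolate the reported conversion time: ~17.7 hours (4:30 PM to
# 10:12 AM the next day) got through 77% of the first of 3 channels.
elapsed_h = 17.7
progress = 0.77
channels = 3

per_channel_h = elapsed_h / progress
current_total_h = per_channel_h * channels  # ~69 h for the current image

# Scale from the ~16Kx4K test render to the 33600x8550 mural by pixel count.
scale = (33600 * 8550) / (16000 * 4000)
mural_total_h = current_total_h * scale

print(round(current_total_h, 1))
print(round(mural_total_h / 24, 1))  # days, well over the week feared above
```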



                            • #15
                              As I've said, I misled you in my previous post; it is not an I/O issue most of the time, but a combination of inefficiencies.

                              Can you tell us the exact command you're using?
                              Also, can you share a simple scene that could be used to reproduce the problem on our side?
                              A guide to setting one up would be welcome, too (bucket size, bucket order, resolution, image settings, etc.).

                              Also, can you post the exact build number you're using (it is printed in the output window after V-Ray starts)?
                              V-Ray developer

