lightcache and memory usage

  • lightcache and memory usage

    Is it correct that the light cache uses more memory when the sample size decreases? Is this the same for progressive path tracing? I tried a progressive path tracing render with a sample size of 0, but then memory usage keeps increasing until I have to cancel the rendering because it starts swapping to the hard drive.
    Any thoughts on this?
    You can contact StudioGijs for 3D visualization and 3D modeling related services and on-site training.

  • #2
    Yeah, it is. If you render with just the light cache and view each of the 'polygon'-shaped artifacts as being one sample, then when you decrease the sample size, it needs to use more samples.
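As a back-of-the-envelope illustration (this is a toy model, not V-Ray's actual algorithm), each screen-space light cache sample covers roughly sample_size × sample_size pixels, so the sample count grows with the inverse square of the sample size:

```python
# Toy estimate of screen-space light cache sample count vs. sample size.
# Illustration only -- not V-Ray's real implementation.

def estimated_sample_count(width, height, sample_size):
    """Each sample covers roughly sample_size x sample_size pixels,
    so the count scales with the inverse square of the sample size."""
    if sample_size <= 0:
        raise ValueError("sample size must be positive for a stored cache")
    return (width * height) / (sample_size ** 2)

# Halving the sample size roughly quadruples the sample count (and memory):
full = estimated_sample_count(1920, 1080, 16.0)
half = estimated_sample_count(1920, 1080, 8.0)
print(full, half, half / full)  # ratio is 4.0
```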
    Dave Buchhofer. // Vsaiwrk

    • #3
      You have to give the samples a size... surely 0 means no samples... but V-Ray will just eat your RAM until POP.
      Natty
      http://www.rendertime.co.uk

      • #4
        When the sample size is 0.0, V-Ray should not create any samples for the light cache at all. If it does, it's probably a bug. Are you sure the sample size is really 0.0 and not just a very small number? (It may be truncated when displayed, depending on your spinner precision.)

        Best regards,
        Vlado
        I only act like I know everything, Rogers.

        • #5
          Actually I did not say that I was using 0 for lightcache as GI method, but for progressive path tracing. In the manual it says 0.0 will create an unbiased solution. But is there a rule of thumb with regard to memory usage and sample size for lightcache GI and/or progressive path tracing?

          • #6
            Originally posted by Gijs
            Actually I did not say that I was using 0 for lightcache as GI method, but for progressive path tracing.
            Yep, this is what I meant too. It's the same thing, basically.
            In the manual it says 0.0 will create an unbiased solution.
            That's correct. And in that case, V-Ray should not actually store any samples in memory - everything will be computed from scratch for each light bounce. I will check this just in case something went off in the build you have.
            But is there a rule of thumb with regard to memory usage and sample size for lightcache GI and/or progressive path tracing?
            Not really; similar to photon mapping, there is no way of knowing in advance how many samples you will get in the end. Increasing the sample size reduces memory usage, and vice versa. The exception is that a sample size of 0.0 does not use any memory at all for the light cache.
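Vlado's rule above (smaller sample size means more memory, and 0.0 means no stored cache at all) can be sketched as a hypothetical memory estimator. The bytes-per-sample figure is an assumed number for illustration, not V-Ray's actual storage cost:

```python
def estimated_cache_memory_mb(width, height, sample_size, bytes_per_sample=64):
    """Hypothetical light cache memory estimate (illustration only).

    bytes_per_sample is an assumed figure, not V-Ray's real storage cost.
    A sample size of 0.0 means the unbiased mode: nothing is stored, so
    the cache costs no memory (every bounce is recomputed from scratch).
    """
    if sample_size == 0.0:
        return 0.0  # unbiased mode: no samples stored at all
    samples = (width * height) / (sample_size ** 2)
    return samples * bytes_per_sample / (1024 ** 2)

# Halving the sample size quadruples the (estimated) memory footprint:
print(estimated_cache_memory_mb(1920, 1080, 16.0))  # ~0.49 MB
print(estimated_cache_memory_mb(1920, 1080, 8.0))   # ~1.98 MB
print(estimated_cache_memory_mb(1920, 1080, 0.0))   # 0.0
```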

            Best regards,
            Vlado

            • #7
              So if it's not saved in memory, does that mean you can't save an unbiased solution? Meaning Maxwell can't save their lighting solutions to distribute over a network? Hmmm

              Hehe, I like Vlado's signature: "The bug hunt continues... If you find a bug - report it." I expect to see an episode of The Crocodile Hunter saying "today we are hunting bugs, and crikey, this bloke here knows how to squish bugs".

              ---------------------------------------------------
              MSN addresses are not for newbies or warez users to contact the pros and bug them with
              stupid questions the forum can answer.

              • #8
                Originally posted by Da_elf
                So if it's not saved in memory, does that mean you can't save an unbiased solution? Meaning Maxwell can't save their lighting solutions to distribute over a network? Hmmm
                Not exactly. You can save the final images from the different machines and then blend them together to reduce the noise. You can do that even with V-Ray, but there is a small trick to that - you need to uncheck the "Time-independent" option for the QMC sampler and then render different frames on different machines - just like an animation with Backburner. Then you can blend all the images together for the final result. This is more or less what Maxwell does for its cooperative rendering.
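The blending Vlado describes is just a per-pixel average of the independently rendered frames: because each machine's noise is statistically independent (which is what unchecking "Time-independent" achieves), averaging N frames cuts the noise by roughly √N. A minimal sketch with simulated pixel values (the noise model and all the numbers are made up for illustration):

```python
import random

def blend_frames(frames):
    """Average independently rendered noisy frames, pixel by pixel."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

# Simulate 16 noisy renders of a flat grey image (true pixel value 0.5).
# Each "machine" gets independent noise, as with 'Time-independent' off.
random.seed(1)
true_value = 0.5
frames = [[true_value + random.gauss(0, 0.1) for _ in range(1000)]
          for _ in range(16)]

blended = blend_frames(frames)

def rms_error(pixels):
    """Root-mean-square deviation from the known true pixel value."""
    return (sum((p - true_value) ** 2 for p in pixels) / len(pixels)) ** 0.5

# Averaging 16 independent frames cuts the noise by roughly sqrt(16) = 4x.
single_err = rms_error(frames[0])
blend_err = rms_error(blended)
```

This is also why the frames must be rendered with different sampling patterns: averaging 16 identical frames would reproduce the same noise exactly.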

                Best regards,
                Vlado

                • #9
                  I ran some tests again, but can't reproduce it.

                  So if I understand things correctly, when having memory problems with the light cache, you should either increase the sample size or use PPT with a sample size of 0, right?

                  • #10
                    Oh, and while I'm at it, this also means that if you want to save the PPT light cache for future use and the sample size is 0, you should check "save to disk", in contrast with the normal light cache, where you have the option to save it after rendering?
