  • Could not allocate device buffer with size...

    Wonder if anyone can shed some light on this issue. More and more often I'm getting the above error - this time with an extremely simple 30 MB file containing two or three materials and almost nothing of any complexity like furniture or models. Lighting is just a few rectangular lights - no dome lights or arrays of dozens of IES files, etc. It's a building exterior with plain, unsmoothed walls and some glazing. That's about it. The weird thing is that the issue is intermittent: I move the camera around a little and the error doesn't occur. It seems random.

    The error says:

    [VUtils::SimpleGPUBuffer::init] 2: Could not allocate buffer of size 8347MB!
    [MemoryManagerGpu::setupManagedMemoryInstance] Could not allocate device buffer with size: 2170697984 bytes

    This is with RTX rendering using V-Ray GPU (BF and LC for GI).
    I have 64 GB of system RAM and an RTX 3080 Ti with 12 GB of GDDR6X memory.

    Thoughts? Is V-Ray just getting worse at handling simple files? Or have I perhaps somehow introduced something odd into the file that doesn't contribute to file size (30 MB is piddlingly small) but cripples V-Ray's ability to process it?

    Also, if my GPU has 12 GB of memory, so what if my file needs 8.3 GB of that? Surely I shouldn't get this error until I need 12 GB of GPU RAM?
    Also, why does V-Ray need to load the entire scene into the GPU's memory in one go? If it needs 40 GB of GPU memory, why doesn't it optimise the file and process it in chunks: 5 GB first, then another 5 GB, etc.? It seems odd to have a limitation based on GPU memory when most folks would happily accept a longer render time to allow older GPUs to incrementally render complex scenes - I know I would. And why not give us users the choice?


    P.S. I have no issues like this with other rendering software or the Vectorworks and Revit files I work with every day, which are vastly more complex than this shoebox model.

    Thanks for any help.
    Andy
    Attached Files

  • #2
    Hello andy_smith,

    Could not allocate buffer of size 8347MB!
    The error indicates that the current scene requires an additional 8.3 GB of GPU memory.

    Could you please share the file with us so we can test it?
    If that's alright, please open a ticket using this submit request form and upload the archived scene to the ticket. Please make sure to archive it with the Pack Project tool (Extensions > V-Ray > Pack Project tool).

    Apart from that, please share:
    • A screenshot of the About window in V-Ray for SketchUp, to confirm the exact build you are using.
    • The currently installed GPU driver version. Please note that the latest recommended version is 531.41; if you have a different driver installed, please install version 531.41.
    I can see from your screenshot that you have LightMix added, and I wanted to point out that LightMix, like other render elements, requires memory. If LightMix is set to Group by Individual Lights, try changing it to Group Instances. You are also welcome to check this article for more information about GPU optimizations that may be of help.

    Also, why does V-Ray need to load the entire scene into the GPU's memory in one go? If it needs 40 GB of GPU memory, why doesn't it optimize the file and process it in chunks: 5 GB first, then another 5 GB, etc.? It seems odd to have a limitation based on GPU memory when most folks would happily accept a longer render time to allow older GPUs to incrementally render complex scenes - I know I would. And why not give us users the choice?
    As for this question: there is an experimental option in V-Ray GPU that enables loading geometry and textures on demand (Out of Core (WIP)). This slows down rendering, as expected, but it allows V-Ray GPU to offload some of the data to system memory on demand (in V-Ray for SketchUp this option is available in CUDA mode). In general, using CUDA mode will save GPU memory compared to RTX mode.
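    To give a rough sense of why an 8.3 GB allocation can fail on a 12 GB card: the display, driver, and already-resident scene data consume VRAM before the render buffer is requested, so the free memory remaining can be smaller than the request. Here is a back-of-the-envelope sketch; the overhead figures are illustrative assumptions, not values reported by V-Ray.

```python
# Rough VRAM budget sketch. The overhead numbers below are
# illustrative assumptions, not values reported by V-Ray.
GIB = 1024**3

total_vram = 12 * GIB             # RTX 3080 Ti
os_and_display = 1 * GIB          # OS / compositor / viewport (assumed)
resident_scene_data = 3 * GIB     # textures and geometry already uploaded (assumed)

requested = 8347 * 1024**2        # "Could not allocate buffer of size 8347MB!"
free_vram = total_vram - os_and_display - resident_scene_data

print(f"free:      {free_vram / GIB:.2f} GiB")
print(f"requested: {requested / GIB:.2f} GiB")
print("allocation fails" if requested > free_vram else "allocation fits")
```

    With even modest assumed overheads, the 8347 MB request no longer fits, which would also explain why the error comes and goes: the amount of already-resident data changes between renders.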

    Also, use the Progressive sampler mode in V-Ray GPU, which uses less VRAM overall, and make use of V-Ray's smart sampling and scene adaptivity.

    Hope it helps.

    Attached images: V-Ray_About_SU.jpg, LightMix.jpg, CUDA_out-of-core.jpg
    Natalia Gruzdova | chaos.com
    Chaos Support Representative | contact us



    • #3
      GPU driver is 531.41
      Build Details attached.
      Project packed and uploaded.

      Thanks,
      Andy
      Attached Files



      • #4
        Thank you, andy_smith, for sending your scene in a ticket; we will continue the communication there for now.
        Natalia Gruzdova | chaos.com
        Chaos Support Representative | contact us



        • #5
          I wanted to share that in this particular scene the issue was caused by displacement applied to very small geometry with many faces and details.
          This resulted in very long Compiling geometry times at the start of the render (with both the CPU and GPU render engines) and consequently required a lot of memory to render.

          Disabling this Displacement solved the issue in this scene.

          Tip: Displacement can be disabled globally in the V-Ray Asset Editor > Switches > Displacement. This can be useful for test purposes.
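          As a rough illustration of why displacement on dense geometry is so memory-hungry, here is a sketch of tessellation growth; the 4x-per-level subdivision factor and the per-triangle byte cost are generic assumptions for illustration, not V-Ray's actual tessellator.

```python
# Illustrative displacement-tessellation growth. The 4x-per-level
# subdivision and the per-triangle byte cost are generic assumptions,
# not V-Ray internals.
base_triangles = 200_000      # triangles before displacement (assumed)
levels = 4                    # subdivision levels (assumed)
bytes_per_triangle = 100      # vertices, normals, UVs, BVH data (assumed)

displaced = base_triangles * 4 ** levels
memory_gib = displaced * bytes_per_triangle / 1024**3

print(f"{displaced:,} micro-triangles, ~{memory_gib:.1f} GiB")
```

          Under these assumptions a 200k-triangle mesh already explodes into over 50 million micro-triangles, which is why disabling displacement (or lowering its subdivision) can resolve an out-of-memory error without touching anything else in the scene.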

          Natalia Gruzdova | chaos.com
          Chaos Support Representative | contact us
