Sure,
So just to give you a general idea:
We have a farm of 50 nodes with 32 GB of RAM each.
We have been working on an ongoing project for years, and the assets get bigger every month.
We are now at a stage where a render takes between 29 and 31 GB of RAM.
So for the actual render part:
We often need to render high-resolution images of 8000x6000 px, with loads of multimattes to tweak in post.
The V-Ray VFB takes 29 GB of RAM at 8000x6000 px with all the passes.
As you can imagine, 30 GB of RAM for the render plus 29 GB for the VFB doesn't fit on the nodes we send the render to.
I know the spawners don't use the VFB's RAM, only the master Max doing the rendering does, but we send the render through BB and it lands on nodes that have only 32 GB, not 64 GB.
So my trick is to save out to VRimg, then load that VRimg into a fresh Max scene (through the V-Ray frame buffer history).
That way I break the RAM needed into two stages (render | saving), and each stage fits in the RAM we have (32 GB).
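A quick back-of-the-envelope check of why splitting helps, plugging in the figures above (the 30/29 GB numbers are from this project; the exact peaks will vary per scene):

```python
NODE_RAM_GB = 32   # RAM per farm node
RENDER_GB = 30     # peak RAM while rendering the scene
VFB_GB = 29        # RAM the VFB needs to hold all passes at 8000x6000

# Single stage: rendering and holding the full-res VFB in one Max session
single_stage = RENDER_GB + VFB_GB
assert single_stage > NODE_RAM_GB   # 59 GB: doesn't fit on a 32 GB node

# Two stages: render to VRimg on disk, then load it in a fresh scene to save
two_stage_peak = max(RENDER_GB, VFB_GB)
assert two_stage_peak <= NODE_RAM_GB  # 30 GB peak: each stage fits
```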
After that, I use the multi-save button to save every pass out to EXR...
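If clicking through the multi-save dialog gets tedious, the EXR step can in principle be batched with the `vrimg2exr` command-line tool that ships with V-Ray. A minimal sketch, assuming `vrimg2exr` is on the PATH and that its `-half` flag (16-bit float output) behaves as in its docs; check your install's `vrimg2exr -help` for the exact flags:

```python
import shutil
import subprocess
from pathlib import Path

def vrimg_to_exr_cmd(vrimg_path, exr_path, half_float=True):
    """Build the vrimg2exr command line for one file (flags assumed from the tool's docs)."""
    cmd = ["vrimg2exr", str(vrimg_path), str(exr_path)]
    if half_float:
        cmd.append("-half")  # 16-bit float output to shrink the EXRs
    return cmd

def convert_folder(folder):
    """Convert every .vrimg in a folder; skip quietly if the tool isn't installed."""
    if shutil.which("vrimg2exr") is None:
        return []
    converted = []
    for vrimg in sorted(Path(folder).glob("*.vrimg")):
        subprocess.run(vrimg_to_exr_cmd(vrimg, vrimg.with_suffix(".exr")), check=True)
        converted.append(vrimg.name)
    return converted
```

That keeps the conversion off the render nodes entirely, so it can run on any spare machine overnight.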
With images and projects this big, both eating so much RAM, I'd rather export to VRimg just in case: even if Max crashes, we at least recover part of the render and can then re-render only what's missing when we're on a tight deadline (if not, I just resend the whole thing).
In any case, that's the only way I've found to work efficiently and handle such big jobs; if there is another way to speed this workflow up, I'd be happy to hear it.
So if the multithreading has been implemented only at render time, then yes, I understand why I don't see any improvement.
Thanks
Stan