This is the first time I've ever had a chance to use a slave machine for rendering. Here's the situation...
I followed all of the instructions in the V-Ray manual for setting up the DrSpawner on my primary machine and on our office's main server, which is the most powerful computer we have as a small company. Here is the rundown of a small test render, taken from the V-Ray Render Progress dialog:
Beginning render sequence
Preparing renderer...
Preparing scene for rendering.
[RenderView] startCameraTime=0.000000, endCameraTime=0.033333
[RenderView] numCameraTMs=2, numFrames=1, frameSamples=2
Initializing built-in VFB.
[RenderView] startCameraTime=0.000000, endCameraTime=0.033333
[RenderView] numCameraTMs=2, numFrames=1, frameSamples=2
Starting DR
Connected to render host (I.P. address)
Using 1 hosts for distributed rendering.
Subpixel color mapping is on: rendered result may have incorrect brightness.
Preparing camera sampler.
Preparing scene for frame.
Compiling geometry.
Preparing ray server.
Scene transferred to (I.P. address)
Server (I.P. address): Scene loaded; starting render.
Creating and initializing 2 thread(s).
Allocating memory for build data of 12588 faces (352464 bytes, 0.3 MB).
Initializing face build data.
Creating 'done' event.
Starting first thread.
Waiting for thread completion.
Releasing thread resources.
Server (I.P. address): Render completed.
Preparing faces for intersection tests.
SDTree statistics:
Total number of faces stored: 12588
Max tree depth: 48
Average tree depth: 17.0445
Number of tree nodes: 3085
Number of tree faces: 27832
Number of tree leafs: 1192
Average faces/leaf: 23.349
Memory usage: 1.30 MB
Scene bounding box is [-24,-20.25,0]-[24.25,22.5,16.25]
Preparing direct light manager.
Preparing global light manager.
Irradiance sample size is 84 bytes
Photon size is 56 bytes.
Light cache sample size is 120 bytes.
1 interpolation maps registered
Rendering interpolation maps with minRate=-4 and maxRate=-1
Setting up 2 thread(s)
Bitmap file "X:\Morgan\Rhino files\ASGVIS\Materials\HDRI\Aversis App_living.hdr" loaded.
Threads completed
Calling endPass() on irradiance maps
Calling prePassDone() on irradiance maps
Image prepass completed.
Rendering image.
Setting up 2 thread(s)
Threads completed
Waiting for DR to finish
Number of raycasts: 3561810
Clearing global light manager.
Clearing direct light manager.
Clearing ray server.
Clearing geometry.
Clearing camera image sampler.
Clearing camera sampler.
Clearing DMC sampler.
Clearing path sampler.
Clearing color mapper.
Closing DR
Render completed
With my very limited knowledge of all of this, it looks like the render was successfully distributed, which should have reduced the render time. However, it actually took approximately 4 seconds longer than rendering locally on my one machine. Any ideas? I'm completely self-taught in V-Ray and rendering (aside from all the fantastic webinars), so I'm sure there is something very simple I'm missing. Any help anyone could offer would be greatly appreciated. Thanks.
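For context, here is the rough arithmetic I'm assuming applies: distributed rendering pays a fixed up-front cost (transferring the scene to each host, starting the render there) before any buckets get rendered, so for a very small test scene that overhead can exceed the time saved by splitting the work. This is only a back-of-envelope sketch; all the numbers in it are made up for illustration, not measured from my scene.

```python
def local_time(render_work_s: float) -> float:
    """Time to render everything on one machine: just the work itself."""
    return render_work_s

def dr_time(render_work_s: float, hosts: int, overhead_s: float) -> float:
    """Time with distributed rendering: the work is split across hosts,
    but the scene transfer / host startup overhead is paid up front."""
    return overhead_s + render_work_s / hosts

# Hypothetical numbers for a tiny test render (assumed, not measured):
work = 10.0      # seconds of pure rendering for the scene
overhead = 6.0   # seconds to transfer the scene and spin up the host

print(local_time(work))            # 10.0
print(dr_time(work, 2, overhead))  # 11.0 -> DR is slower for a tiny job

# The same overhead becomes negligible on a long render:
print(dr_time(600.0, 2, overhead))  # 306.0 vs 600.0 locally
```

If this model is right, the ~4-second penalty would simply be the transfer/startup cost dominating a render that is too short to benefit from the extra machine.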