Anyone who saw my previous post about my new workstation will know I'm experimenting with 10Gb InfiniBand interconnects (point-to-point between 2 machines at the moment; if it proves reliable then I'll get a switch).
I'm now regularly maxing out the SSD in each test machine, sending over 500MB/sec using the IP over InfiniBand protocol (IPoIB).
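In case anyone wants to reproduce that, this is roughly how I sanity-check the raw link speed independently of the disks - a minimal Python sketch (port number and addresses are just placeholders) that pushes zeros over a plain TCP socket on the IPoIB interface and reports MB/sec, with no storage involved at all:

import socket, time

PORT = 5201               # arbitrary free port
CHUNK = 4 * 1024 * 1024   # 4 MB send/receive buffer

def receiver(bind_addr="0.0.0.0"):
    # run this on one machine; it counts bytes until the sender disconnects
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((bind_addr, PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    total, start = 0, time.time()
    while True:
        data = conn.recv(CHUNK)
        if not data:
            break
        total += len(data)
    elapsed = time.time() - start
    print(f"received {total / 1e6:.0f} MB at {total / elapsed / 1e6:.0f} MB/sec")

def sender(peer_addr, seconds=10):
    # run this on the other machine, pointing at the receiver's IPoIB address
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect((peer_addr, PORT))
    payload = b"\0" * CHUNK
    deadline = time.time() + seconds
    while time.time() < deadline:
        sock.sendall(payload)
    sock.close()

If that reports somewhere near line rate, the IPoIB side is fine and any bottleneck is elsewhere.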
My initial tests with DR seemed promising, but I've just tested it with an extremely heavy scene and I'm getting some odd results.
The network monitor shows that while preparing to render I get no more than 0.3% network utilisation, and the host machine finishes all the lighting calculations before the slave kicks in.
Now, since both machines are pulling gigabytes of proxies and maps from a standard mechanical HDD, I'd expect some bottlenecks there, but a 3MB/sec transfer rate is a bit embarrassing.
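My guess (and it's only a guess - the numbers below are typical 7200rpm figures, not measurements of my drive) is that two machines pulling lots of small, scattered reads turns the HDD into a seek-bound device, and then single-digit MB/sec is about what you'd expect:

# rough seek-bound estimate - typical 7200rpm figures, not measured values
random_iops = 120          # random reads/sec a 7200rpm disk can manage
avg_read_kb = 64           # assumed average request size for proxy/map chunks
mb_per_sec  = random_iops * avg_read_kb / 1024
print(f"~{mb_per_sec:.1f} MB/sec when the access pattern is mostly seeks")
# two machines competing for the same spindle only makes the seeking worse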
Considering that rate, it actually started rendering surprisingly quickly (after about 5-6 mins).
But I don't really understand how the DR system sends and receives data. Surely a large part of it would go directly from the memory of the host to the slave and vice versa (for example the buckets being rendered, the imap data, etc.), and for that data I'd expect to max out my network connection, at least momentarily?
However, the most I saw even after starting the render was 2% usage (20 MB/sec), and now that it's right in amongst the buckets, the network usage is around 0.01%.
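As a back-of-the-envelope check on why the bucket phase barely registers (the bucket size, pixel format and rate below are my own assumptions, not anything from the docs): even if the slave returned a couple of hundred finished buckets per second, the image data itself is tiny next to a 10Gb link:

# rough estimate of bucket traffic - bucket size, format and rate are assumptions
bucket_px     = 64 * 64              # a 64x64 bucket
bytes_per_px  = 4 * 4                # RGBA at 32-bit float
buckets_per_s = 200                  # generous guess for two fast machines

bucket_bytes  = bucket_px * bytes_per_px             # ~64 KB per bucket
traffic_mbps  = buckets_per_s * bucket_bytes / 1e6   # ~13 MB/sec
link_mbps     = 10e9 / 8 / 1e6                       # nominal 10Gb/sec ~ 1250 MB/sec

print(f"bucket traffic ~{traffic_mbps:.0f} MB/sec, "
      f"~{100 * traffic_mbps / link_mbps:.1f}% of the link")

So even a very optimistic bucket rate only adds up to around 1% of the link, and a heavy scene finishes far fewer buckets per second than that - which makes the 0.01% I'm seeing less mysterious, even if the slow preparation phase still is.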
Basically I have no idea whether a) it's the whole InfiniBand thing that's causing problems, b) it's the HDD I'm streaming from (could this be improved? i.e. load the data on the host machine, then send it directly from RAM to the slaves? They would kick in a lot faster in that case), or c) it's just the way DR works and I'll never see it utilising a fast network connection. If it's the last one, that's a real shame, as I had hoped a fast pipe between the machines would bring me closer to the responsiveness of a dual-CPU machine.
I understand DR is designed to avoid network bottlenecks, but it would be nice if I could adjust some settings to take advantage of my fat pipe - at those speeds I'd get just as good a result over my ADSL connection...