The benefits of course scale with the size of the source textures, and become exceptional with 8k and 16k sources.
But .TX will still wipe the floor with loading PNG or JPG files of any size, truth be told: the attached test, with 4x 8k TIFFs loaded in a material inside the material editor, shows a 5x speed increase in reading alone, which translates into an immensely quicker lookdev experience. And with mips already created, they also fly at rendertime when the textures are big enough.
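As a minimal sketch of the conversion step (assuming OpenImageIO's maketx CLI is on PATH; the folders, tile size, and filter options here are illustrative choices, not prescriptions):

```python
# Batch-convert source textures to tiled, mipmapped .TX with maketx.
# Paths are hypothetical; adjust globs and options to taste.
import subprocess
from pathlib import Path

SRC_DIR = Path("/mnt/textures/src")   # hypothetical source folder
OUT_DIR = Path("/mnt/textures/tx")    # hypothetical output folder

for src in SRC_DIR.glob("*.tif"):
    out = OUT_DIR / (src.stem + ".tx")
    # -u:            only regenerate if the source is newer than the .tx
    # --tile 64 64:  tiled layout, so renderers read only what they need
    # --oiio:        embed OpenImageIO-friendly metadata and hints
    subprocess.run(
        ["maketx", "-u", "--tile", "64", "64", "--oiio",
         str(src), "-o", str(out)],
        check=True,
    )
```

The mips are baked once here, which is exactly why the renderer never pays for filtering or full decodes at lookdev or render time.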
As with any piece-wise reading of data, the network should be optimised for the work actually being done.
I've had issues in the past with bad IT who thought all we did was stream videos ("See? Full speed for this 1GB file!"), when all we actually did was load hundreds-of-bytes-sized material (xMeshes, in the thousands per scene; those would have to fill the massive buffer IT had set up, and would invariably fail to load).
When IT knew the scope of the tasks at hand well, and set the network up accordingly, I had no problem dealing with centralised tiled EXRs, xMeshes, or V-Ray proxies at any scale of production (i.e. 300 slaves, huge environments, and so on).
Caching helps, but it's *not* mandatory, and it most definitely isn't an issue with the data type itself, but with the way one configures a LAN. (A truism, but it had to be stated.)
The LAN overhead is utterly negligible compared to a few slaves hammering the same non-tiled data off the same drive (I've seen Isilons cry, no less), so it's a nuanced issue, and should be considered as such. A sketch of that piece-wise access follows below.
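To make the piece-wise point concrete, here's a minimal sketch using OpenImageIO's Python bindings (the file path is hypothetical): a renderer shading one bucket can pull a single tile instead of decoding the whole image, which is exactly what a monolithic PNG/JPG forces you to do.

```python
# Read a single tile out of a tiled texture via OpenImageIO.
import OpenImageIO as oiio

inp = oiio.ImageInput.open("/mnt/textures/tx/wall_diffuse.tx")
if inp:
    spec = inp.spec()
    if spec.tile_width > 0:
        # Fetch just the tile at the image origin; the other ~16k pixels
        # never cross the wire or get decompressed.
        tile = inp.read_tile(0, 0, 0)
        print(f"fetched one {spec.tile_width}x{spec.tile_height} tile "
              f"out of a {spec.width}x{spec.height} texture")
    inp.close()
```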
If one's got enough pipeline that the translation and substitution of .TX files doesn't prove an obstacle, I maintain (with ample documented proof) that there are *huge* speed benefits to using .TX (in workflow and at rendertime, to varying degrees), and those benefits only scale bigger with the size of the company.
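The substitution step itself can be trivial. A minimal sketch of the idea, assuming the pipeline bakes .tx files next to the sources (resolve_texture is a hypothetical helper, not any renderer's API):

```python
# Prefer a pre-baked .tx sibling over the source texture, if one exists.
from pathlib import Path

def resolve_texture(path: str) -> str:
    """Return the .tx sibling of a texture path when it has been baked."""
    src = Path(path)
    tx = src.with_suffix(".tx")
    return str(tx) if tx.exists() else str(src)

# e.g. resolve_texture("/mnt/textures/src/wall_diffuse.tif")
# -> ".../wall_diffuse.tx" when the bake exists, else the original path
```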