It doesn't support deep images right now. It seems to have an OFX API; all the docs and examples are in Composite/ofx.
Anyway, I don't know if Autodesk first has to give Toxik deep image support, and maybe even OFX access to it?
Nope; we are still in the research phase for this and the current implementation is not very user-friendly. We should be able to include it in the spring service pack though.
Best regards,
Vlado
Hi Vlado, is there an update on the progress of the plugin?
Yes, in fact that part is done now. It won't make it into the official service pack (too many changes), but you can email me at vlado@chaosgroup.com if you want to try it out.
Working fine with Maya and Nuke here. Just a quick question, Vlado.
Is there any way to write only deep data into the .vrst file? I don't want it to include other passes like normals, RGB or GI. I want the RGB to act like an alpha channel and to have a deep channel, nothing more.
That way I could combine an .exr with a DeepRead into a DeepRecolor.
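For anyone unfamiliar with what a DeepRecolor does conceptually, here is a minimal pure-Python sketch (my own illustration, not Nuke's or V-Ray's actual implementation, and the sample layout is an assumption): it spreads a flat premultiplied pixel color across deep alpha samples so that flattening the deep pixel reproduces the original flat pixel.

```python
def flatten(samples):
    """Front-to-back 'over' of deep samples (z, premult_rgb, alpha)."""
    out = [0.0, 0.0, 0.0]
    acc_a = 0.0
    for _, rgb, a in sorted(samples, key=lambda s: s[0]):
        t = 1.0 - acc_a                          # remaining transmittance
        out = [o + c * t for o, c in zip(out, rgb)]
        acc_a += a * t
    return out, acc_a

def deep_recolor(flat_rgb, flat_alpha, depth_samples):
    """Distribute a flat premultiplied color over (z, alpha) deep samples.

    Giving sample i the premultiplied color C * a_i / A makes the
    flattened result equal the original flat pixel again.
    """
    if flat_alpha <= 0.0:
        return [(z, (0.0, 0.0, 0.0), a) for z, a in depth_samples]
    return [(z, tuple(c * a / flat_alpha for c in flat_rgb), a)
            for z, a in depth_samples]
```

For example, recoloring two half-transparent samples with a flat pixel of premultiplied color (0.6, 0.3, 0.0) and alpha 0.75, then flattening, gives back exactly that pixel.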
This will not be true deep data, though, unless you specifically want z as deep only. The idea is to have the corresponding RGB fragments for each of the z fragments; if you don't have them, a DeepMerge or deep-aware DOF won't be able to avoid artifacts as expected.
This is somewhat different (i.e. less) than what you get out of V-Ray. The prman method, as described on that page, generates deep data, but not sub-pixel data. This makes it impossible, for example, to separate the contributions of two objects, as colors along the edges where the objects overlap cannot be separated out of the RGB channel. So you wouldn't be able to apply color correction to just one object, for example. V-Ray, on the other hand, generates separate fragments for each of the objects, complete with all the render elements for each object along with the deep data. This gives a lot of freedom to do things that would otherwise require generating separate masks or holdouts for each object.
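To make that concrete, here is a small sketch of the per-object-fragment idea (the sample layout and object-id tagging are my own assumptions for illustration, not V-Ray's actual .vrst format): because each object contributes its own deep fragments, you can grade one object after the render and re-flatten, which a single merged RGB channel cannot do.

```python
def flatten(samples):
    """Front-to-back 'over' of deep fragments (z, premult_rgb, alpha, obj_id)."""
    out = [0.0, 0.0, 0.0]
    acc_a = 0.0
    for _, rgb, a, _ in sorted(samples, key=lambda s: s[0]):
        t = 1.0 - acc_a                          # remaining transmittance
        out = [o + c * t for o, c in zip(out, rgb)]
        acc_a += a * t
    return out

def grade(samples, obj_id, gain):
    """Apply a simple color gain to the fragments of one object only."""
    return [(z, tuple(c * gain for c in rgb) if oid == obj_id else rgb, a, oid)
            for z, rgb, a, oid in samples]
```

With a half-transparent red fragment from object "A" in front of an opaque blue fragment from object "B", grading only "A" doubles the red contribution while B's contribution through the overlap stays untouched.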