Multi-part DEEP EXRs ???


  • Multi-part DEEP EXRs ???

    Can we please get multi-part DEEP EXRs? This single part stuff is so slow to work with in Nuke.

    Was this fixed in VRay 7? Still using 6 in production. Thanks.

  • #2
    If I try to convert the files with the OpenEXR command-line tools (exrmultipart), I get this:

    Code:
    exrmultipart -convert -override 1 -i ChipFG_v14.0020.exr -o ChipFG_v14_multi.0020.exr
    input:
    ChipFG_v14.0020.exr
    output:
    ChipFG_v14_multi.0020.exr
    override:1
    
    -convert input to EXR2 multipart
    
    ERROR:
    Cannot initialize output part "0". Can't build a OutputFile from a type-mismatched part.
    
    Any thoughts?

    The input file is directly out of V-Ray 6 with these options:
    [attached screenshot: V-Ray 6 render output settings]


    Thanks.



    • #3
      Hmmm. It looks like exrmultipart needs all channels to be the same data type, and half and float are considered different types. So perhaps if I forced 32-bit float output to begin with...
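
      As a sanity check, the channel list is easy to read straight out of the EXR header, so here's a quick pure-Python sketch to see which channels are actually half and which are float (standard library only; it assumes a single-part file, which is what V-Ray writes here, and uses the example file name from above):

      Code:
      import struct
      
      PIXEL_TYPES = {0: "UINT", 1: "HALF", 2: "FLOAT"}
      
      def exr_channel_types(path):
          """Print each channel's pixel type from a single-part EXR header."""
          with open(path, "rb") as f:
              data = f.read()
          assert data[:4] == b"\x76\x2f\x31\x01", "not an EXR file"
      
          def read_string(pos):
              end = data.index(b"\x00", pos)
              return data[pos:end].decode("ascii", "replace"), end + 1
      
          pos = 8  # skip magic number and version field
          while data[pos] != 0:  # header attributes end at an empty name
              name, pos = read_string(pos)
              attr_type, pos = read_string(pos)
              size = struct.unpack("<i", data[pos:pos + 4])[0]
              value = data[pos + 4:pos + 4 + size]
              pos += 4 + size
              if name == "channels" and attr_type == "chlist":
                  cpos = 0
                  while value[cpos] != 0:  # channel list ends at an empty name
                      cend = value.index(b"\x00", cpos)
                      ptype = struct.unpack("<i", value[cend + 1:cend + 5])[0]
                      print(value[cpos:cend].decode("ascii", "replace"),
                            PIXEL_TYPES.get(ptype, ptype))
                      # name + null + pixel type + pLinear/reserved + x/y sampling
                      cpos = cend + 1 + 4 + 4 + 4 + 4
      
      exr_channel_types("ChipFG_v14.0020.exr")
      
      If the print-out shows a mix of HALF and FLOAT, that would line up with the type-mismatch error above.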

      Ugh. I don't need that. Maybe I could write PXR24 initially and then convert to ZIPS (for Nuke read speed), hoping that the low-order bits would already be zeroed out by the PXR24 pass and would compress away.

      Any further thoughts? Thanks.



      • #4
        Hmmm... again... Writing all 32-bit float data resulted in the same errors when trying to convert with exrmultipart.

        Is it possible to get multi-part DEEP files? Maxon seems to claim that Redshift does it (though I have not actually tried it). They also have a doc page that explains the samples nicely.

        https://help.maxon.net/c4d/en-us/Con...%2BTopics.html



        • #5
          Multi-part deep EXRs are not yet supported, I'm afraid. It would be best to submit your improvement idea on the Portal so other users can vote on it.
          Aleksandar Hadzhiev | chaos.com
          Chaos Support Representative | contact us



          • #6
            I am not sure which parts you'd like to read as multi.
            There is RGBA, and there is Depth.
            They are not of the same kind (deep data isn't a fixed number of samples per pixel) and need specific loaders to be extracted (e.g. Nuke's deep reader expects RGBAD only).
            In fact the EXR2 format makes multi-part and deep mutually exclusive.

            CORRECTION: the EXR 2.0 (or 3.0) spec allows for multi-part deeps in theory, but the reason no one writes or reads them is that the spec expects the alpha and deep data to be present *per part*.
            So, for example:
            Part 1: diffuse.RGB + AD
            Part 2: beauty.RGB + AD

            Now, you could put multiple AOVs per part, but you'd incur the penalty of an AD per part too, on top of those grouped AOVs having to be read together because they share the part.
            Which is why no one writes, or reads, multipart Deeps.

            For performance, at the very least make sure you are writing the deep as scanlines, to aid reads later and avoid loading it all into RAM.
            If you want the most performance out of your deep, then save the depth without the RGB (although the usual difference in data volume between the two makes this quite negligible).
            In Nuke, you'll then be able to recolor it using your standard multi-part, RGB-based EXR.
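
            As a minimal sketch of that recolor setup in Nuke's Python (placeholder paths; it's worth double-checking which DeepRecolor input is the deep stream and which is the colour one in your Nuke build):

            Code:
            import nuke
            
            # Depth-only deep render plus the standard multi-part RGB render.
            deep = nuke.nodes.DeepRead(file="/renders/shot/depth_only.####.exr")
            rgb = nuke.nodes.Read(file="/renders/shot/beauty_multipart.####.exr")
            
            # DeepRecolor recolours the deep samples from the flat image.
            recolor = nuke.nodes.DeepRecolor()
            recolor.setInput(0, deep)  # deep input
            recolor.setInput(1, rgb)   # colour input (verify the input order on the node)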
            Last but not least, the option in C4D is needed, otherwise "deep" is grayed out:
            [attached screenshot]
            Having other AOVs in the image while deep is active will not produce a multi-part file, but rather a file like the one below, as the EXR 2.0 format doesn't support that kind of layout:
            [attached screenshot]

            Lele
            Trouble Stirrer in RnD @ Chaos
            ----------------------
            emanuele.lecchi@chaos.com

            Disclaimer:
            The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



            • #7
              Interesting. Thanks for the info, guys.

              What I ended up doing was using Nuke to take the monolithic single-part file (deep plus ~30 other layers) and convert it into one shallow multi-part EXR with all the non-deep data, plus one deep EXR with only the deep data. This fixed my truly abysmal performance in Nuke (especially when net rendering non-localized files). I get 2-4x the performance this way vs. everything in one single-part file.
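
              Roughly, in script form, the split looks something like this (just a sketch: paths and frame range are placeholders, and flattening through DeepToImage is one way to get the shallow layers out of the deep stream; knob names as in recent Nuke versions):

              Code:
              import nuke
              
              src = "/renders/shot/ChipFG_v14.####.exr"  # monolithic single-part deep EXR
              
              # 1) Deep-only file: pass the deep channels straight through to their own EXR.
              deep_read = nuke.nodes.DeepRead(file=src)
              deep_write = nuke.nodes.DeepWrite(file="/renders/shot/split/ChipFG_v14_deep.####.exr")
              deep_write.setInput(0, deep_read)
              
              # 2) Shallow multi-part file: flatten the deep stream and write all the other layers.
              flatten = nuke.nodes.DeepToImage()
              flatten.setInput(0, deep_read)
              flat_write = nuke.nodes.Write(file="/renders/shot/split/ChipFG_v14_shallow.####.exr",
                                            channels="all", file_type="exr")
              flat_write.setInput(0, flatten)
              flat_write["interleave"].setValue("channels and views")  # layers get their own parts
              
              # Render both branches for the shot's frame range.
              nuke.execute(deep_write, 1001, 1100)
              nuke.execute(flat_write, 1001, 1100)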

              It would be nice if V-Ray could write the files like that. I know you can disable RGB in deep with V-Ray, but can you get the RGBA + other channels in one multi-part file, and the deep in its own file like that? That seems like the ideal, given how Nuke works with deep totally separately anyway.

              I know I can get deep in one file and separate files for all the other channels by using the two output options, but would really like just two EXRs per frame.

              Open to other solutions.

              Even if the Maxon docs are unclear on the multi-part bit, they do describe the deep options nicely.

              Thanks again.



              • #8
                Joelaff, I was talking to Vasko (resident format expert) about this, and he suggested trying out VRST instead.
                You will be able to render any AOV you like (Cryptomatte, for example; *not* back-to-beauty, add those manually instead), alongside the deep info (VRST is always deep, no need to check any boxes).
                Once that's done, you can convert it out using vrstconvert.exe (in the bin folder) into an EXR which will contain both the deep data and the AOVs.
                That will be loadable in Nuke with a deep loader, or a standard one.

                I have not gone through the performance testing yet, but the file layout is clean without visible data duplication.
                Worth a try, perhaps?

                We are, of course, also looking into writing the EXRs in that way directly now.
                Lele



                • #9
                  Thanks. I just might try that. I appreciate you looking further into this.

                  How would that differ from the EXRs written directly out of VRay with Deep enabled? Would the files created with vrstconvert.exe be multi-part with the Deep in one part and the Shallow (non-deep data) in the other?

                  I figured out the hard way that Back to Beauty doesn't work with deep files. But I had run into this a long time ago, so I knew that (and I think it's in the docs).


                  Still doing some further testing with what Nuke calls interleaving (which sounds like putting the bytes for the same scanline of each layer together in the file to improve read speed).

                  The Nuke docs say:
                  Code:
                  Interleaving is done between elements within a section (part) of .exr files. Interleaving channels, layers, and views is done when the whole .exr is written out in one large part. This is quicker for writing but it involves more work when reading only part of the data.
                  
                  For performance, you should interleave only those elements which are expected to be read together. If you're only reading some layers in each Read node, then it's best not to interleave the layers.
                  
                  If you expect to read all the layers in each Read node, then you can optimize write performance by bundling everything together. Generally, there are a lot of layers in .exr file so this control can make a big difference to both read and write performance.
                  Which makes me think the ideal setting is to interleave only views, and not layers, as most of the time we only use 30-60% of the layers (light selects, AOVs, etc.), not every layer. Does this sound right in your experience? ChatGPT seems to think this is the case as well.
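
                  If I'm reading that right, the corresponding setting would be something like the line below (a sketch; the 'interleave' knob and its option strings are as they appear on recent EXR Write nodes, so worth verifying in your Nuke version):

                  Code:
                  import nuke
                  
                  # "channels, layers and views" -> one big part (fast writes, slower partial reads)
                  # "channels and views"         -> one part per layer, views kept together
                  # "channels"                   -> one part per layer per view
                  nuke.toNode("Write1")["interleave"].setValue("channels and views")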

                  I wonder if my speed boost came just from separating the deep from the shallow data. In that test I had interleaving set to per-view (and I only have one view, so essentially off, it seems). But I did have separate deep and shallow files, and it made a huge speed difference when net rendering. It had very little effect on the workstation (also used in Deadline), which was, I assume, using the localized copies of the files rather than the network.



                  • #10
                    The converted file would be able to carry deep data and any other shallow layer, possibly read as multi-part.
                    Lele



                    • #11
                      Originally posted by ^Lele^:
                      The converted file would be able to carry deep data and any other shallow layer, possibly read as multi-part.
                      I guess I'll have to try it and run exrinfo on it to see whether it is multi-part. Thanks.
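
                      (For a quick check without exrinfo: the multi-part flag is just a bit in the EXR version field, so something like this pure-Python sketch should do, with the flag values as documented in the OpenEXR file-layout notes and a placeholder path:)

                      Code:
                      import struct
                      
                      def exr_flags(path):
                          """Report multi-part / deep / tiled flags from an EXR's version field."""
                          with open(path, "rb") as f:
                              magic, version = struct.unpack("<ii", f.read(8))
                          assert magic == 20000630, "not an EXR file"
                          print("multi-part:", bool(version & 0x1000))
                          print("deep (non-image) parts:", bool(version & 0x800))
                          print("tiled single-part:", bool(version & 0x200))
                      
                      exr_flags("ChipFG_v14_multi.0020.exr")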



                      • #12
                        Originally posted by Joelaff:

                        I guess I'll have to try it and run exrinfo on it to see whether it is multi-part. Thanks.
                        Yes, I've only had a chance to test it cursorily so far; I'll work with it more in the coming week.
                        However, I'm more interested in your report on whether it works as expected with production data (mostly for the deeps; plain pixels we can all handle easily), and what performance you see.
                        Anything I come up with synthetically won't quite be the same.
                        Lele

