Deep Pixels for Primary Samples

  • Deep Pixels for Primary Samples

Is it possible to save the primary image samples in deep pixels?

For example, I render a scene with a fixed rate of 4 subdivisions. That means every pixel is sampled 16 times.
But in the saved image I get only 1 RGB value per pixel. It would be great to get a deep image with 16 RGB values per pixel instead.

This would be a way to render passes like UV, normal or world position and still get real antialiasing.
At the moment the only alternative is to render at extremely high resolutions.
It would also make it possible to control the antialiasing interpolation in post.
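The "render at higher resolution, shrink in post" workaround described above can be sketched in plain Python. This is a conceptual illustration only; the function name and the nested-list image layout are assumptions for the example, not part of any renderer's API:

```python
def downsample(image, s):
    """Box-filter downsample: average each s*s block of subsamples
    into one output pixel.

    image: rows of per-channel tuples, rendered at s times the target
    resolution (e.g. s = 4 gives 16 subsamples per output pixel).
    """
    h, w = len(image) // s, len(image[0]) // s
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            # Collect the s*s subsamples that fall inside this pixel.
            block = [image[y * s + j][x * s + i]
                     for j in range(s) for i in range(s)]
            n = len(block)
            # Average channel-wise -- a simple box filter.
            row.append(tuple(sum(c) / n for c in zip(*block)))
        out.append(row)
    return out
```

A fancier filter (Gaussian, Mitchell) would weight the subsamples instead of averaging them, which is exactly the "controlling the AA interpolation in post" idea.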

Is there any way to achieve that in V-Ray 3.0? I tried a lot, but I don't think it's possible at the moment.

  • #2
V-Ray supports this, but if I understand properly what you are asking, there is no image format that can store such data...
    V-Ray/PhoenixFD for Maya developer



    • #3
      sounds like a great feature to have
      Dmitry Vinnik
      Silhouette Images Inc.
      ShowReel:
      https://www.youtube.com/watch?v=qxSJlvSwAhA
      https://www.linkedin.com/in/dmitry-v...-identity-name



      • #4
        Originally posted by Khazmadu View Post
Is there any way to achieve that in V-Ray 3.0?
        Yes, you can do that. (You don't really want to do it, but it's a different story.) Switch the deep pixel merge mode to "By Z-depth", and set the deep merge Z-depth threshold to 0.0. In the resulting deep OpenEXR file, you will get one deep point for every single image sample that V-Ray takes.

This would be a way to render passes like UV, normal or world position and still get real antialiasing. At the moment the only alternative is to render at extremely high resolutions.
No, this is not the only alternative. Deep images (deep OpenEXR) are designed exactly for this purpose. They don't require you to keep every single separate image sample, though; they work very well even if nearby deep points are merged into one deep point.
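As a rough illustration of what a "By Z-depth" merge with a threshold does, here is a conceptual sketch in Python. This is not V-Ray's actual implementation; the data layout is an assumption for the example:

```python
def merge_by_zdepth(samples, threshold):
    """Merge depth-adjacent deep samples of one pixel.

    samples: (z, value) tuples. Consecutive samples (in depth order)
    whose gap is below `threshold` collapse into one averaged deep
    point; with threshold 0.0 nothing merges, so every image sample
    survives as its own deep point.
    """
    out = []  # entries: (z, value, count)
    for z, v in sorted(samples):
        if out and z - out[-1][0] < threshold:
            pz, pv, n = out[-1]
            # Running average of depth and value for the merged point.
            out[-1] = ((pz * n + z) / (n + 1), (pv * n + v) / (n + 1), n + 1)
        else:
            out.append((z, v, 1))
    return [(z, v) for z, v, _ in out]
```

This shows why a threshold of 0.0 preserves one deep point per sample, and why a small positive threshold still keeps the depth structure while shrinking the file.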

        Best regards,
        Vlado
        I only act like I know everything, Rogers.



        • #5
          Originally posted by Morbid Angel View Post
          sounds like a great feature to have
No, it's a terrible idea, and an impossible one too: the 16 samples that V-Ray takes are not neatly arranged in a rectangular grid, so they cannot be written into a rectangular pixel array.

          There are reasons to look into the possibility of making V-Ray render at larger resolution internally but getting good AA for UV, Normal or WorldPosition passes is not one of those reasons. Deep OpenEXR images work just fine for that.

          Best regards,
          Vlado
          I only act like I know everything, Rogers.



          • #6
            Yes, you can do that. (You don't really want to do it, but it's a different story.) Switch the deep pixel merge mode to "By Z-depth", and set the deep merge Z-depth threshold to 0.0. In the resulting deep OpenEXR file, you will get one deep point for every single image sample that V-Ray takes.
Thanks, I will try that.

            No, this is not the only alternative. Deep images (deep OpenEXR) is designed exactly for that purpose. They don't require you to have every single separate image sample though - they work very well even if nearby deep points are merged into one deep point.

            Best regards,
            Vlado
How is that possible? In that case the deep image has only 1 sample per pixel, so how should I use it to get the subsampling information out of it?
What is the difference between a merged deep image and a normal non-deep rendering? I thought the only difference is that the deep image also has depth coordinates.



            • #7
The deep data contains multiple fragments with different depths. It doesn't matter whether the data is generated for transparent rays or for arbitrary subsamples in the same pixel. What you don't have is information about where exactly the samples are, so you can't use them for antialiasing. Like I said, I don't think there is an image format for this, since it would be huge and slow to handle. It could be done, of course, but someone would have to prove that it's worth the cost.
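A minimal sketch of how one pixel's deep fragments are actually used, assuming a simple (depth, color, alpha) layout (the layout is an assumption for illustration): the fragments composite front to back by depth, but carry no subpixel (x, y) positions, which is exactly why they cannot be re-filtered for antialiasing.

```python
def composite_deep(fragments):
    """Front-to-back 'over' compositing of one pixel's deep fragments.

    fragments: (z, color, alpha) tuples for a single grayscale channel.
    Only the depth ordering matters -- a deep pixel stores no subpixel
    positions, so the fragments can be re-composited, but not re-sampled.
    """
    color, alpha = 0.0, 0.0
    for _, c, a in sorted(fragments):       # nearest fragment first
        color += (1.0 - alpha) * c * a      # premultiply on the fly
        alpha += (1.0 - alpha) * a
    return color, alpha
```

This is the operation deep compositing tools perform per pixel; it works identically whether the fragments came from transparency or from merged subsamples.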
              V-Ray/PhoenixFD for Maya developer



              • #8
                Originally posted by Khazmadu View Post
I thought the only difference is that the deep image also has depth coordinates.
                No, this is not the only difference at all.

What is the difference between a merged deep image and a normal non-deep rendering?
There are many explanations around the web. In short, like Ivaylo mentioned, deep images store more than one sample per pixel.

                Best regards,
                Vlado
                Last edited by vlado; 13-05-2014, 03:29 AM.
                I only act like I know everything, Rogers.



                • #9
                  Originally posted by ivaylo.ivanov View Post
                  Like I said, I don't think there is an image format for this
There is - RPF. It allows you to store the "shape" of the fragments (a 16-bit mask that describes the portions of the pixel covered by the fragment). Unfortunately, almost no one reads this file format and almost no one generates it.
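Interpreting such a coverage mask is straightforward to sketch. Assuming each bit of the 16-bit mask marks one cell of a 4x4 subpixel grid (the exact bit layout here is an assumption for illustration, not the RPF specification):

```python
def coverage_fraction(mask):
    """Fraction of the pixel a fragment covers, from a 16-bit mask
    where each set bit marks one cell of a 4x4 subpixel grid."""
    return bin(mask & 0xFFFF).count("1") / 16.0
```

With per-fragment coverage like this, a compositor could weight fragments by the area they actually cover, recovering antialiased edges, which is what plain deep samples cannot do.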

                  Best regards,
                  Vlado
                  I only act like I know everything, Rogers.
