How to filter the rendering, but without anti-aliasing?

  • How to filter the rendering, but without anti-aliasing?

    I wasn't sure how to name the thread, so let me explain what I want to achieve:

    Google has open-sourced a technology called Seurat, which allows baking a "volume" of pre-rendered 360-degree panoramas (plus their depth buffers) down to a low-poly mesh. This allows viewing high-quality rendered 3D scenes even on mobile devices (especially VR headsets), and even allows a limited amount of moving around in the scene (hence rendering a "volume") on 6-degrees-of-freedom VR headsets.

    Now, the Seurat technology requires the images used for the calculation to be rendered without anti-aliasing, so basically setting V-Ray's sampler subdivs to 1. The problem with this is, of course, that rendering this way creates an extremely noisy image.

    Is there a good way to render an image that gets the proper amount of subdivisions to render noise-free, but still has no anti-aliasing (so that each pixel of the rendering clearly has a corresponding pixel in the unfiltered Z-pass)?
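
    To make the Z-pass concern concrete, here is a toy example (plain NumPy, nothing V-Ray specific): if the Z-pass were anti-aliased, depth values at object edges would become blends of foreground and background, i.e. depths at which no surface actually exists.

    Code:
    import numpy as np

    # Toy depth edge: foreground at z = 1.0, background at z = 10.0,
    # seen by four sub-pixel samples inside one pixel.
    subsample_depths = np.array([1.0, 1.0, 10.0, 10.0])

    filtered_z = subsample_depths.mean()  # anti-aliased Z: 5.5, a depth where no surface exists
    unfiltered_z = subsample_depths[0]    # one sample per pixel: 1.0, a real surface depth
    print(filtered_z, unfiltered_z)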

  • #2
    turn off filtering, leave the sampling as usual.
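
    For reference, this can also be set from a script. A rough pymxs sketch; the filter_on property name is an assumption based on the V-Ray MaxScript interface, so verify it against your V-Ray build (showProperties lists what is actually exposed):

    Code:
    # Rough sketch, 3ds Max Python (pymxs). The property name is assumed; verify it.
    from pymxs import runtime as rt

    vray = rt.renderers.current   # assumes V-Ray is the active production renderer
    rt.showProperties(vray)       # prints the renderer's scripted properties
    vray.filter_on = False        # turn off the image (anti-aliasing) filter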
    Marcin Piotrowski
    youtube


    • #3
      That's very interesting! But they don't have a generation script/plugin for Max, right? I see only a Maya script.
      German guy, sorry for my English.


      • #4
        Originally posted by piotrus3333 View Post
        turn off filtering, leave the sampling as usual.
        I guess I shouldn't have used the term "anti-aliasing", as this has nothing to do with V-Ray's image filter, but after all that's what it's called. I want an image that's basically "aliased", i.e. no anti-aliasing between pixels at all, but still have the light and reflection samples in each pixel be more than just 1.

        I've found a way, which is not ideal, and that's setting Min subdivs to 1, disabling Max subdivs (at this point this would lead to a very noisy image), but then activating "use local subdivs". While this does give me basically what I want (clean light/shadow, reflection/refraction subdivs with no anti-aliasing between pixels) it's not ideal, since it uses the same number of subdivs all across the image, doesn't it? Ideally I'd still want to use the speed increase of adaptive sampling... any way to do that?
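
        Conceptually, the sampling scheme I'm after looks something like this sketch (plain Python, just to illustrate the idea, not how V-Ray works internally): exactly one camera ray per pixel centre, but an adaptive number of light/reflection samples per pixel.

        Code:
        import random

        def sample_lighting(x, y):
            # Stand-in for one noisy light/GI/reflection sample for the surface
            # seen through pixel (x, y); a real renderer would trace rays here.
            return 0.5 + random.uniform(-0.2, 0.2)

        def shade(x, y, min_samples=8, max_samples=64, noise_threshold=5e-4):
            # Average secondary samples for one pixel, stopping early once the
            # estimate is stable (a crude stand-in for DMC-style adaptivity).
            total = total_sq = 0.0
            n = 0
            while n < max_samples:
                s = sample_lighting(x, y)
                total += s
                total_sq += s * s
                n += 1
                if n >= min_samples:
                    mean = total / n
                    variance_of_mean = (total_sq / n - mean * mean) / n
                    if variance_of_mean < noise_threshold:
                        break
            return total / n

        # One camera ray per pixel centre (no sub-pixel AA), many adaptive secondary
        # samples, so the beauty pass stays aligned with the unfiltered Z-pass.
        image = [[shade(x + 0.5, y + 0.5) for x in range(4)] for y in range(4)]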

        Originally posted by Ihno View Post
        That's very interesting! But they don't have a generation script/plugin for max right? I see only a Maya script.
        Here's a script for Max: https://github.com/superrune/3dsmaxSeuratExport

        Also here's a video showing the workflow of Seurat in Unity, which gives an idea on how the resulting mesh and texture look: https://www.youtube.com/watch?v=FTI_79f02Lg&t=1s
        Last edited by Laserschwert; 10-05-2019, 04:03 AM.


        • #5
          You are still leveraging some adaptivity by using the old approach and setting up each light and shader manually (as you did), rather than using MSR.
          It will still adapt (via the global DMC sampler), but it will patently not give you a good image, regardless: sub-pixel geometric detail will be missed.
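
          A toy illustration of that sub-pixel point (made-up numbers, not V-Ray behaviour): with a single ray through the pixel centre, geometry thinner than a pixel is either fully hit or missed entirely, whereas sub-pixel AA samples approximate its true coverage.

          Code:
          import random

          # A wire covering 30% of the pixel's width (x in [0.40, 0.70]).
          def hits_wire(x):
              return 0.40 <= x <= 0.70

          centre_only = 1.0 if hits_wire(0.5) else 0.0   # 1.0: reported as fully covered
          aa_estimate = sum(hits_wire(random.random()) for _ in range(16)) / 16  # ~0.3
          print(centre_only, aa_estimate)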
          Lele
          Trouble Stirrer in RnD @ Chaos
          ----------------------
          emanuele.lecchi@chaos.com

          Disclaimer:
          The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.


          • #6
            Originally posted by ^Lele^ View Post
            You are still leveraging some adaptivity by using the old approach and setting up each light and shader manually (as you did), rather than using MSR.
            It will still adapt (via the global DMC sampler), but it will patently not give you a good image, regardless: sub-pixel geometric detail will be missed.
            Ah, good to know that this is still adaptive (I forgot about the DMC settings, right). Sure, sub-pixel detail is getting lost, but that's the caveat with this tech. Just like each pixel in a Z-buffer needs to be exclusive (and not a blend of its surroundings), in this case each pixel in the final rendering needs to be as well.

            I think ultimately it might be "easier" (if more work) to bake everything down to a realtime scene (for Unity or Unreal) and create the panoramas for Seurat in there. The advantage of that would be proper texture filtering across surfaces, but no anti-aliasing (if it's turned off) between objects or at different depths.
