Changing 3ds Max's bitmap blurring default... V-Ray's AA

  • #46
    Originally posted by Macker View Post
    Because surely it makes more sense to have the pixel information there, and let the image sampler blur it as necessary rather than simply be presented with a blurry image for the sake of giving the image sampler an easier time?

    What I find absolutely baffling is why people do the opposite. They allow these blurry images to be rendered out, only to run a razor sharp Mitchell Netravali filter over them. Madness I tell you.

    It's the equivalent of smearing vaseline over your camera lens then running a sharpen filter in photoshop after to fix it.
    I wholeheartedly agree.
    On NONE of the shows where I have been a TD is the rendering filtered by default.
    With a noisy image, filtering splatters the noise across pixels, making it a comper's hell to try and remove.
    With a clean enough image, it still creates endless issues with alpha masks, and lowers image detail frequency (I'm talking of blurry, entirely-positive-component filters), detail which required loads of work to put in in the first place.
    There are, however, cases where GEOMETRY is very thin and tends to shimmer no matter the AA sampling, very much as it would in real life (everything has a set resolution: a camera, the eye, you name it). Then, and only then, after talking to the Comp Sup and the relevant comper for the shot, filtering of the blurry/reconstructive type is introduced, and this purely because at render time we can do it sub-pixel, and with greater accuracy than post could.
    Further, giving Post a clean, sharp image to work with (not that they ever use the beauty directly: they rebuild it from REs) means all the blurring they will necessarily add (i.e. film grain) is done on the exact rendered pixels, and not on a mushy mess.
    And this is for when blurring, positive-component filters are used.
    Just a couple of days ago I had a good laugh with someone who was complaining about artists feeding him NEGATIVE RGB beauty passes to fix: in come the negative lobes of Lanczos/Catmull-Rom around bright areas (Canon's DSLRs around 2009 had this by default on their captured raws).

    Of course, this is for a full 3D -> post pipeline, and I have the impression "post" as I intend it here is not done at all by most of the readers of this thread.
    I guess it's always, ultimately, a case of whatever floats one's boat.
    If it's a transatlantic liner one has to steer, however, small mistakes pile up very, very quickly, and in no time at all one will see the name "Titanic" written on each and every wall.

    As for filtering's impact on render times, it depends on the filter kernel type and size, the image resolution and, because of how filtering at render time works, the actual shading complexity (the renderer has to shade all the samples within the kernel before it can do the filter maths on them and finally return the single pixel's actual value).
    And it IS a sizeable impact, as I'm showing in the next few images:

    [Image: nofilter.png, 366.8 KB]
    [Image: areaFilter.png, 355.2 KB]
    [Image: cubicFilter.png, 306.8 KB]
    [Image: catumll-romFilter.png, 393.4 KB]
    [Image: softenFilter.png, 268.8 KB]

    A word of note: broad, blurry filters (Soften at 6 pixels, f.e.) DO lower the noise level, due to the massive kernel size. Much as you'd obtain by Gaussian-blurring your render in post with a size of 6 (which would be a heck of a lot quicker, too).
    MUCH better to sample properly and reduce noise that way, as the render-time headroom is MASSIVE (30 secs versus 79 secs: nearly 2.5x slower).
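    As a rough illustration of why kernel size drives both the look and the cost, here is a minimal 1D sketch in Python/NumPy (purely illustrative, not V-Ray's actual implementation) of reconstructing pixels from shaded subpixel samples with two different kernels:

```python
import numpy as np

def reconstruct_pixel(samples, positions, center, radius, kernel):
    """Weight subpixel samples by a filter kernel centred on one pixel.

    Every sample inside the kernel footprint must have been shaded,
    which is why bigger kernels cost more render time.
    """
    d = np.abs(positions - center)
    inside = d <= radius
    w = kernel(d[inside] / radius)
    return np.sum(w * samples[inside]) / np.sum(w)

# 16 jittered samples per pixel across a 1D row of 8 pixels,
# with a hard geometric edge at x = 4.5
rng = np.random.default_rng(1)
positions = np.sort(rng.uniform(0.0, 8.0, 8 * 16))
samples = (positions > 4.5).astype(float)

box = lambda t: np.ones_like(t)   # "no filter": box over one pixel
tent = lambda t: 1.0 - t          # a soft, blurry tent kernel

# Pixel 3 (span [3, 4)) does not touch the edge: the 1 px box keeps it
# clean black, while a 6 px tent bleeds the edge into it.
crisp = reconstruct_pixel(samples, positions, 3.5, 0.5, box)
soft = reconstruct_pixel(samples, positions, 3.5, 3.0, tent)
```

    The wide tent both softens the edge and touches six pixels' worth of shaded samples per output pixel, which is the render-time cost discussed above.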
    Last edited by ^Lele^; 12-02-2015, 07:03 AM.
    Lele
    Trouble Stirrer in RnD @ Chaos
    ----------------------
    emanuele.lecchi@chaos.com

    Disclaimer:
    The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



    • #47
      In support of the last note in my previous post, here's a cleaned-up, no-filter render.
      [Image: nofilterClean.png, 323.5 KB]



      • #48
        We use Soften at 3.8-4.2 range. Keeping it below 4 is a little faster.

        Note that post blurring is *quite* different from using an image filter. You will never achieve the quality of an image filter with a post blur. To match the image filter you would have to render at many times your normal size and scale down with a smoothing algorithm. This is why the image filter is way better if you can afford the render time.
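        The supersample-and-downscale equivalence above can be sketched in a few lines (a hypothetical 1D scene in NumPy, just to illustrate the idea):

```python
import numpy as np

def scene(x):
    """A 1D 'scene': white right of a hard edge at x = 0.55."""
    return (x > 0.55).astype(float)

width, factor = 8, 4  # final pixels, supersampling factor

# Render at 4x the resolution, then scale down with a smoothing
# (area-average) kernel, as described above.
hi = scene((np.arange(width * factor) + 0.5) / (width * factor))
downscaled = hi.reshape(width, factor).mean(axis=1)

# Versus rendering at final size: one centre sample per pixel, no filter.
lo = scene((np.arange(width) + 0.5) / width)
```

        The edge pixel in `downscaled` holds a fractional coverage value, while `lo` snaps to 0 or 1, giving the stair-step an image filter would have removed.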

        Have a look at these. Once you have a bright highlight you can see why an image filter is critical, and why no filter at all is pretty darned terrible. Note how blurring the no-filter render does nothing to fix the aliasing on the highlight.

        These were rendered at min 2 | max 100 | threshold 0.006. The aliasing is just as hideous without an image filter, even at a 0.001 threshold.

        I get that some compositors like things without image filters. We composite stuff flawlessly with image filters, and it would look far worse without them unless we rendered everything four or eight times normal size. But we also tweak the CG to look damned good to begin with, rather than relying on skewing things a ton in the post. This is the advantage of few-artist workflows vs. the old school departmental approach. The artist knows what is best fixed in rendering and what is best fixed in post.

        [Image: Area1.5.jpg, 116.3 KB]
        Area 1.5 (above)
        [Image: Soften4.2.jpg, 108.3 KB]
        Soften 4.2 (above)
        [Image: NoFilter.png, 447.5 KB]
        No Filter (above) - all aliased to hell - UGLY
        [Image: NoFilterBlurred.png, 193.0 KB]
        No Filter but post-blurred (above). No amount of blurring will fix the lack of an image filter; the source image is trash. This has a fair amount of blur, but it still shows aliasing, because a post blur cannot fix high-contrast stair-step aliasing. Sure, you could run an edge detect and blur those areas more, but it would still not look right (I have done that too many times with poor-quality source material, back when people first started shooting digital on cameras like the F900).
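        A small numerical sketch of why a post blur cannot recover what a one-sample-per-pixel render never captured (hypothetical diagonal-edge scene and box-blur kernel, in NumPy):

```python
import numpy as np

def coverage(y, x, n):
    """Fraction of an n x n subsample grid in pixel (y, x) that falls
    inside the half-plane 2*sx > sy (a diagonal white wedge)."""
    sy = (y + (np.arange(n) + 0.5) / n)[:, None]
    sx = (x + (np.arange(n) + 0.5) / n)[None, :]
    return float((2 * sx > sy).mean())

size = 32
grid = [(y, x) for y in range(size) for x in range(size)]
# One centre sample per pixel = the unfiltered, aliased render
aliased = np.array([coverage(y, x, 1) for y, x in grid]).reshape(size, size)
# 16x16 subsamples per pixel = ground-truth pixel coverage
# (what a good render-time filter approximates)
truth = np.array([coverage(y, x, 16) for y, x in grid]).reshape(size, size)

# Post blur: a simple 3x3 box blur on the aliased render
blurred = sum(np.roll(np.roll(aliased, dy, 0), dx, 1)
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0

# Near the edge the blurred render still misses the true coverage badly:
# the sub-pixel information was lost before the blur could use it.
err = np.abs(blurred - truth)[2:-2, 2:-2].max()
```

        The blurred stair-step stays visibly wrong next to the true coverage image, which matches what the NoFilterBlurred example shows.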

        The render time listed on the no-filter versions is not right; it was actually 12.9 sec. This no-filter version took longer because it was rendered at a threshold of 0.001. Even at that crazy setting, no filter can fix the aliased highlights.
        Last edited by Joelaff; 12-02-2015, 12:13 PM.



        • #49
          Originally posted by Joelaff View Post
          Note that post blurring is *quite* different from using an image filter. You will never achieve the quality of an image filter with a post blur. To match the image filter you would have to render at many times your normal size and scale down with a smoothing algorithm. This is why the image filter is way better if you can afford the render time.
          Yes, as I said:
          There are, however, cases where GEOMETRY is very thin, and tends to shimmer no matter the AA sampling, very much like it would in real life (everything has a set resolution. a camera, the eye, you name it.), and then, and only then [...] filtering is introduced, of the blurry/reconstructive type, and this purely because at rendertime we can do so sub-pixel, and with greater accuracy than post could.
          The sharp edge on your highlight is correct, as is the fact that no amount of AA should clean it up: that is not AA's task (nor could it be, if you think about what AA does).
          That's why compers get render elements, with separate reflection and specular, which they can treat with any filter they please (Glare, or Bloom, comes to mind, not Gaussian blur. Try it, it's in the V-Ray FB.), in the selected areas of super-bright pixels (which, again, is correct, and happens naturally with vision, or film).
          If I gave compers overall-blurry sequences, with blurry alphas/MultiMattes to boot, and told them I did so to fix the highlights, they would not be amused one bit.

          But as I said, if it floats your boat, feel free to filter your renders (well, unless you ever happen to be one of the artists I TD for, that is. Which is something I wouldn't wish on my worst enemy...).

          P.S.:http://forums.chaosgroup.com/showthr...757#post631757
          Last edited by ^Lele^; 12-02-2015, 01:00 PM.



          • #50
            Originally posted by ^Lele^ View Post
            Yes, as i said:

            The sharp edge to your highlight is correct, as it is correct that no AA amount should clean it: it is not the AA's task (nor it could be, if you think at what AA does.).
            Correct, yes, but you would never see that in an image photographed (motion or still) with a lens. So it is not realistic for our VFX work.

            Originally posted by ^Lele^ View Post
            That's why compers get render elements, with separate reflection and specular, which they can treat with any filter they please (Glare, or Bloom, comes to mind. not gaussian blur. Try it, it's in the VRay FB.), in the selected areas of super-bright pixels (which, again, is correct, and happens naturally with vision, or film.).
            We do our own advanced glare effects with custom tools in Fusion. I am familiar with the idea, and I do think most renders need some. It still becomes difficult and extra work to fix the highlights, which often tend to crawl without applying what I would consider too much glare.

            Originally posted by ^Lele^ View Post
            If I gave compers overall-blurry sequences, with blurry alphas/MultiMattes to boot, and told them I did so to fix the highlights, they would not be amused one bit.
            As a comper myself I would say the opposite. Don't give me aliased trash I have to wrestle with to make look halfway decent. Again, this is the advantage of a comper who knows rendering and knows where the best place to tweak something is, and a CGI team that knows comping. Not every problem is best solved in post. We often find ourselves re-rendering things with softer filters to get them closer to the final sharpness level. Everything is comped at at least 4K or 5K and then scaled down to whatever the final delivery is.

            Originally posted by ^Lele^ View Post
            But as i said, if it floats your boat, feel free to filter your renders (well, unless you'll ever happen to be one of the artists for which i TD, that is. Which is something i wouldn't want to happen to my worst enemy... ).
            Yeah, different houses approach things in different ways. We get the best results by getting the renders as close as possible to begin with, except where something is easier/faster in post. For us, sticking to arbitrary rules, making things absurdly sharp, and rendering a thousand passes typically leads to more work and a lower-quality end result. I know lots of places still do that, though. Subtleties of lighting seem better captured by making the surfaces correctly, rather than by trying to clean them up in post.

            My whole point with this little example was to show that unless you want to mess with things a lot in the post, and know a lot of tricks for doing so, you might be better off "just" using an image filter!



            • #51
              BTW, I hope I am not coming across as argumentative. Not my goal. I enjoy your insight, and that of Neilg and others on this board. I just like to share info and improve my knowledge while helping others based on my experiences. There is no RIGHT way to do anything.



              • #52
                Originally posted by Joelaff View Post
                Correct, yes, but you would never see that in an image photographed (motion or still) with a lens. So it is not realistic for our VFX work.
                We do our own advanced glare effects with custom tools in Fusion. I am familiar with the idea, and I do think most renders need some. It still becomes difficult and extra work to fix the highlights, which often tend to crawl without applying what I would consider too much glare.
                The two go hand in hand in real life: bright highlights bloom when seen through a lens.
                A lot, or a little, depends entirely on the intensity and the lens, of course.


                As a comper myself I would say the opposite. Don't give me aliased trash I have to wrestle with to make look halfway decent Again, this is the advantage of a comper that knows rendering and knows where the best place to tweak something is, and a CGI team that knows comping. Not every problem is best solved in post. We often find ourselves re-rendering things with softer filters to get them closer to the end sharpness level. Everything is comped at at least 4k or 5k and then scaled down to whatever the final delivery is.
                Yeah, different houses approach things in different ways. We get the best results getting the renders as close as possible to begin with, but not if something is easier/faster in the post. For us, we found sticking with arbitrary rules to make things absurdly sharp and render one thousand passes typically leads to more work and a lower quality end result. I know lots of places still do that, though. Subtleties of lighting seem better captured making the surfaces correctly, rather than trying to clean them up in post.
                I am a comper too, or rather have been (with some high-level, high-stress work done), before I chose LnR as a specialisation around 12 years ago (of course I still "comp" these days; I'm just not hired as one).
                The thing is that I do not see LnR as anything other than a SERVICE to comp, you see.
                This tends to piss my fellow lighters off a bit, but tends to make the compers quite happy.
                Indeed, there is a fine line between doing it right in 3D and doing it right in post (or "fixing" it, as you put it), and that's the line I'm treading here.
                I abhor rendering beauty passes and elements at higher resolution in a time and age where we have, per bucket, resolutions 64 times that of the pixel (say, adaptive subdivision with a max of 3). Rendering at double resolution IS necessary for some DATA passes, though (i.e. anything which must end up non-anti-aliased): it would be a mistake, were it even doable, to anti-alias a Z, velocity, or PPos pass, while we all still want nice, soft masks to work with.
                In that type of scenario, the only way out is to render at least double res (but hey, lights and shaders off, who cares), so that exact values can be selected before the down-scaling provides the gross AA.
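                The select-don't-average rule for data passes can be illustrated with a hypothetical 1D Z pass (NumPy; the depth values are made up):

```python
import numpy as np

# A 1D mock-up: foreground surface at depth 10, background at depth 100,
# with a geometric edge at x = 0.45 (all values hypothetical).
def z_at(x):
    return np.where(x < 0.45, 10.0, 100.0)

n = 8
sub = z_at((np.arange(n * 2) + 0.5) / (n * 2))  # double-res Z pass

# Anti-aliased Z (averaged across the edge) invents depths that exist
# nowhere in the scene -- poison for any depth-based post effect.
z_filtered = sub.reshape(n, 2).mean(axis=1)

# Downscale by *selecting* one exact sample per pixel instead:
# every output value is a real scene depth.
z_selected = sub.reshape(n, 2)[:, 0]
```

                The averaged pass produces a depth of 55 at the edge, a surface that does not exist; the selected pass only ever contains 10 or 100.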
                LnR has one role, in my personal opinion, and that is to get shaders and lights as close to their real counterparts as the technology allows.
                Rendering double to correct for wrong lighting and wrong shading (say, a directly visible light glinting off a too-reflective surface) is, to my eyes, as close to madness as going around one's own house with a baseball bat trying to hit a fly.
                Nor ought the fix for such mistakes to be a burden on Post.
                It will get better in time; already, good advancements are being made in the shaders' and lights' departments, both within software and in the broader physical sciences, so that it's becoming easy to recover the correct data for both materials and lights, which alone cuts down a lot on the old-school type of issues.
                But hey, this is how I tackle my role, not how it ought to be done.
                My own opinions change as I learn new approaches, but so far, so good: with V-Ray, or rMan, or what have you, alone, or with ten or 50 lighters to TD for (in which case, good luck trying to get good stuff out of every one of them...).

                My whole point with this little example was to show that unless you want to mess with things a lot in the post, and know a lot of tricks for doing so, you might be better off "just" using an image filter!
                And my whole point was very close to yours as far as the argument's premises go; it's just that the conclusion is the opposite, because of the host of other factors I have outlined.
                BTW, I hope I am not coming across as argumentative. Not my goal. I enjoy your insight, and that of Neilg and others on this board. I just like to share info and improve my knowledge while helping others based on my experiences. There is no RIGHT way to do anything.
                You ARE coming across as argumentative, and in the best possible sense one could give the word.
                We're not trying to rationalise convictions; we're debating best practices with a critical approach, and a slew of supporting evidence, from both sides.
                "Right", just as "wrong", is a circumstantial term, and quite flimsy when taken into specific contexts different from the original.
                Bring it on: if it's synthesis we are looking for, we're getting there.



                • #53
                  I do find it odd how some compers still think they need razor-sharp renders. I would say their comping skills are weak if they can't learn to spread and choke, or to add, subtract, and min/max masks and such, to handle soft edges. The edges from CG are rarely anywhere near as soft as something out of a keyer. We are forced to work with soft edges all the time, from keying, roto of out-of-focus elements, etc.

                  In the end I think it comes down to most compers being control freaks (I know, I am one). I think most good artists are control freaks, though. I hate other people coloring my work, for instance. They never do it as well as I could.



                  • #54
                    I do not care so much for objective opinions, at the risk of sounding a proto-fascist, as I do for clarity of intent, and a clear, scientifically proven way forward.
                    Compers learn to deal with what I give them, and appreciate the good sides of the approach as much as they criticise the weak sides.
                    The discussion we've just had, I've had to have *countless* times: at least a few times per hire at a new company.
                    Sometimes, mediated approaches had to be taken, for whatever POLITICAL reason.
                    At other times, I was given carte blanche, along with full responsibility, and the unrelenting stress which comes with it.
                    My wish is for many more people to do as much reading and data mining as I tend to do on a daily basis (I LOVE to take even years of time out, at my own expense, to study further. I'd be a lesser human without it, not to mention a lesser professional.), along with as much hard-nosed testing, so we would all be on the same page.
                    That's an IDEAL, however.
                    What I appreciate, nevertheless, is the passion, and the hard-headed, opinionated convictions we all, in this industry, paid so dearly to obtain.
                    That, i can deal with.
                    If with a poke, here and there. ^^



                    • #55
                      R&D is good. Knowing your tools well is good. But at the end of the job, all that matters is whether or not the job got done and looks great. I have some guys who work for me who just turn out great work without being able to tell me exactly how or why. I was hesitant to hire them in the beginning, but the work speaks for itself.



                      • #56
                        Yes indeed.
                        Theory is all well and good, but the execution is what matters most.
                        Pushing for changes towards what I feel is the right direction (with full knowledge of my own absurd limitations) is what keeps me in the game.
                        Or i'd be brewing my own beer for a living. :P



                        • #57
                          Double post deleted
                          Last edited by DPS; 13-02-2015, 12:51 AM.
                          Win10 x64, 3DS Max 2017 19.0, Vray 3.60.03
                          Threadripper 1950x, 64GB RAM, Aurous Gaming 7 x399,



                          • #58
                            Does anyone know exactly what the blur value does, or what assumptions the default value is based on? Here is an awesome explanation from a mental ray book (Maya and XSI):

                            Boaz Livny writes:
                            The filter size value is meant to provide mental ray with additional insight into the relationship between texels and render pixels, where the filter size ideally represents the relationship between one render pixel and the number of texels contained within it (in the render pixel). Essentially, you may use a texture image that has been tiled several times using the remapping shader. So, as the ratio between the texture and render pixels increases, so should the filter size. In such cases you would want to specify a filter size that represents the new resized texture. For example, if the checker texture has a one-to-one mapping with render pixels (not tiled), then after tiling it 50 times, effectively scaling it down to 1/50 of its size, you should specify a filter radius of 50, correlating to the new proportions. By default mental ray assumes that one texel corresponds to one render pixel; it has no knowledge of tiling. As a surface gets farther away, a larger filter size is used. By manually specifying larger filter sizes, you provide information about the relationship between texels and render pixels, such as after tiling an image several times.
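                            In code, the book's example boils down to this (illustrative arithmetic only, following the quoted description):

```python
# Per the quoted description: with a 1:1 texel-to-pixel mapping the
# default filter size of 1 is correct; tiling the texture T times packs
# T texels into each pixel span, so the filter size should scale by T.
def filter_size_for_tiling(base_size, tiles):
    return base_size * tiles

# The book's example: a 1:1-mapped checker tiled 50 times -> radius 50
fifty_tiles = filter_size_for_tiling(1.0, 50)
```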



                            • #59
                              That explanation is valid only for very specific use cases for mental ray in XSI. In 3ds Max, you can pretty much ignore that description altogether.

                              V-Ray normally does a pretty good job of figuring out how large a part of the texture is covered by a projected pixel. The blur value is an additional multiplier for "artistic" purposes; in an ideal world, with perfect texture filtering, it wouldn't be needed at all and would always be fixed at 1.0. However, because the widely used mip-map/pyramidal filtering is far from ideal, people use the value to make their textures appear sharper. Elliptical filtering solves this issue to a large extent.
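                              A toy sketch of the idea (a simplification, not V-Ray's actual code): with mip-map filtering, the blur value effectively scales the pixel's texel footprint before a level is chosen.

```python
import math

def mip_level(texels_per_pixel, blur=1.0):
    """Sketch of mip-map level selection with an artistic blur multiplier.

    The footprint (texels covered by one projected pixel) picks the
    level; `blur` scales the footprint, so values below 1.0 bias toward
    sharper (higher-resolution) levels and values above 1.0 toward
    blurrier ones.
    """
    return max(0.0, math.log2(max(texels_per_pixel * blur, 1.0)))
```

                              For example, a footprint of 8 texels per pixel lands on level 3 at the default blur of 1.0, while a blur of 0.5 drops one level sharper.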

                              Best regards,
                              Vlado
                              I only act like I know everything, Rogers.



                              • #60
                                Then why not rename VrayHDRI to VrayBitmap to avoid confusion? (though for VrayHDRI map users that could be confusing).
