
Lens effects not rendering the same in 3.5??

  • #31
    Access to nightly builds (except the ones for V-Ray for Blender) is given per request.
    Please contact us at support@chaosgroup.com and let us know why you need access to nightly builds.
    Svetlozar Draganov | Senior Manager 3D Support | contact us
    Chaos & Enscape & Cylindo are now one!



    • #32
      Ok, thank you, I shall do that now.



      • #33
        lens effects issues as well

        We are having the same problem, and even installed the nightly build from March 2nd. We are looking at rolling back to 3.4003. I have attached an image showing the issue and our glare settings. Same settings, same scene, unaltered, and the effect is non-existent. How can we achieve the same result using 3.5?


        Kind Regards

        Ryan
        [Attachment: test.jpg]



        • #34
          Hmmm...you have From Image on in Glare, yet I do not see a bitmap loaded...

          -Alan



          • #35
            Originally posted by prey1979 View Post
            How can we achieve the same result using 3.5?
            I can't really give you a meaningful answer without the scene. You can either post it here or send it to vlado@chaosgroup.com

            Best regards,
            Vlado
            I only act like I know everything, Rogers.



            • #36
              Hi Ryan,

              The problem I had when switching from 3.4 to 3.5 was that in 3.5 V-Ray lens effects are computed during render time, whereas in 3.4 lens effects were applied almost like a post effect, once the whole frame finished rendering.

              I had Reinhard set to 0.1, so in V-Ray 3.5 the 0.1 colour mapping is being applied to the lens effects as well, taking away their intensity... therefore they don't show at all.

              Vlado informed me that the way to get around this is to leave Reinhard at 1.0 and change the Burn value in the V-Ray frame buffer to 0.1; that way I was able to get the same results as in 3.4.

              As mentioned above, you are using 'from image'. I like to use 'glare type from camera parameters', and you can play around with the 'f-number' in the frame buffer to have better control over the intensity and size of the lens effects.

              Also make sure you look at the new 'effects channel' in the frame buffer, as all effects appear on that channel.

              Hope that helps!
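To see why a low Burn value wipes out the glare, here is a rough numerical sketch. The exact formula V-Ray uses internally isn't stated in this thread; describing Reinhard mapping as a blend between linear response (burn = 1.0) and x/(1+x) compression (burn = 0.0) is an assumption made purely for illustration:

```python
# Assumed Reinhard-style blend: burn = 1.0 is linear, burn = 0.0 is full
# x/(1+x) compression. Not V-Ray's exact internal formula, just the common
# textbook blend used to illustrate the behaviour described above.
def reinhard(x, burn):
    return burn * x + (1.0 - burn) * x / (1.0 + x)

glare = 8.0  # a bright lens-effect pixel in linear radiance

print(reinhard(glare, 1.0))  # 8.0 -> the glare keeps its full intensity
print(reinhard(glare, 0.1))  # 1.6 -> most of the glare intensity is gone
```

Because 3.5 applies colour mapping after the lens effects are composited, the 0.1 burn crushes the glare's bright values the same way it crushes any other highlight.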



              • #37
                Same here...
                Attached Files
                www.vis-art.de
                www.facebook.com/visart3d



                • #38
                  Originally posted by william.morris View Post
                  I am like you and many others: I will reduce the burn value to reduce burnt-out areas and get a result that looks good, not too worried if it's not 100% physically correct. I started using lens effects after reading a tutorial The Boundary had produced, and their burn value was 0.05 when using lens effects with V-Ray 3.4. Now, if they were to render the same scene in V-Ray 3.5 with a 0.05 burn value, there would be no intensity in the lens effects at all and they would not be visible.
                  I gotta say, leaving Reinhard at 1.0, saving renders out as full-float 32-bit OpenEXRs, and then adjusting them with Photoshop's Camera Raw filter (or any other 32-bit editing technique) gives you so much easy control over the look of your final images. All you have to do is set your lights to realistic sizes and intensities and you never need to worry about "blown out" areas. They're not really blown out when you edit in 32-bit!
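The point about "blown out" areas surviving in float can be shown with plain numbers. A sketch in plain Python standing in for the EXR pipeline (the actual file I/O is omitted; the values are made up for illustration):

```python
# Linear render values; 3.5 represents a highlight well above display white.
pixels = [0.2, 0.9, 3.5]

# An 8-bit (LDR) save clips everything above 1.0 -- the highlight is lost.
ldr = [min(max(v, 0.0), 1.0) for v in pixels]

# A 2-stop exposure pull in post is a multiply by 0.25. In float the
# highlight comes back into range; in the clipped copy it is gone for good.
pulled_float = [v * 0.25 for v in pixels]  # highlight -> 0.875
pulled_ldr = [v * 0.25 for v in ldr]       # highlight stuck at 0.25

print(pulled_float[2], pulled_ldr[2])  # 0.875 0.25
```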



                  • #39
                    I'm not quite sure why you guys hate burnouts so much. They are a natural part of any photograph. Attached is a photo of me with a light fixture in the background. It's a real photograph, so you can't get any more "filmic" than that. Two things can be noticed: the area around the light is burnt out, while my face is too dark. No real-world camera can equalize those two areas in one picture, similar to how no real-world camera can capture both an exterior and an interior in one shot with both appearing at normal exposure. Photographers deal with this issue by introducing bounce cards or extra lights into the scene to bring up the general ambient light level so that the burnouts are not as apparent. No amount of LUTs or camera response curves or "filmic" mapping can fix this for you.

                    For sure, it is possible to apply dynamic range compression to the HDR image to equalize the colors (for examples, see many of the night-time or interior panoramas here: http://panomagic.eu/); however, this requires post-processing of the image to equalize the light levels. In my opinion the results look rather flat and unrealistic, but they do show details better. This is not a result that you get directly out of the camera, though.

                    [EDIT] From what I could find out, George Palov is using HDR Expose for this (http://pinnacleimagingsystems.com/products/hdr-expose-3), and here are some sample results: http://www.geopalstudio.com/work/kamenitza/
                    http://www.geopalstudio.com/work/officef/

                    These are based on real photos, of course, but you can apply the same approach to renderings. Again, this is not what came out of the camera directly!

                    Here is the same approach with HDR Expose applied to V-Ray renders (although I wish they hadn't used the irradiance map for it):
                    http://www.studiocreative.bg/work/3g/

                    It should be noted that this kind of global dynamic range compression cannot be achieved with simple curves, it needs some more global analysis of the whole image. In addition to HDR Expose, some freeware software that can do similar things is Picturenaut http://www.hdrlabs.com/picturenaut/
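To make "global analysis of the whole image" concrete, here is a minimal sketch of the classic global tone-mapping operator (log-average luminance plus compression). The `key` value of 0.18 is the conventional middle-grey assumption; none of this is specific to HDR Expose or Picturenaut, it only illustrates why a per-pixel curve can't do the same job:

```python
import math

def tonemap_global(lum, key=0.18):
    # Global analysis step: log-average luminance of the *whole* image,
    # something no per-pixel curve or simple LUT can know about.
    eps = 1e-6
    log_avg = math.exp(sum(math.log(eps + v) for v in lum) / len(lum))
    # Scale each pixel relative to that key, then compress into [0, 1).
    return [s / (1.0 + s) for s in (key * v / log_avg for v in lum)]

hdr = [0.05, 0.2, 1.0, 50.0]  # deep interior shadow up to a bright window
ldr = tonemap_global(hdr)     # every value lands in [0, 1), order preserved
```

A plain curve applied per pixel would map the 50.0 window the same way regardless of how dark the rest of the image is; the log-average step is what adapts the mapping to the scene as a whole.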

                    If we are going to put something in the VFB, it will be this, and not some "filmic" crap...

                    Best regards,
                    Vlado
                    Attached Files
                    Last edited by vlado; 04-03-2017, 05:41 AM.
                    I only act like I know everything, Rogers.



                    • #40
                      My Dad's a photographer, and growing up I would always flip through one of the yearly journals he would get: the Communication Arts photography competition. It's kind of interesting looking back on it. There was a really clear trend that happened and lasted for a few years, right around the time digital cameras were becoming really impressive and commercially available and Photoshop was beginning to get used heavily. All the images that made it into the journal were versions of the classic "high dynamic range image" (all lighting levels perfectly compressed, way too much detail, overly sharp). It seemed like all of a sudden photographers realized they could make images like this and couldn't get enough of it. But relatively quickly that trend ended, and I don't think we will come across an image like that in Communication Arts ever again. It seemed like photographers had figured out how to compress blown-out or black areas and thought that because they could do it, it should be done. Sometimes I feel like the attitude in archviz is similar. Just because we can control a rendering to show any detail we want in an image doesn't mean we should... but that's just my opinion; people can make whatever kind of images they want.



                      • #41
                        Originally posted by vlado View Post
                        No real-world camera can equalize those two areas in one picture, similar to how no real-world camera can capture both an exterior and interior in one shot with both appearing at normal exposure. Photographers deal with this issue by introducing bounce cards or extra lights in the scene to bring up the general ambient light level so that the burnouts are not as apparent. No amount of LUTs or camera response curves or "filmic" mapping can fix this for you.
                        To expand on this, it comes down to the CCD's limitations (in analogue terms, the number of stops a film could take in one exposure, or the "exposure range").
                        Terrible DSLR tech, eh? Not quite: it's no coincidence that the average analogue film could take 14 stops of exposure, and that a full-frame CCD captures 14 bits of RAW data. DSLRs needed to match the analogue qualities of film, and that's pretty much where they have stopped since (each bit of depth being an analogue stop).
                        Now, you are all rendering in a 32-bit (32 stops! It's an *immense* range. Just huge. Everyone loved it.) deep framebuffer, and likely viewing its contents on an 8-bit monitor. I use 10-bit, and it isn't cheap yet, but the 2 stops of difference, for content which has the data to display, is staggering: watching the same movie in 8 and then in 10 bit makes it feel very different, and the 10-bit version will initially be difficult to follow, as the eyes latch on to the superwhites in awe.
                        Imagine going from 8 to 32...

                        So, just like in the analogue world, where a photographer who knew their trade would ensure the right tonal compression to fit the negative film's 14 stops of range into the positive print, bound by the paper's exposure range, in the digital world we should all know that we are seeing but a *tiny* slice of the potential dynamic range on our monitors.
                        Analogue film did burn out if one went past 14 stops (or whatever the range of the specific film type), while clipping a 32-bit value is a hard task indeed (16-bit halves, perhaps; 32 bits? Put a brick on that "0" key).
                        Saving an EXR (half or full float matters not, really, to the concept) is equivalent to saving a 32bpc RAW file from a CCD.
                        Opening that RAW file allows one to take those immensely big, or immensely small, pixel values and play with them to one's heart's content, while fitting the image's dynamic range (or exposure range) to the display's as best one chooses (global, local or mixed contrast analysis, color space conversions and off-space magic; literally, anything).
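The bits-to-stops arithmetic above can be checked directly. One caveat worth noting: the "one bit per stop" rule holds for integer formats like 14-bit RAW; float formats store an exponent, so half and full float actually span far more stops than their bit count alone suggests (which only strengthens the point about the range sitting in the framebuffer):

```python
import math

def stops(max_val, min_val):
    # Dynamic range in photographic stops: log2 of the max/min ratio.
    return math.log2(max_val / min_val)

print(round(stops(2**14, 1)))         # 14  -> 14-bit RAW: ~14 stops
print(round(stops(65504.0, 2**-14)))  # 30  -> 16-bit half float (normal range)
print(round(stops(3.4028235e38, 1.1754944e-38)))  # ~254 -> 32-bit float
```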

                        *Any* operation which saves an LDR image, with or without CCs applied, from the VFB has wasted the RAW data (if the original VFB image is gone, of course), equivalent to saving JPEGs with a full-frame, full-dynamic-range camera.
                        Why anyone would want to do that, so as to be able to color correct an image in the camera viewfinder instead of at the proper grading stage (enlargers, anyone? Filters on the lens, masking, multiple exposure layers?), is something I will never understand.

                        Only one thing is worse than saving an LDR image off the VFB (which is even right, conceptually, if difficult to understand how crippling it is), and that is messing with the color mapping settings.
                        It's something that has *no* equivalent in the real world, as there is no known way to globally skew the decay of the energy transported by photons.
                        We can add more photons (said bounce cards and added fixtures), we can take some of them away (shades, shadows, filters), but the *one* thing we can't do is decide how they decay.
                        So one has to wonder why people would enjoy that, besides some wicked testing of an engine's limits (oh, if Vlado knew the things I did to V-Ray. Oh wait, he does... ^^), instead of taking hints from established photographers all over the world.

                        V-Ray, in other words, is a camera and a positive (as opposed to the traditional negative analogue) HDR film.
                        What the VFB shows is the equivalent of an in-camera viewfinder (I wish; on steroids), and like those it offers a modicum of controls meant to be used to explore image variations and qualities, *not* meant to be used like full-blown color grading tools.
                        From the LWF VFB, the proper method is to save the RAW file with all the dynamic range intact (intact! The EXR *should* discard any VFB-bound CCs, just like a DSLR will save a JPEG with any CC applied and the RAW file rigorously untouched), and move to a post application to complete the transposition of the image's dynamic range into the monitor's (negative film to printing paper), plus any subsequent color corrections.
                        The role of the VFB I personally feel is entirely fulfilled when it's able to show me the raw data and save it out for later processing without any change whatsoever to its contents (e.g. around 2008, Canon 3/4rs used to sharpen even the raw files; cue fringing on highlights at dynamic-range edges. It's not a given that the data stays unprocessed.).
                        The more it attempted to veer into the post-production world, beyond the flexibility to talk and connect to the most valid of such tools (through OCIO, ICC, file formats...), the more it would have to take on bigger and bigger chunks of those tools' ecosystems of ancillary tech, and ultimately it would become very, very complex even to set up right for basic work (anyone with any experience building a color pipeline will testify to the fact that it's really not an intuitive field).
                        Last edited by ^Lele^; 04-03-2017, 04:08 PM.
                        Lele
                        Trouble Stirrer in RnD @ Chaos
                        ----------------------
                        emanuele.lecchi@chaos.com

                        Disclaimer:
                        The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



                        • #42
                          Simple explanation:
                          I have projects where I need to render thousands of images of a single scene with small variations. Every additional step added to the process wastes time and money. If I can get a good-enough result out of the VFB with its limited tools, that's money saved.
                          Please understand that not everyone needs to squeeze out every last bit of information and control every aspect of the image.
                          So a quick and simple way to compress that 32-bit dynamic range into the 8 bits shown on every monitor/laptop/smartphone is important, and color mapping does just that. Instead of Reinhard's single burn value, something like Lightroom's shadows/highlights/blacks/whites would be a lot more flexible.

                          Do you think the majority of photographers are RAW-only? No, the overwhelming majority want good JPEGs out of the camera (with good in-camera tools to set up exactly how those JPEGs look). Not everyone has the time and enthusiasm to process each and every photo they take; the same goes for 3D renders.
                          The elitists of course regard them as noobs and idiots, but it doesn't change the fact that for many people simplicity and speed are important.
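The shadows/highlights/blacks/whites idea asked for above can be sketched as a toy mapper. This is purely hypothetical (not a V-Ray or Lightroom formula); it only illustrates what four such sliders would do to a linear value before the 8-bit save:

```python
# Hypothetical four-slider tone control, for illustration only.
def four_slider(x, blacks=0.0, shadows=0.0, highlights=0.0, whites=0.0):
    x = max(0.0, x + blacks)             # raise or crush the black point
    if x < 0.5:
        x += shadows * (0.5 - x)         # open up the shadow range
    else:
        x -= highlights * (x - 0.5)      # pull highlights back toward mid
    return min(1.0, x * (1.0 + whites))  # push toward white, clipped at 1.0

# A 0.9 highlight pulled halfway back toward middle grey:
print(four_slider(0.9, highlights=0.5))  # ~0.7
```

Compared with a single burn value, the appeal is that shadows and highlights can be adjusted independently instead of compressing the whole range with one knob.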

                          Originally posted by ^Lele^ View Post
                          *Any* operation which saves an LDR image, with or without CCs applied, from the VFB has wasted the RAW data (if the original VFB image is gone, of course), equivalent to saving JPEGs with a full-frame, full-dynamic-range camera.
                          Why anyone would want to do that, so as to be able to color correct an image in the camera viewfinder instead of at the proper grading stage (enlargers, anyone? Filters on the lens, masking, multiple exposure layers?), is something I will never understand.
                          ...
                          From the LWF VFB, the proper method is to save the RAW file with all the dynamic range intact (intact! The EXR *should* discard any VFB-bound CCs, just like a DSLR will save a JPEG with any CC applied and the RAW file rigorously untouched), and move to a post application to complete the transposition of the image's dynamic range into the monitor's (negative film to printing paper), plus any subsequent color corrections.
                          The role of the VFB I personally feel is entirely fulfilled when it's able to show me the raw data and save it out for later processing without any change whatsoever to its contents (e.g. around 2008, Canon 3/4rs used to sharpen even the raw files; cue fringing on highlights at dynamic-range edges. It's not a given that the data stays unprocessed.).



                          • #43
                            Originally posted by viscorbel View Post
                             Simple explanation:
                             I have projects where I need to render thousands of images of a single scene with small variations. Every additional step added to the process wastes time and money. If I can get a good-enough result out of the VFB with its limited tools, that's money saved.
                             Please understand that not everyone needs to squeeze out every last bit of information and control every aspect of the image.
                             So a quick and simple way to compress that 32-bit dynamic range into the 8 bits shown on every monitor/laptop/smartphone is important, and color mapping does just that. Instead of Reinhard's single burn value, something like Lightroom's shadows/highlights/blacks/whites would be a lot more flexible.
                             You misunderstand me: the data is there whether you realise it, and use it, or not. Defaulting to discarding it is at best misguided, given there is an infinite number of ways to automate just what you want from the VFB onwards, while the VFB itself lacks any easy way to do so (albeit it's very doable with scripting; is that what you refer to? If not, how do you treat the thousands of variations from manual VFB controls, pray tell?).

                             Do you think the majority of photographers are RAW-only? No, the overwhelming majority want good JPEGs out of the camera (with good in-camera tools to set up exactly how those JPEGs look). Not everyone has the time and enthusiasm to process each and every photo they take; the same goes for 3D renders.
                             The elitists of course regard them as noobs and idiots, but it doesn't change the fact that for many people simplicity and speed are important.
                             I strongly disagree with your statement there.
                             Yes, professional photographers ALL save RAW and post-process their results.
                             Software exists to batch-process thousands of images in one click, exactly, and with superior results to anything the camera software could hope for.
                             If you're trying to say a 4:2:2 JPEG at 90% is good enough, I think it's not an argument which will ever find legs in here.
                             That you should call elitist those who know what they are doing because they spent time researching the subject is a bit offensive to my ears, as there is absolutely no argument put forward to justify such a statement.
                             If we're trying to build a shared body of knowledge out of scattered facts, the least we should all do when contributing is try to stay factual and actionable.
                            Lele
                            Trouble Stirrer in RnD @ Chaos
                            ----------------------
                            emanuele.lecchi@chaos.com

                            Disclaimer:
                            The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



                            • #44
                               For my example: I have a scene where a single object changes its shaders hundreds of times (set up by a script).
                               What is so wrong with doing a single set of adjustments directly in the VFB for the first render and then just running a batch render for the rest? When they are done, send the whole bunch to the client. Done, end of story.
                               Why would I WANT to add another step of complexity to the process?

                               Of course not every job is like this; there are jobs where I have to spend hours and hours in post, polishing the image to perfection, and I'm happy to have all 32 bits to work with, but that doesn't mean those are the only type of projects that have a right to exist. It seems you live in another world if you think that is so.

                               I won't even try to dispute your blanket statement that "Yes, professional photographers ALL save RAW and post-process their results." It is arrogant to claim knowledge about all of anything. Or did you mean something along the lines of "ah, but no true Scotsman"?

                               Lele, to be honest, you are the one who is coming off a bit offensive to me. It seems that you see the world in black and white: your posts most of the time read as if it's either you do it the correct way (your way) or the stupid way.



                              • #45
                                Originally posted by viscorbel View Post
                                 For my example: I have a scene where a single object changes its shaders hundreds of times (set up by a script).
                                 What is so wrong with doing a single set of adjustments directly in the VFB for the first render and then just running a batch render for the rest? When they are done, send the whole bunch to the client. Done, end of story.
                                 Why would I WANT to add another step of complexity to the process?
                                 The client asks for one change in color tone across the whole set. Do you re-render them all, happy you spent that money?
                                 Flexibility, that's why.

                                 Of course not every job is like this; there are jobs where I have to spend hours and hours in post, polishing the image to perfection, and I'm happy to have all 32 bits to work with, but that doesn't mean those are the only type of projects that have a right to exist. It seems you live in another world if you think that is so.
                                 You keep misunderstanding me. I merely say: waste not, want not.
                                 As such, it shouldn't be on V-Ray to provide by default for the most limiting of workflows, but rather the contrary.
                                 There's always time to save your LDR image off the HDR one in the VFB, with all the options you want.
                                 If you want to do all your shoots as JPEGs off a camera capable of RAW, go right ahead.

                                 That V-Ray should artificially skew renders by default because of the belief that burnouts aren't physically correct (a belief for which no proof has been given; rather, there's plenty to the contrary) seems to me a non-starter.
                                 I won't even try to dispute your blanket statement that "Yes, professional photographers ALL save RAW and post-process their results." It is arrogant to claim knowledge about all of anything. Or did you mean something along the lines of "ah, but no true Scotsman"?
                                 You should. Find me a story of a photographer of renown which outlines their mad shooting skills straight into a non-post-processed JPEG, and I'll find you a hundred stories of such renowned ones doing it the way technology worked so hard to enable them.
                                 Your statement was as generic and blanket as the one you got in response, you see, and because I am quite convinced of my points through factual data, I'd rather you made the next move first.
                                 Lele, to be honest, you are the one who is coming off a bit offensive to me. It seems that you see the world in black and white: your posts most of the time read as if it's either you do it the correct way (your way) or the stupid way.
                                 Fair enough.
                                 Reread my posts, and you'll find the language never once exclusive: I never once claim there is only one way.
                                 I claim there are infinitely better ways than doing things straight in the VFB, and I explain at least some of the basic facts and reasons behind such workflows.
                                 I also warn, for the millionth time, that the color mapping should be left untouched, ten years after LWF was well laid out and understood, because the approach is non-physical and wastes said exact HDR data in irrecoverable ways.
                                 Nowhere do I speak of personal preference, nowhere do I judge anyone's workflow (even in your case, where you offer a specific case to counter a general workflow approach, I ask), nowhere do I claim what I say is unshakable truth (in fact, google away, and please show me where I got stuff wrong, for surely I did!).

                                 If that offended you, I can do little to change it, as it's in the past, but I would surely love to hear exactly what it is you found offensive, so as to be able to formulate at least an excuse for it.
                                Lele
                                Trouble Stirrer in RnD @ Chaos
                                ----------------------
                                emanuele.lecchi@chaos.com

                                Disclaimer:
                                The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.

