Denoiser

  • #76
    You guys are the lucky ones; I can't even access the download page on their site.

    [Attached screenshot: SNAG-0224.jpg]

    Stan
    3LP Team

    Comment


    • #77
      Originally posted by jstrob View Post
      Hi Laserschwert,

      Do you have an example of a before/after video processed with avisynth and MC_Spuds?
      Sorry for the late reply... here's a before/after comparison: http://i.imgur.com/eGOtE38.png

      The important thing is that you have to have "moving" noise in the image (so no "locked noise pattern" activated), so that the difference from one frame to the next (or over the course of several frames) lets the algorithm identify what's noise and what isn't. Depending on the amount or nature of the material, I'll either denoise the renderings first, before compositing, or I'll just denoise the final composite.
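
A toy sketch of why the noise has to move (the 1-D "image" and the seeded-noise render function are invented for this illustration): with a different noise seed per frame, the frame difference is pure noise, while a locked pattern is invisible to any temporal comparison.

```python
import random

def render(scene, seed):
    """Fake 1-D 'render': the scene values plus seeded per-pixel noise."""
    rng = random.Random(seed)
    return [v + rng.gauss(0, 0.1) for v in scene]

scene = [0.2, 0.5, 0.8, 0.5]

# Moving noise: two seeds, so the frame-to-frame difference is pure noise,
# which is exactly what lets a temporal denoiser separate signal from noise.
a, b = render(scene, seed=1), render(scene, seed=2)
moving_diff = max(abs(x - y) for x, y in zip(a, b))

# Locked noise pattern: same seed, so the frames are identical and the
# difference is zero; the noise looks like signal to the algorithm.
c, d = render(scene, seed=1), render(scene, seed=1)
locked_diff = max(abs(x - y) for x, y in zip(c, d))
```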
      Last edited by Laserschwert; 29-10-2015, 02:44 AM.

      Comment


      • #78
        I've got an animation that will be rendering in a couple of days, so I'd be tempted to see how this performs. I would need to be able to download the software, though.
        Stan
        3LP Team

        Comment


        • #79
          Originally posted by innobright View Post
          The current Maya Python script has an option that generates the temp .mb files for submission to a render farm such as Deadline. It's the button called Altus Scenes Export.

          The purpose of the Start Renders button is to do single-frame renders, because rendering an animation on a single machine is, to say the least, not best practice. Otherwise, the Altus Scenes Export button will create the files for rendering that you can submit to a system such as Deadline, Smedge, Muster, Rush, etc. We did not build in integration with a single farming package because there are many options available, and scene export was the more flexible choice.

          If you want to generate the AOVs without using the script, the AOVs needed for each image to filter are as follows:

          Beauty b0 (seed x) and b1 (seed y)
          BumpNormals b0 (seed x) and b1 (seed y)
          worldPosition b0 (seed x) and b1 (seed y)
          Matte shadow b0 (seed x) and b1 (seed y)
          diffuse albedo b0 (seed x) and b1 (seed y)
          caustics b0 (seed x) and b1 (seed y) (this is scene specific so not always necessary)

          These need to be generated for each render layer so that the filter can properly denoise the images.
          For example, if you are rendering car_scene_01.ma with the layers:

          car_layer
          bg_layer
          tire_smoke_layer

          The AOVs and inputs need to be generated for each layer separately.

          To add, altus.exe always runs directly from the command line; the Python script is a GUI that generates the AOVs needed for command-line interaction with altus.exe.
          If you type altus --help in a cmd window, you will get the list of needed inputs and what flags to use. The --config flag allows you to submit an altus.cfg file as the input args. This also allows wrappers to be written for render farm software such as Deadline that execute a shell script calling altus.exe as a dependency after a set of renders has completed.
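
To make the per-layer bookkeeping above concrete, here is a small Python sketch of the required inputs. The file-naming scheme and the .cfg paths are invented for the sketch; only the --config flag and the AOV/seed/layer lists come from the text (caustics is omitted since it is scene specific).

```python
from itertools import product

# AOVs listed above as required inputs for the filter.
AOVS = ["beauty", "bumpNormals", "worldPosition", "matteShadow", "diffuseAlbedo"]
SEEDS = ["b0", "b1"]  # b0 = seed x pass, b1 = seed y pass
layers = ["car_layer", "bg_layer", "tire_smoke_layer"]

# Every (layer, AOV, seed) combination needs its own image on disk.
inputs = {
    layer: [f"{layer}_{aov}_{seed}.exr" for aov, seed in product(AOVS, SEEDS)]
    for layer in layers
}

# One filter run per layer, driven by a config file; the per-layer .cfg
# file name is a placeholder, not something the tool generates itself.
commands = [f"altus --config {layer}.cfg" for layer in layers]
```

Each layer therefore needs 5 AOVs x 2 seeds = 10 images before its filter run can be submitted as a dependent job on the farm.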
          Originally posted by innobright View Post
          Otherwise the Altus Scenes Export button will create the files for rendering that you can submit to a system such as deadline, smedge, muster, rush, etc
          Generally in production we send the scene directly from Maya, for example. We don't export a Maya scene and then submit that exported scene; exporting a scene is the job of the Deadline submitter, not of Altus.
          Altus should be there to set up the scene, not to export it.
          That's why we just need buttons to:
          - create 2 render layers, X and Y, with overrides on the seed and other things
          - create the render elements
          - use the Altus standalone

          No more, no less.
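
The X/Y layer idea boils down to two setups that are identical except for the seed override. A toy dict-based sketch (the key names stand in for real render settings and are invented here):

```python
# Base render setup shared by both layers; "noise_seed" and "elements"
# are placeholder names for the real render-settings attributes.
base = {"noise_seed": 0, "elements": ["beauty", "worldPosition", "bumpNormals"]}

# Two render layers that differ ONLY in the seed override, so the two
# renders produce the same image content with independent noise patterns.
layer_x = {**base, "noise_seed": 1234}
layer_y = {**base, "noise_seed": 5678}
```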
          www.deex.info

          Comment


          • #80
            Another question: this denoiser re-creates a new beauty image, but in VFX we can't lose all the render elements in compositing.
            Can anyone explain how to use this denoiser to denoise the beauty plus all the render elements, please?
            www.deex.info

            Comment


            • #81
              Originally posted by bigbossfr View Post
              Generally in production we send the scene directly from Maya, for example. We don't export a Maya scene and then submit that exported scene; exporting a scene is the job of the Deadline submitter, not of Altus.
              Altus should be there to set up the scene, not to export it.
              That's why we just need buttons to:
              - create 2 render layers, X and Y, with overrides on the seed and other things
              - create the render elements
              - use the Altus standalone

              No more, no less.
              Our expectation was that a studio using package-specific submission tools would create the wrappers needed to integrate into their pipeline. As I previously mentioned, there are many different render-queuing packages (Deadline, Muster, Rush, Smedge, etc.) and we did not do an overall integration with all the possible options. Our software is a command-line tool, and the files saved by the Scene Export button can easily be opened in Maya and submitted using the integrated Deadline tools. The scripts we supplied are not meant to be a one-click studio solution; they were designed as a test environment for understanding the inputs Altus needs and for composing possibly more complex solutions for your specific needs.

              Comment


              • #82
                Nothing stops you from processing all the needed render elements as "beauties" and comping the denoised images.
                Maybe not from the Maya interface, but surely as a Python script; that should be well within the scope of the smallest of TD departments.
                The same goes for integrating the render and denoise jobs with whatever Deadline/Backburner/render-farm management solution a company has chosen.
                I think simply looking at the files installed provides the biggest of clues as to the intended pipeline usage, for THIS release at least...
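
A rough sketch of that "elements as beauties" idea. Everything here is a stand-in: the real denoiser is an external executable, so the denoise() function below is a toy box blur, and the element names and pixel values are invented.

```python
def denoise(image):
    """Toy 3-tap box blur standing in for one external denoiser run."""
    out = []
    for i in range(len(image)):
        nbhd = image[max(0, i - 1): i + 2]  # clamp the window at the edges
        out.append(sum(nbhd) / len(nbhd))
    return out

# Each render element is fed through the denoiser as if it were a beauty pass.
elements = {
    "diffuse":    [0.21, 0.49, 0.20],
    "reflection": [0.11, 0.08, 0.10],
    "gi":         [0.05, 0.12, 0.06],
}
denoised = {name: denoise(img) for name, img in elements.items()}

# Comp the denoised elements back together (a simple additive comp here),
# so compositing still gets per-element control instead of one baked beauty.
comp = [sum(vals) for vals in zip(*denoised.values())]
```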
                Lele
                Trouble Stirrer in RnD @ Chaos
                ----------------------
                emanuele.lecchi@chaos.com

                Disclaimer:
                The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.

                Comment


                • #83
                  Originally posted by Rens View Post
                  Those would be VRaySamplerInfo set to Point World, and Normal vector with bump set to world or cam, I don't know which is needed here.
                  To address this: the normals needed from a sampler info node are in camera space. This generates the forward-facing normals that are required.
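
For reference, taking a world-space normal into camera space is just a rotation by the camera's orientation. A minimal sketch (the camera basis below is made up for the example):

```python
def mat_vec(m, v):
    """Multiply a 3x3 row-major matrix by a 3-vector."""
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

# Hypothetical camera rotated 90 degrees about Y: the world +X axis maps
# to the camera's +Z axis in this (invented) convention.
cam_rotation = [[0, 0, -1],
                [0, 1,  0],
                [1, 0,  0]]

# A surface normal pointing along world +X becomes camera-space +Z,
# i.e. a forward-facing normal from the camera's point of view.
world_normal = [1, 0, 0]
cam_normal = mat_vec(cam_rotation, world_normal)
```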

                  Comment


                  • #84
                    Originally posted by innobright View Post
                    Our expectation was that a studio using package-specific submission tools would create the wrappers needed to integrate into their pipeline. As I previously mentioned, there are many different render-queuing packages (Deadline, Muster, Rush, Smedge, etc.) and we did not do an overall integration with all the possible options. Our software is a command-line tool, and the files saved by the Scene Export button can easily be opened in Maya and submitted using the integrated Deadline tools. The scripts we supplied are not meant to be a one-click studio solution; they were designed as a test environment for understanding the inputs Altus needs and for composing possibly more complex solutions for your specific needs.
                    It looks like you didn't get it...
                    I'm not asking you to create a Deadline submitter or anything like that; I'm asking for a more "logical" workflow. It seems you didn't read my message, because I did not ask you to create submitters for different render managers.
                    Currently, when we push the "render" button, you create render elements in the scene, so you are modifying the CURRENT scene; the state of the scene is not the same after your process.
                    At the same time, you are generating/exporting 2 scenes on the fly, with other modifications/edits that are not in the CURRENT scene.
                    That makes no sense in terms of workflow.

                    You have to choose between:
                    1) modifying the CURRENT scene, with the full setup (render elements, 2 seeds in different render layers...), after which the user USES the scene however they want (submit to Deadline, batch render...)
                    2) generating the 2 scenes on the fly and not adding render elements to the CURRENT scene

                    Currently this is a mix between modifying the current scene and generating/exporting some scenes on the fly.

                    That is why I said that, for better integration, I would personally choose option 1) and create a GUI to:
                    - generate the render elements in the current scene
                    - create the render layers with an override on the seed (X/Y)
                    - manage the output

                    With this, the user/studio can do whatever they want with the scene (because the scene is properly set up), like sending it to Deadline with their OWN tools or other tools (I never asked you to create submission tools).

                    Thank you,
                    Damien
                    Last edited by bigbossfr; 29-10-2015, 03:25 PM.
                    www.deex.info

                    Comment


                    • #85
                      Originally posted by bigbossfr View Post
                      It looks like you didn't get it...
                      I'm not asking you to create a Deadline submitter or anything like that; I'm asking for a more "logical" workflow. It seems you didn't read my message, because I did not ask you to create submitters for different render managers.
                      Currently, when we push the "render" button, you create render elements in the scene, so you are modifying the CURRENT scene; the state of the scene is not the same after your process.
                      At the same time, you are generating/exporting 2 scenes on the fly, with other modifications/edits that are not in the CURRENT scene.
                      That makes no sense in terms of workflow.

                      You have to choose between:
                      1) modifying the CURRENT scene, with the full setup (render elements, 2 seeds in different render layers...), after which the user USES the scene however they want (submit to Deadline, batch render...)
                      2) generating the 2 scenes on the fly and not adding render elements to the CURRENT scene

                      Currently this is a mix between modifying the current scene and generating/exporting some scenes on the fly.

                      That is why I said that, for better integration, I would personally choose option 1) and create a GUI to:
                      - generate the render elements in the current scene
                      - create the render layers with an override on the seed (X/Y)
                      - manage the output

                      With this, the user/studio can do whatever they want with the scene (because the scene is properly set up), like sending it to Deadline with their OWN tools or other tools (I never asked you to create submission tools).

                      Thank you,
                      Damien
                      Our general experience so far is that most users do not want direct edits and alterations to their primary scene files unless necessary for their workflow, and in that case we would want the users to write that script themselves, since it would be specific to their needs and pipeline. If you are seeing AOVs added to your original scene file by our script, please email our support at support@innobright.com with a description and some screenshots of the problem so that we can debug our code. Please also include your list of concerns so that we can start a change log for the script.
                      Last edited by innobright; 29-10-2015, 04:48 PM.

                      Comment


                      • #86
                        I'm pretty sure what we need is for Vlado to write the code for Altus to be integrated into V-Ray, not Innobright. It would be ideal to have it as a checkbox, just like enabling Embree: everything is automated, and your final render is the final render with Altus denoising.

                        Comment


                        • #87
                          Well, even if the executable accepted some simple flat form of floating-point RGB values as memory buffers, much could be done through scripting alone without the user (and their scene) being any the wiser.
                          However, let's keep this positive: we had nothing quite like this technology only yesterday, so we can rest assured it will keep growing, particularly in the (comparatively) simple integration part.

                          With what is there right now, honestly, I can't see all the issues you are bringing forward, Damien, with changing the list of rendered AOVs (what's wrong with adding the five AOVs Altus needs, naming them with some discernible prefix, and removing them after the fact? The same goes for any other parameter one may have changed from the original scene. The save flag will have been broken, sure, but nothing will really have changed. This is what I'm doing on the Max side, for instance.), or with saving multiple scenes to disk.
                          Both seem to me to be VERY common practice in nigh any pipeline, be it for (over?)versioning or backup, or simply for farm submission (say, beauty and data passes saved out from a master scene, with wildly different settings and AOVs) and tracking.
                          In fact, come to think of it, I may offer the choice to save the scene before processing in Max, or save a scene per pass, or save the changed settings so as to be able to replicate the operation exactly, but hey, I think I'll leave that to adult and consenting users, willing to modify the skeleton script to suit their specific needs. :P
                          At first, at the very least.
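
The add-with-a-prefix-then-remove pattern is easy to sketch. Here a plain dict stands in for the scene's AOV list, and the prefix and AOV names are placeholders, not the real tool's naming:

```python
PREFIX = "altusTmp_"  # discernible prefix so added AOVs are easy to find

def add_denoise_aovs(scene_aovs, needed):
    """Add the temporary AOVs; the dict stands in for a real scene."""
    for name in needed:
        scene_aovs[PREFIX + name] = {"enabled": True}

def remove_denoise_aovs(scene_aovs):
    """Remove everything we added, identified purely by the prefix."""
    for name in [n for n in scene_aovs if n.startswith(PREFIX)]:
        del scene_aovs[name]

scene = {"diffuse": {"enabled": True}}  # the user's own AOVs
add_denoise_aovs(scene, ["worldPosition", "bumpNormals"])
# ... render both seed passes, run the denoiser ...
remove_denoise_aovs(scene)
# scene is now back to exactly the user's original AOV list
```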
                          Lele
                          Trouble Stirrer in RnD @ Chaos
                          ----------------------
                          emanuele.lecchi@chaos.com

                          Disclaimer:
                          The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.

                          Comment


                          • #88
                            Originally posted by ^Lele^ View Post
                            With what is there right now, honestly, I can't see all the issues you are bringing forward, Damien, with changing the list of rendered AOVs (what's wrong with adding the five AOVs Altus needs, naming them with some discernible prefix, and removing them after the fact?
                            In production you never modify a scene "on the fly" and then restore the original settings afterwards. Why?
                            1) Because you don't know exactly what the script is doing.
                            2) The script can completely break a scene if it crashes.
                            3) We customize Maya/V-Ray a lot, and the script can clash with existing tools.

                            Imagine, for example, that we disable render elements. The script will generate render elements, but because render elements are disabled ("disable all" in the render globals), none of them will be rendered and Altus will fail.
                            Now imagine the Altus script forces render elements on ("enable all") because it wants to generate them on the fly. But I, the user, intentionally (my choice) disabled render elements because I don't need to render them for this scene/render layer. Why should a script enable render elements and render all the elements I don't want? It will break my setup and render things I don't want (!).
                            Bad workflow.
                            And this is just one example...

                            The only good workflow, if you want to do everything "on the fly", is to use "Python scene access". It will never break your original scene and it will be "really" on the fly.

                            But if you want something solid, you don't modify the scene on the fly and restore the original settings afterwards.

                            I say this from experience (I have been doing pipeline work every day for many years). If you think my example (and this is just one) is wrong, I would be happy to learn how it is wrong.
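
The non-destructive alternative can be sketched without any scene API at all: write the extra setup into a copy of the exported scene, never into the original. The file names and the "scene" text content below are invented for the sketch:

```python
import os
import tempfile

# A toy exported scene file; this text content stands in for a real
# translated render scene, it is not any actual scene format.
workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "shot.vrscene")
with open(src, "w") as f:
    f.write("beauty enabled\n")

with open(src) as f:
    original = f.read()

# Non-destructive variant: append the denoiser's extra elements to a COPY
# for the farm, leaving the user's exported scene byte-for-byte untouched.
dst = src.replace(".vrscene", "_denoise.vrscene")
with open(dst, "w") as f:
    f.write(original + "worldPosition enabled\n")

with open(src) as f:
    untouched = (f.read() == original)
with open(dst) as f:
    dst_text = f.read()
```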
                            www.deex.info

                            Comment


                            • #89
                              Originally posted by bigbossfr View Post
                              In production you never modify a scene "on the fly" and then restore the original settings afterwards. Why?
                              1) Because you don't know exactly what the script is doing.
                              2) The script can completely break a scene if it crashes.
                              3) We customize Maya/V-Ray a lot, and the script can clash with existing tools.

                              Imagine, for example, that we disable render elements. The script will generate render elements, but because render elements are disabled ("disable all" in the render globals), none of them will be rendered and Altus will fail.
                              Now imagine the Altus script forces render elements on ("enable all") because it wants to generate them on the fly. But I, the user, intentionally (my choice) disabled render elements because I don't need to render them for this scene/render layer. Why should a script enable render elements and render all the elements I don't want? It will break my setup and render things I don't want (!).
                              Bad workflow.
                              And this is just one example...

                              The only good workflow, if you want to do everything "on the fly", is to use "Python scene access". It will never break your original scene and it will be "really" on the fly.

                              But if you want something solid, you don't modify the scene on the fly and restore the original settings afterwards.

                              I say this from experience (I have been doing pipeline work every day for many years). If you think my example (and this is just one) is wrong, I would be happy to learn how it is wrong.
                              Thank you for voicing your concerns and highlighting this corner case. Please compile all of your concerns and send the list, including any bugs you may have found, to support@innobright.com.
                              Last edited by innobright; 29-10-2015, 08:52 PM.

                              Comment


                              • #90
                                Scratch this, sorry

                                Thanks
                                Stan
                                Last edited by 3LP; 30-10-2015, 12:15 AM.
                                3LP Team

                                Comment
