So you need to render twice and then go through the denoiser process manually. I wonder whether we really save time in the end. But once the whole thing is automated maybe we will save a bit; my question is how much?
Denoiser
-
__________________________________________
www.strob.net
Explosion & smoke I did with PhoenixFD
Little Antman
See Iron Baby and other of my models on Turbosquid!
Some RnD involving PhoenixFD
-
Originally posted by jstrob:
So you need to render twice and then go through the denoiser process manually. I wonder whether we really save time in the end. But once the whole thing is automated maybe we will save a bit; my question is how much?
Best regards,
Vlado
I only act like I know everything, Rogers.
Comment
-
Hi All,
First of all, the Maya Python script is the GUI way of rendering twice and automatically calling the filter: after the AOVs are generated, the filter consumes them to produce the denoised output. The Python script is exposed to the user and provides a starting point; the filter itself is an executable that is called at the end of the script. This is the easiest automation for the Maya/V-Ray environment. We will have a similar script for the 3ds Max/V-Ray environment as well.
Second, the two renders are done at SPP/2 each, where the SPP (samples per pixel) is assumed to be set to a low value by the user (32 or 64), so you also save time by starting from a low sampling size.
Third, if you want to automate things your own way and generate the two sets of AOVs with your own script, there is a standalone version of the Altus filter that takes command-line arguments (Linux or Windows) and produces the desired output. This can be integrated into any of your workflows/automation.
Hope this helps. We appreciate any feedback on improving this filter plugin architecture.
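As a concrete sketch of the flow the script automates: render the scene twice at SPP/2 with different noise seeds, then hand the pair to the standalone Altus executable. Only the `altus --config` invocation comes from this thread; the renderer flag names (`-sceneFile`, `-spp`, `-seed`, `-imgFile`) and the seed values are illustrative assumptions, not documented options:

```python
def denoise_pipeline(scene, spp_total=64, render_cmd="vray"):
    """Build the two half-sample render commands plus the Altus filter
    call. Flag names other than altus --config are assumptions."""
    passes = []
    for i, seed in enumerate((12345, 67890)):  # two different noise seeds
        passes.append([
            render_cmd,
            f"-sceneFile={scene}.ma",
            f"-spp={spp_total // 2}",      # SPP/2 samples per pass
            f"-seed={seed}",
            f"-imgFile={scene}_b{i}.exr",  # the b0 / b1 outputs
        ])
    filter_cmd = ["altus", "--config", "altus.cfg"]
    return passes, filter_cmd
```

In a real pipeline each command would be executed with `subprocess.run()`, and the filter only after both render passes have finished.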
Comment
-
It might be an OpenCL version issue. Can you please send your version and OS information, along with a screenshot of the crash, to support@innobright.com?
Comment
-
The current GUI is not "production friendly". We can't control anything, because the script starts the renders itself. No Deadline environment, nothing.
So I am hesitating about rewriting the whole Maya plugin...
If I do, I will:
- remove the installer and ship regular Maya files (plugin/icons/shelf/script)
- add a button that creates the proper render elements in the scene. Currently the script generates them on the fly and can clash with existing render elements, and we have no control over it. Alternatively I could do this through Python scene access, which would be completely clean.
- generate RENDER LAYERS to render the two images. With that, we can then do whatever we want afterwards (send to Deadline, send to Maya batch...)
- create a quick Qt UI to generate the final image
Simple.
Last edited by bigbossfr; 28-10-2015, 05:47 PM.
Comment
-
It makes sense, but it would be much easier for users if that could be automated in some way. I also read that the RenderMan denoiser uses inter-frame information (the frame before and the frame after) to denoise the images, which I believe avoids rendering the sequence twice; how they handle a moving camera is a mystery to me.
Comment
-
Originally posted by bigbossfr:
The current GUI is not "production friendly". We can't control anything, because the script starts the renders itself. No Deadline environment, nothing.
So I am hesitating about rewriting the whole Maya plugin...
If I do, I will:
- remove the installer and ship regular Maya files (plugin/icons/shelf/script)
- add a button that creates the proper render elements in the scene. Currently the script generates them on the fly and can clash with existing render elements, and we have no control over it. Alternatively I could do this through Python scene access, which would be completely clean.
- generate RENDER LAYERS to render the two images. With that, we can then do whatever we want afterwards (send to Deadline, send to Maya batch...)
- create a quick Qt UI to generate the final image
Simple.
The purpose of the Start Renders button in the script is to do single-frame renders: given how long an animation takes, rendering it on a single machine is, to say the least, not best practice. Otherwise, the Altus Scenes Export button will create the files for rendering, which you can then submit to a system such as Deadline, Smedge, Muster, Rush, etc. We did not build integration with any single farm-management package because there are many options available, and scene export was the better choice for flexibility.
If you want to generate the AOVs without using the script, the AOVs needed for each image to filter are as follows:
Beauty: b0 (seed x) and b1 (seed y)
BumpNormals: b0 (seed x) and b1 (seed y)
WorldPosition: b0 (seed x) and b1 (seed y)
MatteShadow: b0 (seed x) and b1 (seed y)
DiffuseAlbedo: b0 (seed x) and b1 (seed y)
Caustics: b0 (seed x) and b1 (seed y) (scene-specific, so not always necessary)
These need to be generated for each render layer so that the filter can denoise the images properly. For example, if you are rendering car_scene_01.ma with the layers:
car_layer
bg_layer
tire_smoke_layer
then all of the AOVs and inputs must be generated for each layer separately.
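To make the bookkeeping concrete, here is a small helper that enumerates every AOV image the filter would expect for such a scene. The file-naming scheme is purely an assumption for illustration; only the AOV set, the b0/b1 passes, and the per-layer requirement come from the post above:

```python
AOVS = ["beauty", "bumpNormals", "worldPosition",
        "matteShadow", "diffuseAlbedo", "caustics"]  # caustics: scene-specific

def altus_inputs(scene, layers, seeds=("x", "y")):
    """List the images needed: one per AOV, per render layer,
    per half-sample pass (b0/b1), each pass with its own seed."""
    files = []
    for layer in layers:
        for b, seed in enumerate(seeds):
            for aov in AOVS:
                files.append(f"{scene}_{layer}_{aov}_b{b}_seed_{seed}.exr")
    return files

inputs = altus_inputs("car_scene_01",
                      ["car_layer", "bg_layer", "tire_smoke_layer"])
# 3 layers x 2 passes x 6 AOVs = 36 images to produce before filtering
```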
To add, altus.exe always runs directly from the command line; the Python script is a GUI that generates the AOVs needed for command-line interaction with altus.exe.
If you type altus --help in a cmd window, you will get the list of required inputs and the flags to use. The --config flag allows you to submit an altus.cfg file containing the input arguments. This also allows wrappers to be written for render-farm software such as Deadline, which can execute a shell script calling altus.exe as a dependency after a set of renders has completed.
Last edited by innobright; 28-10-2015, 09:50 PM.
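Since --config is the only flag documented here, a farm-wrapper dependency job reduces to assembling and running that one call once the renders finish. A minimal sketch (the config filename and everything besides the --config flag are assumptions):

```python
import subprocess

def altus_command(config_path="altus.cfg"):
    """Assemble the standalone filter call; --config is the only
    flag taken from this thread."""
    return ["altus", "--config", config_path]

def run_after_renders(config_path="altus.cfg"):
    """What a Deadline/Smedge dependency job would execute once all
    render tasks complete (illustrative only)."""
    return subprocess.run(altus_command(config_path), check=True)
```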
Comment