Various requests and observations.

  • Various requests and observations.

    Hello,
    Here's a list of requests and workflow observations I made while producing end-product architecture visualizations in Rhino + V-Ray.

    Here are some small things about V-Ray in Rhino that would make my workflow a lot easier:
    • Emissive materials don't get proper transparency when I use a PNG texture with an alpha channel in the color slot. I export all background plates as PNGs with alpha channels for a correct preview in the viewport, and I manually create black-and-white alpha masks for them, which I link to the Transparency slot. I would like the transparency to be read from the alpha channel automatically.
    • Camera film size is not exposed in the Rhino viewport settings. It is possible to circumvent this by calculating the relative focal length, but I would appreciate a way to simply set the desired film size, to get the correct FOV, vignetting and DOF.
    • When building materials I drag and drop every single texture into the correct slot. A way of importing multiple textures into their respective slots would save me hours of manual labour. Similar features are available in Blender as an add-on, or in Rhino with native PBR materials. Ideally this would use simple pseudo-regex matching to sort the textures correctly (see the small sketch after this list).
    • The tool to convert .vrscene files into Rhino geometry and materials is great, but it fails on multi-materials and some more complex material setups.
    • Better control over the files exported from the VFB. The way this works in the Corona VFB is perfect: for example, I can set the bit depth and whether the alpha channel is saved.
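
    As a purely illustrative sketch of the pseudo-regex matching mentioned in the texture-import point above, this is roughly the logic I have in mind; the slot names and filename patterns are just assumptions about a naming convention, not any existing V-Ray API:

        import os
        import re

        # Hypothetical slot -> filename-pattern table; adjust to your own naming convention.
        SLOT_PATTERNS = {
            "diffuse":   re.compile(r"(albedo|diff|basecolor)", re.I),
            "roughness": re.compile(r"(rough|gloss)", re.I),
            "normal":    re.compile(r"(normal|nrm)", re.I),
            "bump":      re.compile(r"(bump|height|disp)", re.I),
        }

        def sort_textures(folder):
            """Group texture files by the material slot their filename suggests."""
            slots = {}
            for name in os.listdir(folder):
                base, ext = os.path.splitext(name)
                if ext.lower() not in (".png", ".jpg", ".jpeg", ".tif", ".exr"):
                    continue
                for slot, pattern in SLOT_PATTERNS.items():
                    if pattern.search(base):
                        slots.setdefault(slot, []).append(os.path.join(folder, name))
                        break
            return slots
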
    Here are some requests, more about V-Ray itself, that would greatly benefit my workflow:
    • At the moment vrscenes are loaded at render start and are not updated during live rendering. This makes my layout workflow much more difficult, as I basically need to restart rendering every two minutes; interactive rendering becomes useful only for lighting tests and view finding. I would like vrscene instances to be updated live, or alternatively to have an option to update their transforms without reloading the whole scene. With my workflow, instance transforms are the only thing I need to update live.
    • If vrscene instances were supported in Vantage, that would also be great!
    • Speaking of vrscenes: when there are many instances, the Bitmap Buffer Cleanup step at the end of rendering takes really long, which makes restarting the scene a very cumbersome process. I know this issue has been discussed already; I just wanted to touch on it.
    • Sometimes I have a site photograph into which I need to render the building. In such cases I make a very simple scene matching the camera and then project the photo onto it. Rhino does not have a way to generate UV mapping via camera projection, so I'm using Environment - Screen mapping. This gives me nice reflections and GI on my building, but at the same time I get a strange ghosting effect on the building itself (where the texture should not be visible), as if there were a 10% opacity overlay of the projected texture over the entire image. Maybe I'm doing something wrong, or the Environment - Screen mapping works differently than I understand. In any case, integrating a rendering into a projected photo is a common archviz scenario and I don't know how best to approach it with my workflow.
    Finally, here are some ideas and thoughts about what would be great to have in V-Ray for Rhino in the future.
    • Adaptive resolution zoom. When zooming in the VFB, the total preview resolution stays fixed but only a portion of the image is rendered. This feature exists in Blender. This way the user can get a good preview of a detail without having to render a tiny region of a huge image.
    • Persistent export. A way to maintain as much of the scene as possible between render starts. This would speed up exporting and cleanup at the cost of higher memory consumption. This feature also exists in Blender.
    • Better integration between V-Ray Scatter and RhinoNature. I use RhinoNature in most of my scenes. It is not perfect, but it has some key features like stacking masks, clustering, falloffs, per-instance modification (very limited, but it's there) and so on. The built-in scatter is useful only for very basic situations. I don't know what your plans for developing Scatter are, but from my point of view a lot of features are missing (some of which are already in RhinoNature). On the other hand, RhinoNature has many rough edges and could be faster and better integrated, let alone the infrequent updates. As an end user I would benefit from a single tool that is both feature-complete and technically reliable.
    • Asset placement tool. Cosmos assets are great and fast to use; what is missing is an intuitive placement tool. I could imagine this as part of Cosmos, but also able to insert custom assets (from the browser or from blocks). I'm looking at solutions present in recent C4D and Unreal Engine: dragging to scale and rotate, snapping to base geometry, asset painting...
    • Rhino has a great framework/library for simple scripts in Python - rhinoscriptsyntax. It is all based on global/static functions, wraps many RhinoCommon features and is super easy to use. A similar or compatible way of interacting with V-Ray would be great, especially for batch management of materials (see the small rhinoscriptsyntax example right after this list).
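
    For illustration, this is the kind of one-off batch edit I mean, done with plain rhinoscriptsyntax on Rhino materials (the "glass" name filter and the colour are arbitrary examples); a comparable set of global functions for V-Ray materials is what I'm asking for:

        import rhinoscriptsyntax as rs

        # Tint every Rhino material whose name contains "glass".
        for obj_id in rs.AllObjects():
            idx = rs.ObjectMaterialIndex(obj_id)
            if idx == -1:
                continue  # object has no material assigned
            name = rs.MaterialName(idx)
            if name and "glass" in name.lower():
                rs.MaterialColor(idx, (200, 230, 255))
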
    I hope this makes sense. If there's anything I can elaborate on further, or any sample scenes I can prepare, please let me know.

    Robert

  • #2
    Hi, I'll try to address some of your concerns.

    camera film size is not exposed in Rhino viewport settings. It is possible to circumvent this with calculating the relative focal length but I would appreciate a way to simply set the desired film size, to get correct FOV, vignetting and DOF.
    The film size is fixed at 35mm (36/24mm). This is the setting in Rhino as well, and as far as I know there is no way to change that. That is the reason it is not exposed anywhere in the V-Ray UI - to match the Rhino viewport.
    You can change the setting in V-Ray using a script, but a lot of other things are hardcoded based on the fixed film size, so it is almost guaranteed that you will never get the DoF and vignetting right if you change it. You can share some more details on what is wrong with the FOV, DoF and vignetting on your side and we can work it out.
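
    For reference, the relative-focal-length workaround mentioned above is a simple scaling of the focal length onto the fixed 36mm gate; a quick sketch (the example numbers are arbitrary):

        import math

        def equivalent_focal_length(f_real_mm, film_width_mm, fixed_gate_mm=36.0):
            # Focal length that reproduces the same horizontal FOV on the fixed gate.
            return f_real_mm * fixed_gate_mm / film_width_mm

        def horizontal_fov_deg(focal_mm, film_width_mm=36.0):
            # Usual pinhole relation between film width, focal length and FOV.
            return math.degrees(2.0 * math.atan(film_width_mm / (2.0 * focal_mm)))

        # Example: a 24mm lens on a 23.6mm (APS-C) gate ~ 36.6mm on the fixed 36mm gate,
        # so horizontal_fov_deg(36.6) matches horizontal_fov_deg(24.0, 23.6) (~52.4 deg).
        print(equivalent_focal_length(24.0, 23.6))
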

    the tool to convert .vrscene into Rhino geometry and materials is great, but it fails on multimaterial and some more complex material setups.
    We know; it is under continuous development, and new features and fixes are shipped with every release.

    at the moment vrscenes are loaded on the render start and not updated during live rendering...
    This is not a good idea in principle. V-Ray does not monitor any file for changes, be it textures, meshes, scenes or anything else. The problem is that such monitoring happens at the OS level and is resource-intensive, which is especially problematic for network locations and web resources.

    If vrscene instances were supported in Vantage it would be also great!
    You should ask about that on the Vantage forum.

    ... when there are many instances at the end of rendering the Bitmap Buffer Cleanup step takes really long...
    This is a known issue; it is being looked into.

    Sometimes I have a site photograph ...
    I'd like to see an example with this problem.

    adaptive resolution zoom. When zooming in vfb the total preview resolution stays fixed but only a portion of the image is rendered.
    There is region rendering for that, as well as the option to track mouse movement.

    persistent export...
    I'd like it too, but that is not possible in Rhino. Even when moving an object, Rhino creates a brand-new copy of the old object. Blender is a 3D modelling application and Cycles is its renderer, so the persistence exists in Blender and Cycles takes advantage of it. For this to work in V-Ray, it would have to exist in Rhino in the first place.

    Better integration between vray scatter and RhinoNature
    There is no integration between these two tools. Both RhinoNature and Scatter output transformation lists, and they are rendered exactly the same way in V-Ray. RhinoNature has much more sophisticated tools for fine-controlling the scattering. Chaos Scatter is under active development and is getting more and more controls over time. Mind that RhinoNature is a 3rd-party tool, and we are neither competing with it nor matching its functionality exactly.

    Asset placement tool.
    If you're only referring to placing Cosmos assets, the only thing that could be improved is face/surface detection, and that is only if Rhino permits it in an easy way (I need to check that).
    "Dragging to scale and rotate" - that is a good point; we can think about it.
    "Snapping to base geometry" - I believe that is provided by Rhino already.
    "Asset painting" - an example would be good.

    Rhino has a great framework/library for simple scripts in python - rhinoscriptsyntax...
    There is a V-Ray wrapper for .NET/Python interoperability called rhVRay. All functions that are currently available in the V-Ray for Rhino Plugin API are available through it. You can check the scripting documentation and look at the "Rhino Python 2" examples.



    • #3
      Hello Nikolay, thank you for your response!

      You can share some more details on what is wrong with FOV
      It is not that something is wrong with the camera; what I would like is to replicate a real-life camera setup: input all of its parameters, including film size, and get comparable optics in the rendering. I know it is possible to work around this, but it could be more convenient and straightforward.
      One feature that would be great in this area is lens shift (not tilt), but again this is probably linked to the Rhino viewport.

      We know, it is under continuous development, new features and fixes are shipped with every release
      Looking forward to this! I have most of my assets in .vrscene files. As soon as the importer is more stable I'll probably be batch converting them to .vrmesh.

      vrscenes are loaded on the render start and not updated during live rendering
      I don't mean tracking changes within the vrscene file itself, but rather moving the vrscene object in Rhino. Let's say I have a tree imported as a vrscene: when I move it around the scene while running an interactive render, it only shows up in the new position once I restart the render.

      Sometimes I have a site photograph
      - I'll prepare a sample scene and start a new thread in Issues.

      There is region render for that, as well as track mouse movement.
      What I mean is different from region rendering. In 3ds Max it's called "2D Pan and Zoom". I would like to zoom and pan in 2D within the VFB, but with a fixed apparent resolution. Here's a post on the Corona forum about this feature: https://forum.corona-renderer.com/in...?topic=21669.0
      The point is that you could have a low-resolution interactive rendering and still be able to zoom in to see details.

      The Chaos Scatter is under active development
      I understand, and I hope there are more features coming to Chaos Scatter. From a Rhino user's perspective there is still no ideal way to do scattering. What would make Chaos Scatter a lot more useful would be the ability to scatter vrscene files, and a manual mode for granular control of the instances.

      "asset painting." - an example would be good
      About the asset placement: here is a video from Maxon showing the features that exist in this area in C4D: https://youtu.be/6faVon1IN0Q There's a whole range of solutions they offer.

      My point here is that scene layout in Rhino is a really laborious and somewhat old-fashioned process, so anything that makes it easier would be a great benefit, especially since the Rhino + V-Ray setup is very close to being a full-fledged solution for archviz.
      Thank you again!



      • #4
        Hi,
        let me do a second pass over your response

        it is not that something is wrong with the camera, but what I would like to be able to do would be to replicate a real life camera setup. Input all of its parameters including film size and get comparable optics in rendering. I know it is possible to work around this, but it could be more convenient and straightforward....
        There are 3 "camera" plugins in V-Ray: RenderView, SettingsCamera and CameraPhysical. They work together as a group to produce the output. Most of what you need is in CameraPhysical, and everything there can be set using a script - even film_width and lens_shift. However, the sheer number of parameters to deal with is enormous - 90+ different parameters in total - and most of them are quite obscure for the average user. That's why they are not exposed in the UI. Furthermore, there is overlap between the parameters of CameraPhysical and SettingsCamera.
        So if you're a camera expert and know what you're doing, you can set every parameter to whatever you like (consult the V-Ray docs for parameter values). However, when you change any aspect of the camera, V-Ray will overwrite some of the parameters with what's in the Rhino camera.
        I can provide a list of what is available and what will get overwritten by the Rhino camera, if you like.
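
        Purely as a sketch of what such a script could look like (the rhVRay attribute paths and the transaction syntax below are assumptions and should be checked against the "Rhino Python" examples in the scripting documentation; the parameter names film_width, lens_shift and horizontal_shift are actual CameraPhysical parameters):

            import rhVRay as vray  # wrapper module mentioned earlier; exact API may differ

            # Assumed access pattern: apply the parameter changes inside one transaction.
            with vray.Scene.Transaction():
                cam = vray.Scene.CameraPhysical  # assumed plugin accessor
                cam.film_width = 23.6            # mm
                cam.lens_shift = 0.0
                cam.horizontal_shift = 0.1
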

        I don't mean tracking changes within the vrscene file itself but rather moving the vrscene object in rhino...
        I understand; unfortunately this is a limitation that is not yet resolved (despite several attempts). VRScene files are not really interactive-friendly by design.

        what I mean is different than region....
        The 2D pan & zoom feature is not supported in V-Ray for Max. Hence it is not available in the VFB either. I need to bring that to the PM for consideration.

        About the asset placement...
        In Rhino we do simple placement with a point and the world Z axis, because this is how Rhino places blocks. There are the OrientOnSrf and FlowAlongSrf commands in Rhino which you can use for more advanced placing, but I don't see a point in reinventing the wheel and providing something that is already available in Rhino.
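
        For what it's worth, the same kind of "orient to the base geometry" placement can already be scripted with plain rhinoscriptsyntax; a minimal sketch (the block name, surface id and pick point are placeholders):

            import rhinoscriptsyntax as rs

            def place_block_on_surface(block_name, surface_id, near_point):
                # Closest point on the surface and the tangent frame at that point.
                uv = rs.SurfaceClosestPoint(surface_id, near_point)
                frame = rs.SurfaceFrame(surface_id, uv)
                # Insert the block at the world origin, then map it from the world XY
                # plane onto the surface frame so it sits on the surface.
                instance = rs.InsertBlock(block_name, (0, 0, 0))
                xform = rs.XformRotation1(rs.WorldXYPlane(), frame)
                return rs.TransformObject(instance, xform)
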



        • #5
          Thank you!

          The vrscene interactive transformation would be a great step forward for my workflow. I hope to see it implemented at some point!

          For the advanced camera settings, I did some experiments and horizontal_shift works great! It does exactly what I need. There's no preview in Rhino, but I can live with that. However, when I change certain settings like "film_width", "focal_length", "distortion" or "optical_vignetting", they only take effect as long as I run the Transaction during interactive rendering. When I run a production render they are overwritten with the Rhino or default values. Is there a way to have them apply to the production render?
