
Feature Suggestion - Bucket size override for slower materials


  • #1

    Hi guys,

    While watching a slow render, I had a thought. I do architectural scenes, and I know in advance that most of them will finish with a few buckets taking a long time. Dynamic splitting helps, but not nearly enough in most of my scenes (I'm using DR with about 50 cores).

    Most of my architectural scenes have large fast areas in the frame (ceilings, bare walls, etc.) but usually a few killer items that leave only a fraction of the render farm busy at the end of a frame.

    Would it be possible to add a material/object override that tells V-Ray to prioritise it and/or subdivide the set bucket size by 2 or 4?

    If I could set the tough materials that I know will be really slow to subdivide further, it would really help. On a high-res still with DR, I know I could shave 20% off the render time if I could tell V-Ray to start with the glass chandelier...
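
    The override described above can be sketched as a bucket-planning pass that runs before rendering. This is purely illustrative Python under invented assumptions: `SceneObject`, `Bucket`, `plan_buckets`, and the `touches()` screen-space overlap test are hypothetical names, not part of any actual V-Ray API.

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    priority: int = 0     # higher = render its buckets first
    subdivide: int = 1    # split factor for buckets touching it (1, 2, or 4)

@dataclass
class Bucket:
    x: int
    y: int
    size: int
    priority: int = 0

def plan_buckets(width, height, base_size, objects, touches):
    """Build the bucket list, splitting and prioritising buckets that
    touch flagged objects. `touches(bucket, obj)` stands in for a real
    screen-space overlap test against the object's projection."""
    buckets = []
    for y in range(0, height, base_size):
        for x in range(0, width, base_size):
            b = Bucket(x, y, base_size)
            hits = [o for o in objects if touches(b, o)]
            factor = max((o.subdivide for o in hits), default=1)
            prio = max((o.priority for o in hits), default=0)
            step = base_size // factor
            # A 64 px bucket with factor 4 becomes sixteen 16 px buckets.
            for sy in range(b.y, b.y + base_size, step):
                for sx in range(b.x, b.x + base_size, step):
                    buckets.append(Bucket(sx, sy, step, prio))
    # High-priority (known-slow) buckets go first, so the chandelier
    # starts rendering while the farm is still fully loaded.
    buckets.sort(key=lambda b: -b.priority)
    return buckets
```

    For a 128x128 frame with 64 px base buckets and a chandelier flagged with `subdivide=4` in the top-left corner, this yields sixteen small high-priority buckets plus three untouched large ones.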

  • #2
    Hi,

    I will send your request to our developers for consideration.

    Actually, I see that we already have a similar request in our system, and I'm going to update it.
    Thank you very much for your feedback.
    Tashko Zashev | chaos.com
    Chaos Support Representative | contact us



    • #3
      There are smarter ways:
      - use the GI sample density to define the bucket size: many samples in one area = small buckets
      - or polygon density
      __________________________________
      - most powerful render farm in the world -
      RebusFarm --> 1450 nodes ! --> 2.900 CPU !! --> 20.000 cores !!!
      just 2.9 to 1.2 cents per GHz hour --> www.rebusfarm.net
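
      The density-driven sizing suggested above can be sketched roughly: halve the bucket size each time the local sample density doubles past a baseline. The baseline of 1.0 and the size limits are invented for illustration; nothing here reflects how V-Ray actually sizes buckets.

```python
def bucket_size_for(density, max_size=64, min_size=8):
    """Map a local sample-density estimate (e.g. from a GI prepass) to a
    bucket size: dense regions get small buckets, sparse regions large
    ones. Density 1.0 is an assumed baseline for the full-size bucket."""
    size = max_size
    d = density
    while d > 1.0 and size > min_size:
        size //= 2
        d /= 2.0
    return size
```

      A planner could evaluate this per region of the frame, so a bare wall keeps 64 px buckets while a caustic-heavy corner drops to 8 px.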



      • #4
        I'd like to see dynamic splitting extended so that, when there are enough free cores, a block that is already rendering gets subdivided and processed by other cores in parallel, taking the result from whichever copy finishes first, whether that is the original single core working on the large block or the two or four blocks it gets divided into. This could also apply to Distributed Rendering, where it would actually be much more useful. Even if the remaining blocks are not dynamically split, I would like to see free cores from available machines put towards the same bucket, taking the result from whichever machine finishes it first.

        The situation I run into a lot with DR test renders is that I have to either disable slower render nodes or use very small bucket sizes (<16) that prove inefficient in general. Otherwise I end up with a frame sitting and waiting on one or two buckets being rendered by the slowest node in the bunch (the dreaded "Last Block Syndrome"). Instead, all the other nodes sitting idle should compete to finish those blocks as quickly as possible... Basically, no machine, and ideally no core, should EVER be idle until the frame is done.
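
        The "no idle cores" policy above can be sketched as a small scheduling helper: once the queue is empty, an idle worker duplicates whichever in-flight bucket is projected to finish last, and the earlier copy wins. All names and numbers are hypothetical; V-Ray exposes no such scheduler hook.

```python
def duplicate_bucket(inflight, idle_speed, now):
    """Speculative duplication of an in-flight bucket.

    inflight: dict mapping bucket name -> (remaining_work, projected_finish)
    idle_speed: throughput of the newly idle worker (work units / time)
    now: current time

    Picks the bucket projected to finish last (the likely "last block"),
    starts a duplicate copy on the idle worker, and returns the bucket
    name and its new projected finish: the earlier of the two copies.
    """
    bucket = max(inflight, key=lambda b: inflight[b][1])
    work, old_finish = inflight[bucket]
    new_finish = min(old_finish, now + work / idle_speed)
    return bucket, new_finish
```

        For example, if a slow node would finish bucket A at t=20 but a fast node going idle at t=5 can redo its remaining work in 5 time units, the bucket's projected finish drops to t=10 and the slow copy is simply discarded.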



        • #5
          Like Tashko mentioned above, we already have this in our system, and it will be developed for a future version.
          Until then, the best approach is to use a smaller bucket size. It is true that this leads to some inefficiency, but it is still better than excluding slower machines from the rendering, or having one or two buckets working while all the others do nothing.
          Svetlozar Draganov | Senior Manager 3D Support | contact us
          Chaos & Enscape & Cylindo are now one!

