GPU Hash Map issues

    We were in the middle of a project with some intense night scenes when the V-Ray Next Update 2 came out. It seemed like a godsend, as our scenes had random flicker from some lights and not others. We had been narrowing down which lights were causing issues when we saw that a hash map-based light cache had been added for GPU rendering and that it reduced flicker. Perfect, we thought. However... now I can't seem to bake a light cache at all. It gets all the way through and then throws a very uninformative "Fatal Rendering Error" and doesn't save the cache.

    Some background on the scene itself: it's many millions of polys (over 100 million when everything is set to "Maya Model" rather than "preview" or "bounding box"), most of them V-Ray Proxies referenced in. When I say it's intense, I mean we have multiple thousands of lights or light materials; again, most of these are instanced V-Ray Proxies. It's a large refinery scene at night, so we have tiny lights on the buildings, street lights, large projector-style lights highlighting the main area of interest, and a "sun" set up to act more like a moon, plus a dome light - I've attached an example image, before post work. Before this update we experimented and found that caching out the light cache at a decently high setting (12000 subdivs, 0.07 sample size, high depth, retrace, etc.) and using sufficiently high image sampler settings (16 min subdivs, 64 max, 0.02 or 0.01 noise threshold) got us acceptable render times and limited flicker. We were looking at 30-45 minutes per frame on our high-end machines. Not amazing, but doable. Saving out the light cache let us use the high settings without the per-frame time hit.
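
    (For reference, here's roughly how we push those values from a script rather than clicking through the Render Settings each time. It's a minimal Maya Python sketch against the vraySettings node; the attribute names and the cache path below are assumptions that can differ between V-Ray versions, so verify them with cmds.listAttr('vraySettings') before trusting it.)

```python
# Minimal sketch: push the light cache / image sampler values onto the
# vraySettings node. The attribute names and the save path are ASSUMPTIONS -
# verify them with cmds.listAttr('vraySettings') for your V-Ray version.
import maya.cmds as cmds

def set_vray_attr(attr, value):
    """Set vraySettings.<attr> only if that attribute actually exists."""
    if not cmds.objExists('vraySettings'):
        cmds.error('vraySettings node not found - switch the current renderer to V-Ray first')
    if not cmds.attributeQuery(attr, node='vraySettings', exists=True):
        cmds.warning('vraySettings.%s not found - check the attribute name' % attr)
        return
    if isinstance(value, str):
        cmds.setAttr('vraySettings.' + attr, value, type='string')
    else:
        cmds.setAttr('vraySettings.' + attr, value)

# Image sampler: 16 min / 64 max subdivs, 0.02 noise threshold.
set_vray_attr('dmcMinSubdivs', 16)    # assumed attribute name
set_vray_attr('dmcMaxSubdivs', 64)    # assumed attribute name
set_vray_attr('dmcThreshold', 0.02)   # assumed attribute name

# Light cache: high subdivs, small sample size, retrace, and save the result
# to disk so the cost is paid once for the whole camera path.
set_vray_attr('subdivs', 12000)       # LC subdivs (assumed attribute name)
set_vray_attr('sampleSize', 0.07)     # LC sample size (assumed attribute name)
set_vray_attr('retraceEnabled', 1)    # LC retrace (assumed attribute name)
set_vray_attr('autoSave', 1)          # auto-save the cache (assumed attribute name)
set_vray_attr('autoSaveFile', '/renders/caches/night_shot.vrlmap')  # hypothetical path
```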

    We can render on the CPU, flicker-free, but we are looking at 3.5-4 hours per frame with a light cache/IR map setup. Not especially doable. Definitely the worst-case scenario.

    We were able to render using the KD Tree method, but now we are stuck. Hash Map is working great in our interior and day scenes (15 lights max in those) but seems to just... choke at the end with these scenes. And DON'T get me started on the "Translating Geometry" issue between the day and night scenes (spoiler: it's much, much, much SLOWER in the day scenes, of all things). Considering a light cache takes 2.5 hours on our 16-core Threadripper, 128 GB RAM, RTX 2080 Ti machines, testing this has been a nightmare. Having it die at the very end of that, when we can see a light cache that looks finished but won't merge and save... gah.

    We would love any and all help, as we are at our wits' end. I'm at the point where I think I have to start turning off groups of lights, but my boss is beyond pissed because this is due entirely too soon for comfort and we have other clients that also need renders. Chaos Group, we have sent an email to support@chaosgroup.com in hopes of getting some help; this post is copied in that email (minus the images) and we'll mention this thread. The archive of the scene itself is rather large, as one might expect. Thank you all in advance - hopefully people smarter than we are can tell us what we're doing wrong!

  • #2
    Hi and happy new year.
    I'm not part of Chaos in any way, but I deal with and use these types of scenes on a daily basis with my own clients. On the face of it, it looks to me like your scenes' lighting approach could be simplified a lot for what they are. The look you are going for seems low to middle end (no offence, I know how budgets go etc.), so why do you need so many lights? I would be looking at much simpler ways to light this scene using a combination of textures, self-illuminated shaders, proxies for the intricate models and a simple lighting setup. You could bring this scene's render time down by a huge amount. Or you could have dragged this scene into Unreal, got a better look and not worried about the rendering at all - a longer initial setup which won't suit your current timeline, but food for thought for the future. With big scenes you sometimes have to think like an old-school game engine artist/team and really plan how to keep the render times down well before you start to populate a thousand lights. Your translating-geometry issues won't happen as much if you proxy the right geo.



    • #3
      shegmann Did you also include the scene in your support ticket? Even if it's large, it will help a lot to be able to debug and profile what's going on - 2.5 hours for the light cache seems totally extreme, but without the scene it's hard to know why it happens. Same for the geometry translation.

      Best regards,
      Vlado
      I only act like I know everything, Rogers.



      • #4
        I see you added the scene to the ticket, thanks a lot! Will look into it.

        Best regards,
        Vlado
        I only act like I know everything, Rogers.



        • #5
          Thank you, Vlado! We appreciate all your help.

          And yes... we are aware the scene is rather large - this refinery was significantly larger than previous ones. There are probably a lot of things that we rushed and did wrong, and we love feedback. The majority of the refinery is done via V-Ray Proxies that are imported and then instanced, though we do plan to simplify much of it in terms of textures for the next project. The portions that were proxies were done that way because they had to be custom laid out to match the actual refinery (such as the pipe racks) or because they had to interact with the explosion.

          The night portion of this animation is new to us - part of this incident occurred during the evening, so we wanted to be faithful to that. We used lights far off in the distance to give that sense of space, and since nearby housing subdivisions were affected by the explosion, we wanted to make sure it felt like houses were there. We tried both self-illumination and V-Ray light materials, but they didn't give the visible fall-off we needed, even with "direct illumination" checked. For a lot of the tiny lights on the refinery pieces themselves, that's still what we used; there are only a couple of key lights on the towers, tank farm and such that are actual lights. The street lights were the main issue with the fall-off feel - they needed to cast a great deal of light to feel "real" (or at least real enough). We tried an invisible piece of geometry with a light material on it, but it just didn't work out, so for the closer and more visible street lights we used an instanced V-Ray light to get the look we wanted. The far-off lights are still just the light material, so they wink in the background.

          We were trying an adaptive limit of 500 lights for the cache and then bringing that down for the actual render. No idea if that actually helped but it seemed to reduce the blinking of the far away light materials so we went with it.

          Prior to the V-Ray update, our light cache was taking about an hour. It sped up each frame considerably and we weren't getting any real translation hang time. (As a note, the daytime scenes take very little time to cache but, despite being referenced from the same original proxies, take much longer to translate. We're looking into that.) It wasn't until after the update that the light cache went up to 2.5 hours (since the way it's calculated changed, some differences were expected), but even that wouldn't have been a huge deal to us - if it worked. The biggest issue we are having is that we can't actually get the light cache out to even test it, since it throws a Fatal Render Error at the end of the calculation.

          Unreal isn't an option for us at this time, Strangebox, thank you for your tip though. We're considering Unreal or Unity for a different project, but it hasn't been feasible for this client. We try to keep our render times under 10 minutes for the majority of our work. We knew the few shots that had this night set up would be longer, but we were getting 30-40 minutes per frame (sometimes as low as 20) which wasn't totally horrible. Hopefully we can get the caching issue worked out and figure out what we're doing wrong! And we'll continue to look into why we are getting such vastly different geometry translation times between our day and night scenes. Thanks!
          Last edited by shegmann; 03-01-2020, 10:31 AM.



          • #6
            The major problem at the moment for us is that no matter how many subdivisions we throw at the light cache, we are getting flickers using the Hash Map/CUDA process. We've tried an IR/Light Cache on the CPU side and it's vastly more time consuming than the original render times we were getting before Vray Next updated. We're attempting to get the times down on the CPU rendering but it's looking like we're going to have to eat the time.

            The fact that the CUDA rendering times have ballooned without actually fixing the flicker is... concerning for us in the long run. 20-40 minute render times going up to 1h 30m - 1h 50m render times while STILL having flicker areas that move around is not acceptable for our pipeline. CPU times using both Brute Force/Light Cache and IR/LC are around 3h - 3h 30m per frame. CPU looks great as far as being (mostly) flicker free but the render times are... oof.

            I've included an image of the type of garbled areas CUDA is now giving us using the Hash map pipeline. And a GIF so you can see how they move between one frame and the next. Denoise somewhat helps, but the "smoothed" regions still flicker and move around noticeably.

            Baking this lighting is unfortunately not feasible for us with the Maya 2019 pipeline. We'd accept the hour-plus render times if we could get rid of the flicker. At this point we might have to just accept the 3-hour-plus times, but that will significantly impact our other jobs and is by no means ideal. We're looking at cheats like simplifying camera moves and rendering with overscan so we can fake the camera moves in post, etc. Quite frankly, it's ugly and we'd rather not do that, but we can't tie up our farm for that long without negatively impacting all of our deadlines.

            Any and all tips would be greatly appreciated. We're still not sure why the Vray update killed our pipeline the way it did. We'd be happy to adjust to a new one if we knew what to change in our settings but we're just at a loss. At this point, we're considering trying to roll back and at least get the times and results from prior to this change, if that's feasible.

            (A note, this is just a different camera from the same scene we submitted previously.)



            • #7
              We ended up going with CPU rendering using IR and Light Cache for our GI. Saving out those passes at very high levels still doesn't completely clear up the small light material flickers, but it gives more acceptable render times at lower settings. The flickers would be completely gone if we could render at 16 min subdivs or higher, but that becomes time-prohibitive. And we aren't getting those strange boxes in varying areas of the frame like we were with CUDA rendering. So that's our workaround for this project. Hopefully we'll be able to go back to CUDA rendering for future work. We'll keep testing!



              • #8
                Update: We sent a scene off to a render farm for some GPU testing (with a fresh hash light cache). They used 162 GPUs (V100s) across 27 nodes to render out these 24 frames. The interesting thing is that instead of the moving buckets we're getting on our farm, their renders have areas of high noise next to smooth areas (smooth rendering being the expectation), but at least those areas are stationary. We... have no idea why these night scenes are doing this now. I did not confirm which V-Ray version they are using, other than V-Ray Next; given that they just spun this up (and it didn't error out with a hash light cache), I assume it's at least Update 2 if not the latest update.

                We are considering a GPU test using Full Evaluation rather than Adaptive to see if the noise clears up. It's weird that some areas are perfect and some look like a super low res Light Cache was used.

                vlado - if you would like the updated scene archive, let me know. I can send you an updated link to it. Just wanted to record our continued progress.



                • #9
                  Have you tried light cache and brute force? LC around 2500 subdivs. You don't need to precalculate anything, and I'd expect these scenes to take an hour a frame on a modest Ryzen.
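
                  Something like this in Maya Python, roughly - a sketch only, since the vraySettings attribute names and enum values here are assumptions; confirm them against your own Render Settings (cmds.listAttr('vraySettings')) before relying on it:

```python
# Rough sketch: Brute Force primary + Light Cache secondary, ~2500 LC subdivs,
# nothing precalculated or loaded from disk. Attribute names and enum values
# are ASSUMPTIONS - confirm them on your own vraySettings node first.
import maya.cmds as cmds

def set_if_exists(attr, value):
    """Only touch attributes that exist on this V-Ray version."""
    if cmds.attributeQuery(attr, node='vraySettings', exists=True):
        cmds.setAttr('vraySettings.' + attr, value)
    else:
        cmds.warning('vraySettings.%s not found' % attr)

set_if_exists('giOn', 1)             # enable GI (assumed name)
set_if_exists('primaryEngine', 2)    # 2 = Brute Force (assumed enum value)
set_if_exists('secondaryEngine', 3)  # 3 = Light Cache (assumed enum value)
set_if_exists('subdivs', 2500)       # LC subdivs (assumed name)
```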

                  18,000 LC subdivisions is wild; if you've had to go that high, something else is already wrong.



                  • #10
                    Yeah, we went that high when we were trying to get the little tiny lights on the factory and the far-off street lights to stop disappearing/reappearing randomly. When we render on CPU, we use IR primary and LC secondary for all but one scene, which used Brute Force primary and LC secondary. With that, we are getting about 1.5 to 2 hours per frame. The CPU light cache subdivs were definitely lower than the GPU ones (closer to 7000-9000 depending on the scene).

                    The thing is... going that high on the LC with CUDA fixed our winking lights. By calculating it for the camera path and saving it out, we really only added prep time before the scene was queued, but the frames were what we needed. What I just posted about is that CUDA is now giving us that weird mix of super grainy and super smooth in our night scenes.

                    The day scenes on GPU are fine and going quite quickly with much lower light cache settings (around 2500 subdivs and about 5 minutes per frame). Everything worked as intended.

                    We plan on testing a CUDA night scene with Full Eval but haven't gotten there yet. That test takes... quite a bit of time to set up and get results from. I might run it over the weekend.

                    EDIT:
                    And if there is a better way to set up to give the kind of light presence a refinery at night needs that doesn't involve lots of actual lights/light materials, we are more than open to it. It's the first time we've tried to really go for that look with this number of lights (and in a refinery that is 5x larger than any we've tackled before). So I'm sure we aren't optimized at all.
                    Last edited by shegmann; 30-01-2020, 12:23 PM.



                    • #11
                      Originally posted by shegmann:
                      And if there is a better way to set up to give the kind of light presence a refinery at night needs that doesn't involve lots of actual lights/light materials, we are more than open to it. It's the first time we've tried to really go for that look with this number of lights (and in a refinery that is 5x larger than any we've tackled before). So I'm sure we aren't optimized at all.
                      Well, I can't tell you if it would work for your scene, but years ago, when computers were a LOT less powerful, I did a 50 km square city at night.

                      I used IRMap/LC with precalculation of the irradiance map done on 10+ machines. Setting the max rate to 0 (or maybe even 1) oversampled the lighting and picked up all the detail in the lights.

                      The lights were done with shaped polygons, duplicated under each lamp-post bulb, much larger than realistic so the renderer picks them up more easily (that's key to avoiding issues: the larger you can make the planes the better, though you sacrifice lighting accuracy), then hidden from the camera. Simple light materials, no direct illumination.

                      For more distant stuff we used quite thick renderable splines along the roads, at streetlamp height, with a gradient-ramp-mapped light material along them giving evenly spaced blobs of light - again hidden from camera.
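
                      If it helps, below is a rough Maya Python version of both tricks (the oversized plane under each lamp head and the ramp-mapped strip for distant roads). Mine was an old Max project, so treat the V-Ray node and attribute names here (VRayLightMtl and its color attribute) as assumptions to check in your own scene.

```python
# Rough Maya sketch of the setup described above (mine was a Max scene, so the
# V-Ray node/attribute names here are assumptions to verify in your own scene):
#   1. an oversized emissive plane hidden from camera under each lamp head
#   2. a long thin strip along distant roads with a repeating ramp mapped
#      into a light material, faking evenly spaced blobs of light
import maya.cmds as cmds

def assign_light_mtl(obj, name, color=(1.0, 0.85, 0.6)):
    """Create a VRayLightMtl + shading group and assign it to the object."""
    mtl = cmds.shadingNode('VRayLightMtl', asShader=True, name=name)
    sg = cmds.sets(renderable=True, noSurfaceShader=True, empty=True, name=name + 'SG')
    cmds.connectAttr(mtl + '.outColor', sg + '.surfaceShader', force=True)
    cmds.sets(obj, edit=True, forceElement=sg)
    if cmds.attributeQuery('color', node=mtl, exists=True):  # attr name assumed
        cmds.setAttr(mtl + '.color', color[0], color[1], color[2], type='double3')
    return mtl

def hide_from_camera(obj):
    """Keep the geometry contributing to GI but invisible to camera rays."""
    for shape in cmds.listRelatives(obj, shapes=True, fullPath=True) or []:
        cmds.setAttr(shape + '.primaryVisibility', 0)

# --- 1. Oversized emitter plane under one lamp head (duplicate/instance per lamp) ---
plane = cmds.polyPlane(width=1.5, height=1.5, sx=1, sy=1, name='lampEmitter')[0]
cmds.setAttr(plane + '.rotateX', 180)     # normal pointing down at the ground
cmds.setAttr(plane + '.translateY', 4.0)  # roughly lamp-head height (scene units assumed)
assign_light_mtl(plane, 'lampEmitterMtl')
hide_from_camera(plane)

# --- 2. Distant road strip with a repeating ramp for blobs of light ---
strip = cmds.polyPlane(width=200, height=0.5, sx=1, sy=1, name='roadLightStrip')[0]
cmds.setAttr(strip + '.translateY', 4.0)
strip_mtl = assign_light_mtl(strip, 'roadLightStripMtl')

ramp = cmds.shadingNode('ramp', asTexture=True, name='roadLightRamp')
p2d = cmds.shadingNode('place2dTexture', asUtility=True, name='roadLightRampP2d')
cmds.connectAttr(p2d + '.outUV', ramp + '.uvCoord')
cmds.connectAttr(p2d + '.outUvFilterSize', ramp + '.uvFilterSize')
cmds.setAttr(ramp + '.type', 1)    # U ramp, running along the length of the strip
cmds.setAttr(p2d + '.repeatU', 40) # one bright blob per lamp spacing
cmds.setAttr(ramp + '.colorEntryList[0].position', 0.0)
cmds.setAttr(ramp + '.colorEntryList[0].color', 0, 0, 0, type='double3')
cmds.setAttr(ramp + '.colorEntryList[1].position', 0.5)
cmds.setAttr(ramp + '.colorEntryList[1].color', 1.0, 0.85, 0.6, type='double3')
cmds.setAttr(ramp + '.colorEntryList[2].position', 1.0)
cmds.setAttr(ramp + '.colorEntryList[2].color', 0, 0, 0, type='double3')
if cmds.attributeQuery('color', node=strip_mtl, exists=True):  # attr name assumed
    cmds.connectAttr(ramp + '.outColor', strip_mtl + '.color', force=True)
hide_from_camera(strip)
```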


                      IIRC we used no actual lights. In your case I'd use them for the closer stuff; ours was all shot from a distance overhead.


                      We left the light cache disabled for final rendering, but now with the "light cache for glossy rays" option you might want to load that as well as the irradiance map.

                      It worked fine - the irradiance map was huge and took a few hours to compute, but rendering was fast, flicker-free and usable on 2 GB of RAM.



                      • #12
                        Ha, found the project in my archive. Bear in mind this was done in 2009, on old hardware, and is only 720p, but the method should still be valid at higher resolutions. Might even be worth trying with brute force instead of the irradiance map - it's a looot faster these days!

                        [Attached image: night.jpg]



                        • #13
                          super gnu Thank you for the detailed suggestion! All of our little refinery lights are light materials, as are the street light bulbs. But the street lights that actually cast light are VRay sphere lights - we tried using a geo shape with a light material on it for those, but it cast a shadow, and when "Casts Shadows" was turned off the light from that geometry wouldn't show up at all. It made us scratch our heads and we finally gave up and used a light instead. I think we were trying to use direct illumination though, if I remember right; we weren't getting lighting correctly on all the metal without it.

                          I love the idea of the spline with a texture on it for distance. That makes so much more sense than the individual pieces of geometry with a light material we have on now. Those little globes get practically subpixel size in the distance.

                          I should try setting the IR map max rate to 1 - we were trying 0, and yes, that definitely took hours to compute. That was when we switched from CUDA to CPU.

                          We very much appreciate your suggestions and sharing your project with us!



                          • #14
                            Yeah, I think the key to this method is to make the emissive geometry as big as you can get away with before the effect starts to look weird - it gives the raytracer something to grab onto! For example, under a street lamp head, have maybe a 1m x 1m (or larger) rectangle pointing down. You can play with changing the shape and angle to sculpt the shape of the hotspot on the geometry below. It'll be hidden from the camera/reflections, so you can push the size quite a lot. I would avoid sphere lights and real lights in general when there are so many in the scene.


                            I'd be interested to see how this method works these days with brute force on GPU - I'd imagine it's possibly quicker than the irradiance map approach (which was really the only option back then).



                            • #15
                              Originally posted by shegmann:
                              super gnu if I remember right. We weren't getting lighting correctly on all the metal without it.

                              That might be an issue with this method - our scene didn't have all the shiny stuff in it that yours does. But I bet you could get close enough to use it for 90% of your lights, and keep the real ones for specific areas.

