
I think I am done with GPU


  • MANUEL_MOUSIOL
    replied
    The thing is, I'm not talking about VRAM problems exclusively, although obviously that's difficult. But my scenes right now always fit into my now-NVLinked RTX 2080 Tis. Sure, if a scene doesn't fit and V-Ray doesn't like it, nothing works anymore and that sucks. The workaround of using DR with only the local machine is very slow when it comes to projects with lots of textures and geometry.
    It's the instability and the crashes I'm really angry about.
    I don't have money for a render farm right now, btw. So GPU was the best option for speed.

    On a side note, I converted an interior archviz scene to Corona and started the IPR. It was so slow (I think it was even slowing down my PC) that I stopped it. It was my first attempt with Corona though, so it probably doesn't say much. But that's for another thread and a totally different forum...
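    (For anyone curious what the NVLink setup actually buys you here: below is a minimal sketch, using the plain CUDA runtime API and assuming two cards at device indices 0 and 1, of the peer-access check that lets a renderer treat both cards' memory as one pool. This is not V-Ray's actual code, just the underlying mechanism.)

```cpp
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    // Can device 0 read/write device 1's memory directly, and vice versa?
    // NVLink-bridged cards should report yes in both directions; over
    // plain PCIe the answer depends on the platform.
    int can01 = 0, can10 = 0;
    cudaDeviceCanAccessPeer(&can01, 0, 1);
    cudaDeviceCanAccessPeer(&can10, 1, 0);
    printf("peer access 0->1: %d, 1->0: %d\n", can01, can10);

    if (can01 && can10) {
        // A renderer would then enable it on each device (flags must be 0),
        // after which allocations on one GPU are addressable from the other.
        cudaSetDevice(0);
        cudaDeviceEnablePeerAccess(1, 0);
        cudaSetDevice(1);
        cudaDeviceEnablePeerAccess(0, 0);
        printf("peer access enabled both ways\n");
    }
    return 0;
}
```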



  • danio
    replied
    Following up on my post from this morning: I just converted the 'this one is for sure going to be Next GPU' scene to Corona. I ran out of VRAM, ran into random CUDA 700 errors, and also ended up liking the Corona images better. I do have an NVLink on the way for future projects. I'm really curious about the OptiX 7 build.
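    (For context, "CUDA 700" is cudaErrorIllegalAddress - essentially the GPU version of a segfault. Here's a minimal, deliberately broken sketch showing how that error surfaces; hypothetical code, just to illustrate the failure mode:)

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Write through an invalid device pointer to trigger the same class of
// failure a renderer hits when it chases a bad address mid-frame.
__global__ void badKernel(int* p) { *p = 42; }

int main() {
    badKernel<<<1, 1>>>(reinterpret_cast<int*>(0xdeadbeef));
    // The fault is asynchronous: it is reported at the next sync point.
    cudaError_t err = cudaDeviceSynchronize();
    printf("error %d: %s\n", (int)err, cudaGetErrorString(err));
    // Typically prints: error 700: an illegal memory access was encountered.
    // The CUDA context is now poisoned; everything after this fails too,
    // which is why these errors look like hard crashes rather than warnings.
    return 0;
}
```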

    All that said, my journey back to Corona was super smooth. The Corona converter works so well it's crazy. I think the V-Ray converter could use an update. Not sure why, but the translucency and refraction always sell me more with Corona, too. I've been dragged over the coals for saying that, but I see it time and again. I did a blind sample with my wife and she picked the Corona images 3 for 3. Same LUT, same everything. I think the noise pattern with Corona is also more believable: less weird shoji-screen haze, more expected noise.

    I could actually see a workflow where I do a lot of Forest Pack work for scene setup in V-Ray Next, since it is so much faster with iToo and IPR, and then convert over to Corona for final production.

    Anyway, that's my personal experience from the last couple of days, which seems somehow related to this thread's tangent.



  • squintnic
    replied
    Yep, we are about to reinvest in CPU power for our archviz farm, which currently has 2.9 THz of capacity and the majority of nodes with 128 GB RAM.
    GPU is still way too dangerous to use for archviz / infrastructure vis (90% of our business), given that the scenes are often huge in scale.

    For small scenes that fit into VRAM, GPU can be really fast, but often these scenes get added together to make long and heavy sequences. Those could be comped, but brute force often ends up cheaper in terms of person-hours versus compositing GPU passes.
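    (For scale, "2.9 THz of capacity" is the usual farm shorthand for aggregate clock speed - cores times per-core clock. A back-of-the-envelope sketch, with an assumed per-core figure:)

```cpp
#include <cstdio>

int main() {
    const double aggregateGHz = 2900.0; // 2.9 THz across the whole farm
    const double perCoreGHz = 2.9;      // assumed per-core clock
    // Roughly 1000 cores farm-wide under that assumption.
    printf("~%.0f cores\n", aggregateGHz / perCoreGHz);
    return 0;
}
```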



  • Morbid Angel
    replied
    That's the thing though, Manuel: how do you know when you start a project that it will fit into the VRAM? You almost never know, unless you do some isolated product renders or whatever. If you commit to a GPU farm and then need to upgrade it, then what? I have a farm I've upgraded 3 times over the past 5 years. I just went out and bought 2 TB of DDR3 RAM; spread across the farm that's 96 GB per node, and some nodes got 128 GB. All because on the last several projects I had a hard time rendering with 48 GB of RAM! If I had ever gone the GPU route it would have doomed project after project for me; I would have gone under as a VFX vendor by now.
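    ("You almost never know" is the crux: a renderer can query free VRAM up front, but the scene's real footprint - after displacement, subdivision and texture paging - only materializes during translation. A minimal sketch of the up-front query via the CUDA runtime API; not any particular renderer's code:)

```cpp
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    // Free vs total memory on the current device. This is all you can
    // know before scene translation starts; the final footprint isn't
    // knowable until geometry and textures are actually uploaded.
    size_t freeBytes = 0, totalBytes = 0;
    cudaMemGetInfo(&freeBytes, &totalBytes);
    printf("free: %.1f GB / total: %.1f GB\n",
           freeBytes / 1e9, totalBytes / 1e9);
    return 0;
}
```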



  • MANUEL_MOUSIOL
    replied
    Originally posted by JD3D_CGI View Post
    But, like I said in the sentence you didn't quote - it's a matter of expertise and scene requirements.

    If you know straight up that you are going to need CPU, then do it that way.
    I agree, now more than ever.
    But I thought (and still think this would be valid) that as long as the scene fits into VRAM, everything should work as it does on CPU when it comes to stability.
    But as stated in the interesting and very true article from the Corona guys, even the drivers make things very unstable, so it's a constant update battle.
    If GPU rendering would just get slow with big scenes, I could cope with that, as CPU isn't fast per se anyway. But the frequent crashes and bugs just bum me out. And I don't think that's about my technical knowledge in the end, but rather "bad coding", to say it bluntly.
    Last edited by MANUEL_MOUSIOL; 03-09-2019, 01:34 PM.



  • JD3D_CGI
    replied
    Originally posted by MANUEL_MOUSIOL View Post

    That sounds problematic though, as appearance can vastly differ between the engines. Bump seems to work differently, etc.
    Obviously some fiddling is involved... but like I said, I work in one until I hit a wall. GPU rendering times are profoundly faster when things are working correctly, so obviously you want to be aiming for that.

    But, like I said in the sentence you didn't quote - it's a matter of expertise and scene requirements.

    If you know straight up that you are going to need CPU, then do it that way.



  • danio
    replied
    Good article, Bobby. I just kicked off a new project yesterday and am actually doing it in Next GPU. As much as I love Corona, I've been finding Next is a lot faster with Forest Pack. This scene will be heavy with vegetation, so it's less about render time and more about maintaining a high viewport framerate with IPR running. Render time is a tiny consideration compared to production time, really. Hoping the Corona team is able to speed up Forest Pack performance in V5. One more consideration to add to the pile!

    Now if only I could find a 2-slot NVLink in stock...



  • glorybound
    replied
    I think Corona explained it well.

    By rendering only on the CPU we avoid all bottlenecks, problems, and limitations of GPU rendering, which include the unsuitability of GPU architectures for full GI, limited memory, limited support for third party plugins and maps, unpredictability, the need for specialist knowledge or hardware to add nodes, high cost, high heat and noise, and limited availability of render farms. Read our in-depth look at the advantages of CPU-based rendering.



  • MANUEL_MOUSIOL
    replied
    Originally posted by JD3D_CGI View Post
    At the end of the day, it totally depends on the requirements of the scene and you need some expertise as an artist to decide when the best case scenarios apply.

    I tend to start in VRAY GPU these days and stay there until I need to use features that don't work... and then I switch over to CPU.
    That sounds problematic though, as appearance can vastly differ between the engines. Bump seems to work differently, etc.



  • JD3D_CGI
    replied
    At the end of the day, it totally depends on the requirements of the scene and you need some expertise as an artist to decide when the best case scenarios apply.

    I tend to start in VRAY GPU these days and stay there until I need to use features that don't work... and then I switch over to CPU.



  • kosso_olli
    replied
    Originally posted by squintnic View Post
    V-Ray GPU in my experience is only good for product and automotive; anything remotely complex and architectural and it falls over.
    These automotive jobs are more complex than one might think. Ever had 50 million polys in a scene with 4,500 unique objects, about 100 different materials, a render resolution of 12000x8800 pixels, in-camera depth of field for the whole shot, loads of indirect light for GI, and displacement for floor carpets and such? I'd say the typical car interior shot is one of the hardest things to render in terms of needed features and hardware!
    A scene like this is actually a benchmark for me, and I have seen lots of render engines fail on something like it!
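    (To put 12000x8800 in VRAM terms: a back-of-the-envelope sketch of what the framebuffers alone cost, assuming float32 RGBA per pass - pixel formats and element counts are illustrative, not the actual production setup:)

```cpp
#include <cstdio>

int main() {
    const long long w = 12000, h = 8800;   // 105.6 megapixels
    const long long bytesPerPixel = 4 * 4; // RGBA x float32
    const long long onePass = w * h * bytesPerPixel;
    printf("one float RGBA pass: %.2f GB\n", onePass / 1e9); // ~1.69 GB
    for (int passes : {1, 5, 10})          // beauty plus render elements
        printf("%2d passes: %.2f GB\n", passes, passes * onePass / 1e9);
    // Ten elements alone are ~17 GB before any geometry, textures or
    // acceleration structures - on an 11 GB card that's already over budget.
    return 0;
}
```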
    Last edited by kosso_olli; 30-08-2019, 07:59 AM.



  • MANUEL_MOUSIOL
    replied
    Another great discussion here - I am glad I posted about my anger.

    svetlozar.draganov, I am trying to send as many bug reports with scenes as possible. But when you are in the middle of production and the deadline is next morning, it is difficult to do so - generally it takes a lot of time to wrap everything together, describe the bug and send it to you guys, so during a busy production sometimes it is not possible.
    The point is that I am sending a lot more to support than before, and I get crazy instabilities, like the one I was talking about. And funnily enough, CPU mode didn't have any problems. Kristina is looking into it right now, so we will see what the problem is.
    I am just getting fearful of doing work on GPU, and it is not a good thing to work in fear.

    From what you guys are saying, I think it is interesting that one would switch render engines for different purposes. I was a V-Ray guy all along, and even though some features are missing in GPU and some things work and look different, I never saw such a big difference between the two engines and just switched because of the speed. But using GPU for product shots and V-Ray or Corona for archviz is an interesting approach. I just wish that materials and features would transition seamlessly, so that with just a click of a button I could switch and keep working, with all my materials and assets still intact. Taking archviz to Corona is kind of scary (maybe because I haven't tested it out) because everything until now was prepared for V-Ray... But the caustics are very tempting, and I figure it will take vlado a good while to code his own caustics into V-Ray...



  • danio
    replied
    To the OP: my guess is you'll find Corona less crash-prone than Next GPU. I use it all the time, especially on time-sensitive or challenging scenes where I'm worried about delivering on time and can't lose a day to 'troubleshooting'. There's no requirement for the walls to be white or the style to be Scandinavian. The denoiser is also top notch, the new caustics are nice, and the memory management is better in the latest dailies.

    Next GPU tends to throw curveballs at me mid-project, as you describe. That said, when Next GPU works it's really nice for us, and it's getting better all the time! To make things work, though, you have to keep things pretty simple and always be on the lookout for warning signs (just my experience). FStorm is amazing at preserving bump and displacement details and is more stable than Next GPU, but lacks a denoiser and a few other key features. Maybe if it had all the features Next GPU has it would be less stable too; who knows, I'm not a developer.

    If we're talking vehicle analogies here's mine:

    -Corona is the reliable Toyota 4Runner that isn't super sexy but is fast enough, will get you where you need to go every single time, and never breaks down.
    -Next GPU is your buddy's hot-rodded Camaro that's cool when it works, but you don't want to take it too far from home because you might end up standing by the side of the road staring under the hood (i.e. scrolling through a hard-to-read log window that's remarkably verbose and yet has short-term memory loss...).
    -FStorm is like some exotic Italian race car that impresses your friends but doesn't have seatbelts or space for passengers in the back.

    That's my (totally flawed) analogy. I use all of the renderers mentioned above: Corona 70% of the time, Next GPU 20% of the time, and FStorm 10%. I do archviz stills and animations, and some product stuff lately.
    Last edited by danio; 29-08-2019, 09:43 PM.



  • Morbid Angel
    replied
    Well, IMHO the GPU is never going to be stable and reliable, because we are trying to use it for something it was never meant to be used for. A GPU - graphics processing unit - is used for games and made for games. Sure, you can do other things on it, as when NVIDIA released CUDA, for example. But it's like trying to drive a Ferrari on off-road terrain. Game developers spend years crafting their engines to work on the GPU, making sure the game is stable and runs well on most common GPUs. I highly doubt rendering on GPU will ever be what its CPU counterpart is now.

    You have the whole multi-core CPU thing going on, where not-so-expensive CPUs are as fast as some of the not-so-expensive GPUs, yet much more versatile and reliable. You have the whole RAM limitation (yes, the out-of-core thing), but when you add things up, the speed of the GPU starts to diminish. Sorry, that's just my humble opinion, but every time I tried GPU, almost immediately I ran into a wall, roadblock, whatever you want to call it, and I could never justify that for the extra speed it offers. There are certain things GPU will never be able to do faster than CPU: for example, texture loading from the network, proxy loading from the network, or various IO things that depend on the GPU reading from something other than its own RAM.



  • glorybound
    replied
    I just did an animation using Corona - just camera paths, but no issues. I agree with your "only white" comments. Sometimes I think Corona is mainly used by hobbyists doing Scandinavian scenes.

