The issue with opacity is that it's not accelerated by the RT cores.
When a ray hits an opacity mapped leaf, we have to sample the opacity texture to figure out if there is an actual leaf there or not.
And this texture fetch is not accelerated by the RT core.
Even a single ray may have to go through many such intersections until it hits a part of a leaf that's not cut out by the texture.
We recommend using actual geometry, where the leaf is made from a larger number of polygons with the correct shape.
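To see why this hurts, here is a minimal toy model of the traversal loop (a sketch only; the names and the 0.5 alpha threshold are illustrative, not Lavina's actual implementation):

[CODE]
# Toy model of alpha-tested traversal: every candidate hit along the ray
# costs one opacity-texture fetch on the shading cores before the
# RT-core traversal can resume. Numbers and names are illustrative.

def trace_with_opacity(candidate_hits, alpha_threshold=0.5):
    """candidate_hits: (t, alpha) pairs sorted by distance along the ray."""
    fetches = 0
    for t, alpha in candidate_hits:
        fetches += 1                 # texture fetch, not RT-core accelerated
        if alpha > alpha_threshold:  # opaque texel: a real part of the leaf
            return t, fetches
    return None, fetches             # the ray passed through every cutout

# A ray grazing a canopy may cross many mostly-transparent leaf cards:
hits = [(1.0, 0.1), (1.4, 0.0), (2.2, 0.3), (3.1, 0.9)]
t, fetches = trace_with_opacity(hits)
print(f"hit at t={t} after {fetches} opacity fetches")  # 4 fetches, 1 hit
[/CODE]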
Greetings,
Vladimir Nedev
Hi, Vladimir.
Oh =) If every leaf is a geometry object... will Lavina handle that many polygons if I create a forest with these HQ trees on my 2x 2080 Ti with NVLink?
Best regards,
Andrew.
Without a problem.
If it was that easy, it would have already been done
Peter Matanov Chaos
And, yes, we believe that having more geometry in the leaves will be faster than using opacity maps.
We will probably have to confirm this in the future with some actual tests.
More geometry does mean slower rendering, however; there is no way around this.
For best performance, you will want to keep the triangle count as low as possible.
Of course, if performance is already satisfactory on your scenes, you can leave them unchanged.
The fully ray traced nature of Lavina makes it very efficient at handling many instances (like trees) where each instance has lots of geometry.
A rasterizer (like 3ds Max's or Maya's DirectX/OpenGL viewports, or the Unreal/Unity game engines) requires a lot of complex optimizations to handle this, since each instance of a triangle has to be iterated over every frame, even if the triangle doesn't contribute a single pixel on the screen.
These optimizations include geometry with different levels of detail based on distance, and culling of whole instances based on whether they are within the perspective frustum.
These optimizations still break for certain scenarios, and that's why environments are often represented as bounding boxes.
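As a toy illustration of that per-frame cost (a sketch under simplified assumptions; the Instance layout and the single-plane stand-in for a full six-plane frustum are made up for the example, not any engine's actual API):

[CODE]
# A rasterizer's per-frame instance loop: every instance is visited every
# frame, even ones that contribute no pixels; culling only saves the
# triangle submission, not the per-instance visit itself.

from dataclasses import dataclass

@dataclass
class Instance:
    center: tuple      # bounding-sphere center (x, y, z)
    radius: float      # bounding-sphere radius
    triangles: int     # triangle count of the source mesh

def visible(inst, planes):
    """Sphere-vs-frustum test: the sphere must not be fully behind any plane."""
    x, y, z = inst.center
    return all(a * x + b * y + c * z + d >= -inst.radius
               for a, b, c, d in planes)

def frame_cost(instances, planes):
    # O(n) over instances per frame, which is why LOD and culling schemes
    # in rasterizers get so elaborate.
    return sum(i.triangles for i in instances if visible(i, planes))

# 10,000 trees of 500k triangles each; one plane stands in for the frustum.
forest = [Instance((i * 10.0, 0.0, -50.0), 8.0, 500_000) for i in range(10_000)]
planes = [(0.0, 0.0, -1.0, 0.0)]
print(frame_cost(forest, planes), "triangles submitted this frame")
[/CODE]

A ray tracer instead walks an acceleration structure per ray, so instances that no ray touches cost essentially nothing in a given frame.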
Greetings,
Vladimir Nedev
Ok. I will test the trees with leaf geometry =)
Today I tested 135 million polys =) One big cliffs geometry from World Creator =) 3ds Max gets 60-90 FPS with this geometry. I tried to export it to .vrscene and then opened it in Lavina =) And had a crash... So the question is: how many polys or tris can we upload to Lavina? =)
How did you pack 135 mil polys into 3GB?
A 25 mil poly terrain is 1.5GB as a vrmesh (identical in size to a vrscene).
Are you sure that file is fine?
E.g., does it load back into Max?
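Just as a sanity check on those numbers (a rough proportional estimate, assuming file size scales roughly linearly with polygon count as in the 25 mil example):

[CODE]
# Scale the known 25M-poly / 1.5GB data point linearly. This is only
# approximate: topology, UVs and normals all affect the real size.
gb_per_mpoly = 1.5 / 25                    # ~0.06 GB per million polys
expected_gb = 135 * gb_per_mpoly
print(f"expected .vrscene size: ~{expected_gb:.1f} GB")  # ~8.1 GB, not 3 GB
[/CODE]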
The crash happens because we don't handle out of memory conditions for now.
You will be able to fit fewer triangles compared to a rasterizer, because in addition to vertex positions, normals and UVs, a ray tracer has to store the triangle acceleration structure as well.
How many exactly depends on your free GPU memory (which can be taken by other applications, including 3ds Max running in the background) as well as the mesh topology.
I would expect roughly 1GB per 15 million triangles, based on some quick tests, but it could be less or more.
Slightly less than half of this size is the acceleration structure.
Note that initially the acceleration structure is built uncompacted and takes roughly twice as much memory, so the overall memory requirement while loading is ~50% more than the final value.
So around 1.5GB per 15 million triangles while loading.
There is also some additional scratch memory that has to be allocated while building the acceleration structure.
This is done by RTX and I haven't tested how much it is.
If you have more smaller meshes, instead of one huge one, the memory spike caused by this scratch buffer will be significantly less.
I will log an issue to improve the loading of very large meshes, like your terrain example.
We would need to split the mesh into smaller ones and make sure that the acceleration structure for each one is compacted before we start building the next one.
This will reduce the memory spike while loading, and hopefully get you ~15 million triangles per 1 GB.
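Putting those rough figures together (a back-of-the-envelope sketch; the 15 Mtri/GB ratio and the ~50% loading spike are the estimates from this post, not measured constants):

[CODE]
# VRAM estimate from the rough figures above: ~15 million triangles per GB
# resident, plus a ~50% spike while loading because the acceleration
# structure is first built uncompacted. These are estimates, not constants.

MTRI_PER_GB = 15.0      # resident footprint: ~1 GB per 15M triangles
LOADING_SPIKE = 1.5     # uncompacted build: ~50% more while loading

def vram_estimate_gb(triangles):
    resident = triangles / 1e6 / MTRI_PER_GB
    return resident, resident * LOADING_SPIKE

resident, peak = vram_estimate_gb(135_000_000)
print(f"resident: ~{resident:.1f} GB, peak while loading: ~{peak:.1f} GB")
# resident: ~9.0 GB, peak while loading: ~13.5 GB, more than a single
# 2080 Ti's 11 GB, which would explain the crash with the 135M-poly terrain.
[/CODE]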
Hm, I like very much what I've tested in Lavina so far (love it, great start!),
but not planning to add opacity maps is a problem for most archviz scenes. We want to export scenes that we also use in V-Ray for offline rendering, so not being able to use opacity map cutouts on materials might be some trouble. I hope you will find a solution for that; even if it's slower, it could be added as an option at a later point.
From what I've heard from Vlado, V-Ray would also render faster if you use actual geometry for the leaves instead of using opacity maps.
"From what I've heard from Vlado, V-Ray would also render faster if you use actual geometry for the leaves instead of using opacity maps."
Yes, a bit, but since V-Ray has stochastic opacity maps this is negligible, and most available trees use that; also, many architectural materials, like metal meshes, do need opacity maps.
You cannot model a million holes in a metal mesh sheet, e.g.
So if this tool is meant to also be used for architectural rendering, support for opacity would be very important, even if it costs a small speed hit (maybe with the way V-Ray uses it, stochastic mode etc.).
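For context, here is a minimal sketch of the idea behind stochastic opacity (the general technique as I understand it, not necessarily V-Ray's exact implementation): fractional alpha is treated as a probability of being fully opaque, so each candidate hit costs a single random decision instead of continued traversal and compositing.

[CODE]
import random

def stochastic_hit(alpha, rng=random.random):
    """Treat fractional alpha as the probability of a fully opaque hit."""
    return rng() < alpha

# Averaged over many samples per pixel, coverage converges to the true
# partial opacity, trading extra traversal work for a bit of noise:
samples = 100_000
alpha = 0.3
hits = sum(stochastic_hit(alpha) for _ in range(samples))
print(f"observed coverage: {hits / samples:.3f} (target {alpha})")
[/CODE]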
"You cannot model a million holes in a metal mesh sheet, e.g."
Technically, you can.
Just as you can model individual leaves on trees, and exact, complex cutout shapes.
It's laborious, and it requires one to use the right tools/workflow, but it's been done, to the limits of compute power, since forever.
Stochastic opacity will not save you from the terminal issue opacity has: the countless overlaps.
It will ameliorate it, but there will still be (many) specific scenarios where geometry would just breeze past without even waving.
Just as there will be a number of cases where opacity can't be done without (modelling semi-transparency with geometry and shaders, but no opacity map, would likely prove a lot more expensive).
Front-loading the work to get performance is of course not always feasible (for a number of reasons), but stating "it can't be done" isn't quite exact.
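For a sense of scale, a quick triangle budget for modelling those holes as real geometry (all counts here are made-up illustration values, combined with the rough 15 Mtri/GB estimate from earlier in the thread):

[CODE]
# Rough budget for a perforated metal sheet built as actual geometry
# instead of an opacity cutout. Every count below is illustrative.

HOLES = 1_000_000          # "a million holes in a metal mesh sheet"
SEGMENTS_PER_HOLE = 8      # each hole cut as an octagon
TRIS_PER_SEGMENT = 2       # two triangles bridging each segment to the grid

tris = HOLES * SEGMENTS_PER_HOLE * TRIS_PER_SEGMENT
gb = tris / 15_000_000     # ~15M triangles per GB, per the estimate above
print(f"~{tris / 1e6:.0f}M triangles, ~{gb:.1f} GB of VRAM")
# ~16M triangles, ~1.1 GB: heavy, but cheap if the sheet is then instanced.
[/CODE]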
charlyxxx and vladimir.nedev: so I made a 100 mil tris terrain (made in Houdini, with erosion, to take topology-dependency out of the equation) as a VRScene file, and it's 5GB on disk.
In the video Charly posted, his file is ~3GB, which sounds unfeasible, as it ought to be around 7.5GB instead.
Also, loading the file here takes several minutes, all before any VRAM is occupied; it's all two-core CPU action and system RAM, so the video Charly posted couldn't possibly have run out of VRAM in such a short time.
I maintain this isn't a memory error due to polycount, irrespective of Lavina having the proper checks in place.
Charly, could you maybe share the scene file in question?