SDK for 3dsMax: VRenderInstance best practices?

  • #31
    So I've changed the code a bit (my instances now inherit from ShadeData instead of DefaultShadeData) and override the getUVWderivs and getUVWbases functions using the code for triangle meshes found here:

    https://forums.chaosgroup.com/forum/...nd-getuvwbases

    I multiply the result of getUVWderivs by 0.05, which gives me roughly the UVW filter blurring I'd expect. I don't like having that magic multiplier in there, but it's a workaround for now.
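    As a rough sketch of what that workaround amounts to (Vec3 and scaleDeriv are stand-in names for illustration, not the actual SDK API):

```cpp
#include <cassert>

// Stand-in for a UVW derivative; the real code works with VR::Vector.
struct Vec3 { float x, y, z; };

// Hypothetical illustration of the workaround described above: damp the
// derivatives a triangle-mesh implementation would return by a magic
// factor so the texture filtering radius matches expectations.
const float kDerivScale = 0.05f; // empirically chosen value from the post

inline Vec3 scaleDeriv(const Vec3 &d) {
    return Vec3{ d.x * kDerivScale, d.y * kDerivScale, d.z * kDerivScale };
}
```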
    Last edited by tysonibele; 27-01-2019, 03:55 PM.



    • #32
      Sorry about that - when I added the instancing code I forgot about setIntersectionData. The code there gets the object-to-world transformation from the StaticMeshInterfacePrimitive like this:

      Code:
      const VR::Transform &tm=VR::Transform(((VR::StaticMeshInterfacePrimitive*) mesh->owner)->tm);
      Instead, the code should be something like this:

      Code:
      const VR::Transform &tm=mesh->tms.objectToWorld;
      and for the motion-blurred case, something like this:
      Code:
      const VR::Transform &tm0=mesh->tms.objectToWorldT0;
      const VR::Transform &tm1=mesh->tms.objectToWorldT1;
      StaticMeshVoxelPrimitive::tms / MovingMeshVoxelPrimitive::tms is the right place to take the transforms from, especially when working with instanced voxels.
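      A minimal sketch of how those two motion-blur transforms get used, assuming simplified stand-ins for VR::Transform and a hypothetical lerpTransform helper (a moving voxel primitive blends its T0 and T1 samples for a hit at relative time t in [0, 1]):

```cpp
#include <cassert>

// Simplified stand-ins for VR::Vector / VR::Transform (3x3 matrix + offset).
struct Vec3 { float x, y, z; };

struct Transform {
    Vec3 m[3];   // rotation/scale rows
    Vec3 offs;   // translation
};

inline Vec3 lerp(const Vec3 &a, const Vec3 &b, float t) {
    return Vec3{ a.x + (b.x - a.x) * t,
                 a.y + (b.y - a.y) * t,
                 a.z + (b.z - a.z) * t };
}

// Hypothetical helper: linearly blend the objectToWorldT0 and
// objectToWorldT1 samples for a hit at relative time t.
inline Transform lerpTransform(const Transform &tm0, const Transform &tm1, float t) {
    Transform res;
    for (int i = 0; i < 3; i++) res.m[i] = lerp(tm0.m[i], tm1.m[i], t);
    res.offs = lerp(tm0.offs, tm1.offs, t);
    return res;
}
```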
      Yavor Rubenov
      V-Ray for 3ds Max developer



      • #33
        Thank you, worked like a charm!

        By the way...did a render test with my new instance implementation vs my old one.

        Scene: 10 million instances of a piece of geometry with 4k triangles. (40 billion triangles total)

        Old implementation: 48 minutes, 60 gigs of RAM.
        New implementation: 90 seconds, 30 gigs of RAM.

        So it's roughly 32 times faster and uses half the RAM (and since a decent chunk of that RAM is the data I keep around to generate the instances, I could probably optimize my storage objects a bit and get it even lower).

        Safe to say, it was well worth the work required to get this all working!



        • #34
          Another question... I don't see a way to define face normals on an instance (there doesn't seem to be a channel that can take such data), only vertex normals. However, vertex normals (being the average of all adjacent face normals) aren't giving me correct renders. For example, imagine a world-aligned cube: all of its vertex normals will be diagonal, pointing outwards from each corner, and using those vertex normals to render an instance results in the geometry being shaded as if it were smooth, whereas the cube's faces should render as hard surfaces with normals aligned to the world axes. I would have expected to be able to define face normals pointing outwards from each face, with V-Ray interpolating the normals of adjacent faces based on whether they share the same smoothing group, but I don't know how to tell instances to do that.

          Attached is an image comparison between a cube rendered with my V-Ray instance implementation and a regular 3ds Max cube object. As you can see, the instance's normals are being rendered incorrectly. The green lines are the vertex normals I pass to V-Ray; the blue lines are the face normals I'd like to pass, but don't know how. It's worth mentioning that I pass my smoothing groups to the FaceInfoData channel, but that has no effect. I also see a 'smoothed' param in the VoxelInfoData struct, but I'm not sure if I should be toggling that. I don't want to auto-smooth the whole mesh; I want to smooth based on face normals and smoothing groups...
          Last edited by tysonibele; 29-01-2019, 03:41 AM.



          • #35
            Generally you can skip the VERT_NORMAL_CHANNEL entirely - you'll get the geometric normal of each triangle calculated automatically.
            Alternatively, if you have face normals calculated, you can specify the same index in the 3 integers of FaceTopoData and you will get no averaging.
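            For example, faceted shading can be set up roughly like this (FaceTopo and faceNormal are simplified stand-ins for illustration, not the SDK's exact types):

```cpp
#include <cassert>
#include <cmath>

// Simplified stand-ins; the real VR::FaceTopoData holds three indices
// per face, and the normals channel is indexed through the same layout.
struct Vec3 { float x, y, z; };
struct FaceTopo { int v[3]; };

inline Vec3 sub(const Vec3 &a, const Vec3 &b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }

inline Vec3 cross(const Vec3 &a, const Vec3 &b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}

inline Vec3 normalize(const Vec3 &a) {
    float len = std::sqrt(a.x*a.x + a.y*a.y + a.z*a.z);
    return { a.x/len, a.y/len, a.z/len };
}

// Geometric normal of a triangle (counter-clockwise winding assumed).
inline Vec3 faceNormal(const Vec3 &p0, const Vec3 &p1, const Vec3 &p2) {
    return normalize(cross(sub(p1, p0), sub(p2, p0)));
}

// Faceted shading: one normal per face, and all three corners of face f
// reference that single normal, so no averaging across faces can occur.
inline FaceTopo facetedNormalTopo(int faceIndex) {
    return FaceTopo{ { faceIndex, faceIndex, faceIndex } };
}
```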
            Yavor Rubenov
            V-Ray for 3ds Max developer



            • #36
              When I don't specify vertex normals (or when I specify 3 identical normal indices per face), the mesh is faceted and not smoothed based on the smoothing groups I assign to faces. When I do specify normals, I get smoothing, but only from the normals (as in the previous image I attached), not from the assigned smoothing groups. Are face smoothing groups ignored?



              • #37
                So it seems like the attributes I'm setting in the FACE_INFO_CHANNEL are being ignored. Is this the proper way to initialize it?

                Code:
                VR::MeshChannel &faceInfoChan = mesh.channels[mesh.numChannels++];
                faceInfoChan.init(sizeof(VR::FaceInfoData), numFaces, FACE_INFO_CHANNEL, 0, MF_INFO_CHANNEL);
                VR::FaceInfoData *infoData = (VR::FaceInfoData*) faceInfoChan.data;
                Last edited by tysonibele; 02-02-2019, 01:18 AM.



                • #38
                  Sorry for the delayed reply.

                  The mesh voxel primitives that you are using don't do any calculations related to smoothing the normals - they just use the normals you specify (or the geometric normal). The smoothGroups member of FaceInfoData is only used as storage for that info - for example, when exporting a mesh to a proxy from Max we store the smoothing groups Max provides there, and if you then convert the proxy back to a mesh we restore the smoothing-group information.

                  So in short - it's up to you to calculate the smoothed normals and to supply them in the voxel channels.

                  There is a helper function in our SDK that can calculate smoothed normals based on the angle between them. It is located in the smooth_normals.h header.
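                  A minimal sketch of that calculation, using plain structs rather than the SDK types and assuming Max's convention that faces smooth together when their smoothing-group bitmasks share a bit:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Plain stand-ins for the SDK types; smGroup follows Max's convention
// of a bitmask where faces smooth together if their masks share a bit.
struct Vec3 { float x, y, z; };
struct Face { int v[3]; unsigned smGroup; };

static Vec3 sub(const Vec3 &a, const Vec3 &b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
static Vec3 add(const Vec3 &a, const Vec3 &b) { return { a.x+b.x, a.y+b.y, a.z+b.z }; }
static Vec3 cross(const Vec3 &a, const Vec3 &b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
static Vec3 normalize(const Vec3 &a) {
    float len = std::sqrt(a.x*a.x + a.y*a.y + a.z*a.z);
    return { a.x/len, a.y/len, a.z/len };
}

// Unnormalized geometric normal of face f (area-weighted on purpose).
static Vec3 faceNormal(const std::vector<Vec3> &verts, const Face &f) {
    return cross(sub(verts[f.v[1]], verts[f.v[0]]),
                 sub(verts[f.v[2]], verts[f.v[0]]));
}

// Per-corner normal for face fi: average the normals of all faces that
// touch the corner's vertex and share a smoothing-group bit with fi
// (fi itself always contributes, so ungrouped faces stay faceted).
Vec3 cornerNormal(const std::vector<Vec3> &verts, const std::vector<Face> &faces,
                  int fi, int corner) {
    int vi = faces[fi].v[corner];
    Vec3 sum = { 0.0f, 0.0f, 0.0f };
    for (size_t j = 0; j < faces.size(); j++) {
        const Face &f = faces[j];
        bool touches = (f.v[0] == vi || f.v[1] == vi || f.v[2] == vi);
        bool smooths = ((int)j == fi) || ((f.smGroup & faces[fi].smGroup) != 0);
        if (touches && smooths) sum = add(sum, faceNormal(verts, f));
    }
    return normalize(sum);
}
```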
                  Yavor Rubenov
                  V-Ray for 3ds Max developer



                  • #39
                    Ah, gotcha - I didn't realize the smoothing-group values were purely for importing/exporting.

                    I define everything manually now (vertex normals = averaged normals of adjacent faces that share smoothing groups) and am getting identical results to Max's built-in smoothing-group method. So, yay



                    • #40
                      Good

                      We should probably add some comments about the smoothing groups (I also had to do a little digging to find out that they are only used for import/export).

                      Yavor Rubenov
                      V-Ray for 3ds Max developer



                      • #41
                        Hey so I'm back again with another question regarding instances...

                        I'm having issues with motion blur. Namely, I can't figure out the proper way to calculate the motion-blur-dependent transforms of my instances.

                        Currently what I'm doing is:

                        I call getMoblurInterval to get the start/end tick of the motion blur interval. Then I query my particles at those times, and use the transforms of the particles at those times as the tm0 and tm1 of my instances (stored in the TMS_CHANNEL). However, for moblur durations less than 1, this results in incorrect motion blur.

                        After closely examining my test results - measuring the exact distances particles travel across the subframe range over the blur interval and comparing them to what is rendered - it seems the transforms for the TMS_CHANNEL of the instance voxel should actually be:

                        tm0 = location of instance at moblur interval start tick
                        tm1 = location of instance at (moblur interval start tick + GetTicksPerFrame() * perTickVelocity)

                        In other words, the second transform should be spaced apart from the first by the instance's per-frame velocity (and is therefore independent of the moblur duration), rather than being the location of the instance at the moblur interval's end tick.

                        Is that correct?



                        • #42
                          Originally posted by tysonibele View Post
                          I call getMoblurInterval to get the start/end tick of the motion blur interval. Then I query my particles at those times, and use the transforms of the particles at those times as the tm0 and tm1 of my instances (stored in the TMS_CHANNEL).
                          That should generally work the way you are doing it. One thing that comes to mind - do you have changes in the topology of the meshes within the motion blur interval (e.g. a changing number of faces/vertices)?

                          We do something like this for regular meshes:
                          Code:
                          int numMeshes=vray->getSequenceData().params.moblur.geomSamples;
                          double frameStart, frameEnd;
                          VRenderInstance::getMoblurInterval(vray, frameStart, frameEnd);
                          
                          TimeValue instanceTime;
                          float relativeTime;    // Relative time which maps [0.0; 1.0] to the camera (global) mo-blur interval.
                          
                          for (int i=0; i<numMeshes; i++) {
                              if (i==0) {
                                  instanceTime=(TimeValue) frameStart;
                                  relativeTime=0.0f;
                              }
                              else if (i==numMeshes-1) {
                                  instanceTime=(TimeValue) frameEnd;
                                  relativeTime=1.0f;
                              }
                              else {
                                  float k=float(i)/float(numMeshes-1);
                                  instanceTime=(TimeValue) (frameStart*(1.0f-k)+frameEnd*k);
                                  relativeTime=(float) ((double(instanceTime)-frameStart)/(frameEnd-frameStart));
                              }
                          
                          
                              Matrix3 transform=node->GetObjTMAfterWSM(instanceTime);
                              ...
                          }
                          Some of the calculations here are done in double precision to get correct results.

                          Yavor Rubenov
                          V-Ray for 3ds Max developer



                          • #43
                            Thanks Yavor, that code snippet helps



                            • #44
                              Hey Yavor, I have another question for you if you don't mind

                              Instancing is working great, but now I'd like to support more than 2 transforms over the moblur interval, if that's possible - specifically, however many samples the "geom samples" setting in the V-Ray render settings specifies.

                              Right now I seem to be doing everything necessary: I get the geom samples count from the V-Ray render settings in FrameBegin, generate an array of transforms over the moblur interval whose length equals the number of samples, and pass all of those transforms to the TMS_CHANNEL of my instances. But at rendertime the motion blur seems to use only the first and last transform - blur of spinning objects that I would expect to be arc-like is still linear, even with 10+ geometry samples and all the correct transforms stored in my instances. Are there any special flags I need to enable for the instances to provide more than 2 transforms to the sampler?

                              Perhaps I need to provide properly interpolated data within the setIntersectionData functions of my instances? Right now I see they work only with 'objectToWorldT0' and 'objectToWorldT1' of the MovingVoxelPrimitiveTransforms struct to interpolate vectors, as per the sample code provided previously.
                              Last edited by tysonibele; 24-07-2019, 05:41 PM.



                              • #45
                                Hey,

                                I don't mind at all

                                For multi-segment motion blur you would need to create separate primitives for each time segment and set each one's time index - then each primitive will get different transforms.
                                Yavor Rubenov
                                V-Ray for 3ds Max developer

