Stochastic flakes plugin, take 2


  • #31
    Originally posted by super gnu View Post
    would it be possible to add a random offset/rotation per element, or per material id?
    Per-element no, but I can do it per material ID. Do you want offset or rotation or both?

    Best regards,
    Vlado
    I only act like I know everything, Rogers.



    • #32
      ooow.. yes! And per-ID is far preferable to per-element.. much more flexible.

      hm, I guess both offset and rotation, although offset is probably more important.. might need the rotation to be optional, and/or lockable to 180/90 degrees or free rotation. Some textures (even noisy ones) have a direction after all.



      • #33
        Glad I read this thread even though I have no need for Stochastic flakes
        A.

        ---------------------
        www.digitaltwins.be



        • #34
          Vlado,
          Some people do "speed [modeling]" challenge... you do "speed [change of life]" challenge!

          Stan
          3LP Team



          • #35
            Ok, did some extensive testing this morning. First off, thank you so much for finding the time to do this; I've been waiting 15 years

            Overall it works great. Here are 3 issues:

            1) I've found with my blended Box Map that with certain patterns you don't want the 3 planar projections to line up perfectly. Could we have 3 rotation parameters that let us rotate each of the three planar projections?

            2) Can we have a point3 transform parameter so that we can change the "origin" of the projections inside the map?

            3) Right now the effect is in local space. This works well since the texture needs to stick to an object that goes through translate/rotate/scale. However, if you have a complex assembly of objects that all overlap, you frequently want to see the texture cover the entire surface rather than restart itself on every new object. Would it be possible to do a hybrid space that is in world space with regard to the positional origin of the texture, but is actually in local space so that it respects the objects being translated / rotated / scaled? That's basically the way my blended box map works.
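
            For what it's worth, requests 1 and 2 taken together amount to something like the following minimal Python sketch (not V-Ray code; every name here is made up for illustration): shift the point by a projection origin, then rotate each of the three planar UV pairs independently.

```python
import math

def rotate2d(u, v, degrees):
    """Rotate a 2D UV coordinate about the origin (twists one planar projection)."""
    r = math.radians(degrees)
    c, s = math.cos(r), math.sin(r)
    return u * c - v * s, u * s + v * c

def triplanar_uvs(p, origin=(0.0, 0.0, 0.0), rot_x=0.0, rot_y=0.0, rot_z=0.0):
    """Return the three planar UV pairs for point p, after shifting the
    projection origin (request 2) and rotating each planar projection
    independently (request 1)."""
    x, y, z = (p[i] - origin[i] for i in range(3))
    uv_x = rotate2d(y, z, rot_x)  # projection along X uses the YZ plane
    uv_y = rotate2d(x, z, rot_y)  # projection along Y uses the XZ plane
    uv_z = rotate2d(x, y, rot_z)  # projection along Z uses the XY plane
    return uv_x, uv_y, uv_z
```

            With all rotations at zero this reduces to the usual triplanar coordinates; non-zero per-axis angles break up the alignment between the three projections, which is exactly what point 1 asks for.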

            You rock Vlado!

            - Neil



            • #36
              Here's a UI mockup...

              [Attachment: triplanar.jpg (UI mockup)]

              - Neil



              • #37
                Maybe instead of hybrid you could do something like a "pick node" option, similar to the samplerinfo position pass, where you choose a scene node or a helper that sets the centre transform for the three projections?



                • #38
                  Originally posted by soulburn3d View Post
                  3) Right now the effect is in local space. This works well since the texture needs to stick to an object that goes through translate/rotate/scale. However, if you have a complex assembly of objects that all overlap, you frequently want to see the texture cover the entire surface rather than restart itself on every new object. Would it be possible to do a hybrid space that is in world space with regard to the positional origin of the texture, but is actually in local space so that it respects the objects being translated / rotated / scaled? That's basically the way my blended box map works.
                  I can't do that in the shader. I can only allow you to pick a NULL helper that can be used as the starting point. Is that OK?
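
                  Mechanically, the NULL-helper approach just means expressing the shading point in the helper's coordinate system, so every object using the texture shares one projection origin that follows the helper. A rough Python sketch of that step (row-major 3x4-style matrix; all names are hypothetical, not V-Ray API):

```python
def to_helper_space(world_point, helper_inverse_tm):
    """Multiply the world-space shading point by the helper's inverse
    transform; the result is the lookup position for the projection."""
    x, y, z = world_point
    return tuple(row[0] * x + row[1] * y + row[2] * z + row[3]
                 for row in helper_inverse_tm[:3])

# A helper sitting at world (10, 0, 0): its inverse transform subtracts that,
# so the pattern's origin rides along with the helper.
inv_tm = [[1.0, 0.0, 0.0, -10.0],
          [0.0, 1.0, 0.0, 0.0],
          [0.0, 0.0, 1.0, 0.0]]
```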

                  Best regards,
                  Vlado
                  I only act like I know everything, Rogers.



                  • #39
                    Originally posted by vlado View Post
                    I can't do that in the shader. I can only allow you to pick a NULL helper that can be used as the starting point. Is that OK?
                    A pick node (points, nulls, etc) would be a good compromise.

                    But is there no way to have it always choose say 0,0,0 as the origin of the pattern, and retain the locking ability of local space?

                    Or I know some people have proposed a helper modifier that is basically just a gizmo, but whose location can control the location of the pattern.

                    - Neil



                    • #40
                      Originally posted by soulburn3d View Post
                      But is there no way to have it always choose say 0,0,0 as the origin of the pattern, and retain the locking ability of local space?
                      I can't imagine how that can be made to work from within a shader without extra info. If you show me the math, then I'm fine

                      Best regards,
                      Vlado
                      I only act like I know everything, Rogers.



                      • #41
                        If I knew the ins and outs of writing max shaders, I'd have written it myself. My version is just a script using 3 mapping modifiers, which has the advantage of being able to lock all the patterns together, but the disadvantage of having to copy modifiers around from object to object (and a horrible looking map graph), which is why I'd love to swap over to this map.

                        Let's do the pick node then. Thanks!

                        - Neil
                        Last edited by soulburn3d; 28-09-2015, 09:54 AM.



                        • #42
                          Originally posted by soulburn3d View Post
                          But is there no way to have it always choose say 0,0,0 as the origin of the pattern, and retain the locking ability of local space?
                          - Neil
                          Originally posted by vlado View Post
                          I can't imagine how that can be made to work from within a shader without extra info. If you show me the math, then I'm fine

                          Best regards,
                          Vlado
                          Maybe I'm misreading the request, but Neil, aren't you just asking for a 3-axis offset? In pseudocode:
                          vec3 Offset = vec3(-10, 10, 50);
                          vec3 LocalPos = vr_Position * vr_CameraToObjectMatrix;
                          LocalPos += Offset;
                          vec4 TexColor = FancyVrayTriMapSampler(Map, vr_normal, LocalPos);
                          Gavin Greenwalt
                          im.thatoneguy[at]gmail.com || Gavin[at]SFStudios.com
                          Straightface Studios



                          • #43
                            Scratch that; I re-read what Neil wants a few more times, more closely. You don't want the same offset in every object; you want the same relative origin in different objects, but in the shader. Sorry I misunderstood what you wanted, Neil, and what you were proposing, Vlado.

                            Vlado's solution of just picking an object/dummy seems best. Otherwise you would need to do one of two things:

                            A) Pick a frame to be the "sample point". This is how Nuke does a few tricks, and it works great there: in the MatchMove tracker you can specify "frame 30" to be your "zero" frame where all of the transformations are zeroed out. In the case of the map you could say "I want the origin to be [35,21,15] rotated [45, 15, 90] at frame [36]", and all of the objects would natively maintain the same offset relative to where they were on frame 36. This seems like a really bad idea though, since it would mean sampling the object's transform matrix at a different frame millions of times, once per ray. I imagine, Vlado, you already have this capability to some degree for motion blur, but probably not at an arbitrary frame for every node in the scene.

                            If you (Vlado) wanted to go crazy, you could identify which nodes are assigned the map and do it as part of the pre-calc and scene-graph-building phase: store a list of node IDs and their transform matrix at frame N (assuming in V-Ray you can get the node ID in an intersection event, which I assume you can since you have include/exclude for shadow rays etc.). So instead of consulting an include/exclude list and returning a Boolean, you would return the transform matrix at the 'sample' frame for each object. It probably wouldn't take up *that* much memory.
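
                            Option A boils down to a one-time lookup table built during scene translation. A hypothetical Python sketch (`eval_transform` stands in for however the renderer queries a node's matrix at an arbitrary frame; none of these names are real V-Ray API):

```python
reference_transforms = {}  # node id -> transform sampled at the "zero" frame

def build_cache(node_ids, sample_frame, eval_transform):
    """Run once at scene-build time, so shading never has to re-sample the
    scene graph: store each mapped node's matrix at the reference frame."""
    for node_id in node_ids:
        reference_transforms[node_id] = eval_transform(node_id, sample_frame)

def transform_at_shade_time(node_id):
    # An O(1) dictionary lookup per intersection, instead of a per-ray
    # evaluation of the node's animation at frame N
    return reference_transforms[node_id]
```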

                            B) You could write it as a modifier, in which case you could do what the UVW Box modifier does: give you an "Acquire" button which manually stores an offset from the target acquisition's transform. But then it would require Vlado rewriting this as a modifier instead of a shader.
                            Last edited by im.thatoneguy; 30-09-2015, 12:04 AM.
                            Gavin Greenwalt
                            im.thatoneguy[at]gmail.com || Gavin[at]SFStudios.com
                            Straightface Studios



                            • #44
                              I think Neil wants to apply the map on a bunch of objects as though they are one object (so that the map offset is properly displaced from one object to another based on their local offset relative to one another), but when those individual objects are moved/rotated/translated, they should keep the mapping that they had in their original position.

                              There's no way to do this from a shader alone; there must be a way to "remember" the original object transformation somehow; this is what the UV channels basically do. I want to avoid UV channels, but something should be able to tell the texture shader what to do. I was thinking that this could be done with node user attributes. I.e. you can select a bunch of objects; in the texture press a button "remember transform" which would write the current transformation into the node user attributes, and then the shader can read that transformation from there during rendering.
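
                              The user-attribute idea could look roughly like this (a Python sketch of the flow only, not actual V-Ray or 3ds Max API; the attribute name and both function names are invented):

```python
user_attributes = {}  # node -> {attribute name: value}; stands in for node user attributes

def remember_transform(nodes, current_tm_of):
    """The "remember transform" button: bake each selected node's current
    transform into its user attributes."""
    for node in nodes:
        user_attributes.setdefault(node, {})["ref_transform"] = current_tm_of(node)

def shader_read_transform(node, live_tm):
    """At render time the shader reads the baked transform back, falling
    back to the live transform if the node was never baked."""
    return user_attributes.get(node, {}).get("ref_transform", live_tm)
```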

                              Best regards,
                              Vlado
                              I only act like I know everything, Rogers.



                              • #45
                                Originally posted by vlado View Post
                                I think Neil wants to apply the map on a bunch of objects as though they are one object (so that the map offset is properly displaced from one object to another based on their local offset relative to one another), but when those individual objects are moved/rotated/translated, they should keep the mapping that they had in their original position.
                                So in my second wish I was looking for a single vector that offsets all of the patterns equally.

                                But then in my third wish I was describing exactly what you describe above Vlado.

                                Originally posted by vlado View Post
                                I was thinking that this could be done with node user attributes. I.e. you can select a bunch of objects; in the texture press a button "remember transform" which would write the current transformation into the node user attributes, and then the shader can read that transformation from there during rendering.
                                That's a good idea as well. The one downside is that it requires a baking phase to put the attributes into the objects. So if you move the objects or add new objects, you need to remember to rebake. I don't like UVs, but I'm also not a huge fan of baking, since it slows down the iterative process (and if you forget to rebake, you end up spending time debugging why your pattern looks wrong).

                                I think I like your original idea of having a node to pick better. All of my object sets have a point helper for rigging purposes, and the objects transform with the point. So even though I have to pick the node at the beginning, once I've picked the node, I can move the objects, edit their meshes, and add new ones, and the shader will just work without the need to do any extra baking procedure.

                                So if I had to vote, I'd say the node feature would be more helpful to me. But if there's time, adding both the node and the transform baking would give people the maximum options.

                                - Neil
