Stochastic flakes plugin, take 2
Originally posted by Mnietek2707: TriplanarTex doesn't work with 2d displacement. Any chance to change it?

Best regards,
Vlado
Comment
-
Out of interest I created an OSL triplanar shader. Obviously the VRayTriplanarTex plugin is a better option, but here is the shader in case anyone is interested.
Either unpack the attached ZIP or save the code below to a file called triPlanar_v001.osl
Use the VRayOSLTex map (not material) and load in the OSL file.
Code:
/*
triPlanar_v001

Basic TriPlanar shader. Maps three maps onto an object from three
different sides and blends between them on the edges. Good for noisy
textures like rock to get the appearance of seamless blends without
UV mapping.

Created 2015/12/04 by Rens Heeren (mail@rensheeren.com)

Usage:
Blend is the blend amount between the sides (softness).
Scale is the texture size.
You can use the texture input of X for Y and Z with use_texX_for_all.

Todo:
- Add a better blend function.
- Add different coordinate systems (uses only local space now).
- Add transforms.
- Add blend noise.

UVs:
x = y,z
y = x,z
z = x,y
*/

shader triPlanar_v001
(
    string tex_X = "",
    string tex_Y = "",
    string tex_Z = "",
    int use_texX_for_all = 1 [[ string widget = "checkBox" ]],
    float blend = 0.4,
    float scale = 1.0,
    output color out_tex = color(0.0),
    output color out_nrm_object = color(0.0),
    output color out_nrm_blend = color(0.0),
    output color out_pos = color(0.0)
)
{
    float fBlend = blend;
    float fScale = 1/scale;
    int iUseOnlyX = use_texX_for_all;

    point pObject = transform ("common", "object", P);
    normal nObject = transform ("common", "object", N);

    normal nObjectBlend = clamp (((((abs (nObject))-0.5) * 1/fBlend)+0.5),0,1);
    float fNormalizeVal = nObjectBlend[0]+nObjectBlend[1]+nObjectBlend[2];
    nObjectBlend /= fNormalizeVal;

    color colNrm = abs (color (nObject[0],nObject[1],nObject[2]));
    color colNrmBlend = color (nObjectBlend[0], nObjectBlend[1], nObjectBlend[2]);

    float fUX = pObject[0] * fScale;
    float fUY = pObject[1] * fScale;
    float fUZ = pObject[2] * fScale;

    color colTexX, colTexY, colTexZ;

    colTexX = texture (tex_X, fUY, fUZ);

    if (iUseOnlyX)
    {
        colTexY = texture (tex_X, fUX, fUZ);
        colTexZ = texture (tex_X, fUX, fUY);
    }
    else
    {
        colTexY = texture (tex_Y, fUX, fUZ);
        colTexZ = texture (tex_Z, fUX, fUY);
    }

    out_tex = (colTexX*colNrmBlend[0]) + (colTexY*colNrmBlend[1]) + (colTexZ*colNrmBlend[2]);
    out_nrm_object = colNrm;
    out_nrm_blend = colNrmBlend;
    out_pos = color (pObject[0],pObject[1],pObject[2]);
}
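A quick note on how the blending works: the weights come from the absolute object-space normal, remapped around 0.5 by the blend parameter, clamped and then normalized so the three projections always sum to one. Larger blend values give softer, wider transitions at the edges, and the out_nrm_blend output lets you visualize the weights directly.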
Attached Files
Comment
-
I have a model that looks perfect using the VRayTriplanarTex texture. Now I need to send it to Unity and want to bake the textures out.
As the UVW coordinates are handled by the VRayTriplanarTex texture inside the shader, is it possible to render out the shader using Render To Texture?
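From what I understand it should work in principle, since Render To Texture shades each surface point into the unwrap and so should freeze a procedural map like the triplanar into the baked bitmap. This is the scripted route I am planning to try, untested and with placeholder path/size/channel values, using the standard INodeBakeProperties interface and V-Ray's VRayCompleteMap bake element:

Code:
-- Untested sketch: bake the selected object's material to a VRayCompleteMap.
-- Assumes the object already has non-overlapping UVs in map channel 1
-- (otherwise apply an automatic flatten first).
obj = selection[1]

bp = obj.INodeBakeProperties
bp.removeAllBakeElements()
bp.bakeEnabled = true
bp.bakeChannel = 1        -- UV channel the bake is rendered into
bp.nDilations = 4         -- edge padding in texels

be = VRayCompleteMap()    -- full shaded result, lighting included
be.outputSzX = 1024
be.outputSzY = 1024
be.fileType = "c:\\bakes\\triplanar_bake.tga"   -- placeholder path
be.fileName = filenameFromPath be.fileType
be.elementName = "VRayCompleteMap"
be.enabled = true
bp.addBakeElement be

select obj
render rendertype:#bakeSelected frame:#current vfb:off outputSize:[1024,1024]

If lighting shouldn't end up in the bake, one of the filter-type elements such as VRayDiffuseFilterMap might be closer to what Unity expects than the complete map.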
Comment
-
Been struggling to get my head around this all day.
I have rendered an image of a high-res car interior using the box camera type in V-Ray.
I can select my entire low-poly car interior and apply a V-Ray material with a VRayTriplanarTex texture in the self-illumination map slot.
In it I use a VRayHDRI map set to cubic mapping, and I pick the camera I rendered the cube image from as the reference node.
If I render the inside of the car interior, even though it is low poly, the high-res image is mapped perfectly.
So far so good.
If I Render To Texture (either with existing UVs or using automatic unwrapping) and generate a VRayCompleteMap, I was hoping it would map the baked image perfectly back onto the model, but it does not.
I am thinking it is because Render To Texture does no translation of the cubic mapping type in the VRayHDRI map... maybe... to be honest I'm a bit lost.
Comment
-
I should probably have been clearer and said why I wanted this to work.
The car is made up of thousands of objects, so projecting from high-res to low-res clean geo would be a lot of work.
So I need to be able to project onto multiple objects at once. In the end I am camera projecting from 6 cameras, and I think it will work fine, but it is still complicated to manage.
I realize that this isn't really what the triplanar texture was for, but it worked perfectly for taking a box render from V-Ray and UVW mapping it correctly back onto the geo. I have always been a bit confused about what happens behind the scenes between 3D space, UVW space, and what the render engine finally outputs, so maybe I am asking for something that V-Ray can't do, or something that can already be done and I just couldn't figure out how. But if it can be done (maybe I didn't do it right), or could be made so that the cubic map type bakes down with Render To Texture, it would allow for a very fast workflow (no UVW mapping) to create nodal-camera VR experiences with V-Ray.
Comment