I was thinking about a way to show the new possibilities we have in Implicit Shape to transfer color and UVs, applied in a way that is new and not widely used, and that can work for video mappings or mograph pieces. Here it is.
The fluid can have any texture: at the beginning you don't understand what you are seeing, and at the end the complete image is formed. It is not a reversed simulation (although that can also be an option). In short, this is how it is done: we create a standard particle animation (in these examples mostly with Flow in TP), we cache it, and we retime the cache, adding the frame where we want the texture to be complete at the beginning of the animation; after that the animation plays normally. We just need to transfer color or UVs on that first frame, and to move this data to the Implicit Shape you can follow this tutorial: Transfer color to Implicit Shape in TP.
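The retiming step can be sketched in plain Python (the frame numbers and function name are illustrative only; the actual retime is done with TP's cache tools):

```python
def retimed_frame(frame, reveal_frame, hold=1):
    """Map a playback frame to a cached-simulation frame.

    Frame 0 shows `reveal_frame` (the texture fully formed),
    then playback continues normally from the start of the cache.
    """
    if frame < hold:
        return reveal_frame      # show the finished state first
    return frame - hold          # then play the sim normally

# Frame 0 shows the complete texture (cached frame 100),
# then frames 1, 2, ... play cached frames 0, 1, ...
timeline = [retimed_frame(f, reveal_frame=100) for f in range(5)]
```

Because the texture is transferred on that held first frame, the particles carry their UVs with them as the cache then plays forward.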
However, there are some considerations to keep in mind when using this technique:
-We cannot create particles over time:
All particles need to be present from the first frame; we cannot create them over time (rate/s) or delete them, otherwise we will most likely have problems when we transfer the color. So when you born them, always use Shot on the first frame. If you need them to appear over time, there is a trick: born them all at the beginning in a "wait" group, and then send them over time to the "active" group. All particles in "wait" can be invisible, so even though they are in the scene and TP takes them into account, you will not see them, and the result is exactly the same as creating them over time. I use this technique in most of the examples in the video.
The same idea works if you need to delete particles: instead of using a Kill node, just send the particles you don't want into a "DIE" group with visibility disabled. I used this on the sphere being filled with liquid: when I pour the liquid into the sphere, some particles escape, and I simply send them to another group if they are outside the sphere.
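The group-routing trick above can be modeled in a few lines of Python (a toy sketch with hypothetical dict-based particles, not the real TP node setup; in TP the routing is done with Group nodes and conditions):

```python
def route_groups(particles, frame, rate_per_frame, sphere_radius=1.0):
    """Emulate birth-over-time and deletion by moving particles
    between groups instead of creating or killing them.

    All particles exist from frame 0; "wait" and "DIE" are rendered
    invisible, so visually they behave as unborn/deleted particles.
    """
    for p in particles:
        # "birth" over time: promote N particles per frame to active
        if p["group"] == "wait" and p["id"] < frame * rate_per_frame:
            p["group"] = "active"
        # "deletion": particles that leave the sphere go to DIE
        if p["group"] == "active":
            x, y, z = p["pos"]
            if x * x + y * y + z * z > sphere_radius ** 2:
                p["group"] = "DIE"
    return particles

particles = [{"id": i, "pos": (0.0, 0.0, 0.0), "group": "wait"}
             for i in range(10)]
particles[0]["pos"] = (2.0, 0.0, 0.0)   # outside the unit sphere
route_groups(particles, frame=2, rate_per_frame=3)
```

Because every particle exists from frame 0, the particle count in the cache never changes, which is what keeps the color/UV transfer stable.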
-Cached DataChannel problem:
This is kind of odd. Not really a bug, but something that could be improved in future TP versions. To use color or UVs on the Implicit Shape we need to store them in a DataChannel. In most of the examples (except the zombie) I write the DataChannel information post-cache, since we first need to retime the cache and then store the particle info on that first frame. However, if you create the UV (or color) DataChannel before caching, overriding this data post-cache on the first frame works, but afterwards TP keeps trying to read this data from the cache (even though there is nothing there) and does not keep the data we just created.
There are different solutions. One is to do the cache without the UVW DataChannel created, but if we re-sim multiple times, creating and deleting the DataChannel every time can be a pain, and we can forget it. Another option is to send all the particles to a new group: the original group where we create the cache has no DataChannel, and post-cache we send the particles to "active2", where the UVW DataChannel is created; this solves the problem. The option I use in these examples is a new utility in TP since Drop 4: we can select what info we want to read or write on a cached DynamicSet. So basically I set it to not read any DataChannel from my cache, forcing TP to use the data created post-cache. The small problem is that there is no way to pick a specific DataChannel: it disables reading from all of them. If you only use data for color and UVs, as in my case, this is totally fine, but if you need to access other data post-cache this will not be an option.
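Why the post-cache write gets lost, and why disabling the channel read fixes it, can be shown with a toy Python model (class and method names are invented for illustration; this is not the TP API):

```python
class CachedPlayback:
    """Toy model of playing back a cached DynamicSet.

    If `read_data_channels` is True, the cache read on every frame
    overwrites the channel, clobbering values written post-cache.
    """
    def __init__(self, cache, read_data_channels):
        self.cache = cache              # frame -> {channel: value}
        self.read = read_data_channels
        self.channels = {}

    def play_frame(self, frame):
        if self.read:
            # the cache has no UVW data, so the channel gets wiped
            self.channels["uvw"] = self.cache.get(frame, {}).get("uvw")

    def write_post_cache(self, channel, value):
        self.channels[channel] = value

# Channel reads enabled: the post-cache UVW write is lost next frame.
bad = CachedPlayback(cache={0: {}, 1: {}}, read_data_channels=True)
bad.play_frame(0)
bad.write_post_cache("uvw", (0.5, 0.5, 0.0))
bad.play_frame(1)                        # cache read wipes the data

# Reads disabled (the Drop 4 utility): the written data survives.
good = CachedPlayback(cache={0: {}, 1: {}}, read_data_channels=False)
good.play_frame(0)
good.write_post_cache("uvw", (0.5, 0.5, 0.0))
good.play_frame(1)
```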
-Techniques to save color or UVs:
So how can we import color or UVs to the particles from other objects? In the examples I show three different techniques, but there are surely many more options.
In the plane examples I use an Intersect node looking down (you can use any direction necessary) and I output the UVW info found by the ray.
For the spherical examples it is the same technique, but instead of a single fixed direction, I create a direction per particle, using the particle position and the position of the original object, so each particle looks at the center of the sphere. When doing this, make sure the Intersect node is pointing in the right direction (otherwise swap the position inputs to invert your lookup vector). It can also be a good idea to check "2 Sided" inside the Intersect node, so that even if the intersect ray hits the back of a polygon it will still output information.
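The two ray directions can be sketched like this (plain Python with hypothetical function names; in TP this is wired with Position helpers feeding the Intersect node):

```python
def downward_lookup(particle_pos):
    """Fixed ray direction, as in the plane examples."""
    return (0.0, 0.0, -1.0)

def spherical_lookup(particle_pos, sphere_center):
    """Per-particle direction toward the object's center, normalized.

    Swapping the two position arguments inverts the lookup vector,
    which is the fix when the Intersect node points the wrong way.
    """
    d = tuple(c - p for p, c in zip(particle_pos, sphere_center))
    length = sum(v * v for v in d) ** 0.5
    return tuple(v / length for v in d)
```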
But the technique used on the zombie is maybe the one that works in most scenarios. I use geometry helpers to read the UVs of the closest geometric point, so you don't have to deal with vector directions; it simply reads the closest point, which is what you want most of the time. Using a distance threshold, I can set a solid color for anything farther away than a given distance (to create a different color for the interior of an object, for example, as used in the fill-sphere example).
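The closest-point lookup with a distance threshold can be sketched as follows (a simplified Python model: surface samples stand in for the mesh that a geometry helper would actually query):

```python
def sample_color(particle_pos, surface_points, threshold, interior_color):
    """Read the color of the closest surface sample; if everything
    is farther than `threshold`, fall back to a solid interior color
    (e.g. for particles deep inside the object).

    `surface_points` is a list of (position, color) pairs.
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    pos, color = min(surface_points,
                     key=lambda pc: dist(particle_pos, pc[0]))
    if dist(particle_pos, pos) > threshold:
        return interior_color
    return color

surface = [((0.0, 0.0, 0.0), "red"), ((5.0, 0.0, 0.0), "blue")]
```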
On top of that we can create UVs randomly, by position, or by time. UVW is a Point3 with values ranging between 0 and 1, so you can also get creative here.
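A few of these procedural UVW options, as minimal Python sketches (the function names and mappings are just examples of what a TP math setup could compute):

```python
import random

def uvw_random(rng):
    """A random UVW: any Point3 in the 0-1 range is valid."""
    return (rng.random(), rng.random(), 0.0)

def uvw_from_position(pos, bbox_min, bbox_max):
    """Map a particle position inside a bounding box to 0-1 UVW."""
    return tuple((p - lo) / (hi - lo)
                 for p, lo, hi in zip(pos, bbox_min, bbox_max))

def uvw_from_time(frame, total_frames):
    """Scroll U over time, for a texture that sweeps the animation."""
    return (frame / total_frames, 0.5, 0.0)
```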
-What to use, color or UVs?
We can use color or UV information; each one has pros and cons. In the video examples I use UVs. UVs give much more detail, since each particle reads one UV in space and the Implicit Shape re-interpolates the UVs, so all texture detail is preserved regardless of Implicit Shape resolution or particle count. The only problem is that if your object has visible seams (an unwrapped model), you will see the seams on the Implicit Shape.
If we use color information, every particle has a color assigned and this color is passed to the Implicit Shape, which blends it with the colors of all nearby particles. With color, the level of detail depends on particle count and Implicit Shape resolution, so with a small number of particles you will get a low-res texture even if your original texture is hi-res. The good things about color are that seams will never be visible and that you can modify colors as you wish: change them based on velocity, density, or whatever you can imagine.
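One example of recoloring by velocity, sketched in Python (the blue-to-white mapping is an arbitrary choice for illustration):

```python
def color_from_speed(velocity, max_speed):
    """Blend from blue (slow) to white (fast) by velocity magnitude,
    one way to modify particle colors when using color data.
    """
    speed = sum(v * v for v in velocity) ** 0.5
    t = min(speed / max_speed, 1.0)     # normalized 0-1
    return (t, t, 1.0)                  # RGB: blue -> white
```

The same pattern works for density or any other per-particle value: normalize it to 0-1 and feed it into a color ramp.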
-Using RealFlow, or Krakatoa or Genome with Frost:
All the examples are created using only TP for fluid creation, caching, texture info, and meshing. However, other combinations can be used.
If you are a RealFlow user, you can easily do your sims in RealFlow, save them as .bin files, import them into TP with the importer that reads .prt and .bin files, and follow the same method described from there. If you need a huge amount of particles, it can be a good idea to cache them using Krakatoa with partitions, for example. To do that, you have already created the DataChannel to transfer color or UVs, so simply remember to save this channel in the .PRT; later you will be able to use MagmaFlow to assign the color as vertex color information to be transferred to Frost. If you want to use UVs you will need Genome. I will not show the process here since the basics are covered above; just check the Krakatoa/Genome docs if you need to go this way.
So this is all for this tutorial! I hope you can see the potential and create nice new FX for mappings or motion graphics. I think the possibilities are huge: think about a facade with water pouring out of different windows with a weird texture, slowly covering the whole facade and finally revealing the final image (a free idea from me!), for example.
If you like the tutorial and would like to donate, here are all the examples shown in the video: a total of 5 scenes with variations of the techniques featured here. Be advised: you need 3ds Max 2015 or later and Thinking Particles 6 Drop 4 or later. I don't include any textures (all were from random Google searches), and in the zombie scene I replaced the zombie (downloaded from bidgem3d.com) with a teapot. You just need to cache the files and use any texture you like. Thinking Particles Drop 4 has a bug loading the first frame of a cache; this will be solved in Drop 5 (I'm on a beta where it is already fixed). If you are not yet on Drop 5, you can work around it in Drop 4 by freezing the frame you want for two frames instead of one, and transferring the UV information on the second frame instead of the first.