camera mapping

Regarding the cool Mini Cooper SIGGRAPH demo.



I looked through your tutorials on baking UVs to particles using the Box #3 data flow and the scripted method; I also tried the Box #1 mapping object.



Box #1 was slightly faster than the data flow method, and both worked much faster than the script.



All 3 methods are painfully slow once you start cranking up the particle counts.

Initially I felt this was not a big deal, as only the first frame was very slow.

But when attempting to mimic the Mini Cooper effect, a full 3D orbit of the car needs to be mapped onto the particles, and since only the first frame is evaluated this does not work.

And you can't evaluate at every frame (too slow), as it would be as if you were simply projecting onto the Krak loader.





any ideas?



cheers



sam

Sam,



As mentioned already, we used our Frantic Camera Map plugin to do the Mini. In fact, I am using the same procedure right now in production and it has no slowdowns whatsoever.



I have no idea whether we have any plans to make our Camera Map texture available commercially as it is the sort of strategic piece of software that makes our pipeline work better than other people’s. ;o)



I provided the tutorials for those who really really wanted to recreate the effect regardless of time, but obviously we do not use these methods in production for the reasons you outlined.

" In fact, I am using the same procedure right now in production and it has no slow downs whatsoever."



That's fantastic; I shall have to make an attempt to recreate that method.





Time constraints aside,



unless I'm mistaken (very possible, heh), the Mini Cooper effect could not be recreated using the methods outlined in the tutorial?










"Time constraints aside, unless I'm mistaken (very possible, heh), the Mini Cooper effect could not be recreated using the methods outlined in the tutorial?"





Actually, the Box #3 tutorial recreates exactly what the Frantic Camera Map would have done; it is just a slower hack. So you are mistaken: speed aside, it is identical. Oh, and it needs to take into account texture animation, something I have not tried with the tutorial's setup.


Where are you getting the particles from? If it’s emitting off an object, can’t you just inherit from the emitting face’s TVs?



Our solution of course is to use voxels, but that's going to require a very expensive camera. :)


  • Chad

That's good to hear.

I will test further then and report my findings.

I would have to adjust the data flow to evaluate every frame, though?





Chad

Yes, I'm emitting from an object, and I'm pretty sure the methods outlined in Bobo's tutorial are using the UV inherit method.

Not sure what you mean about using voxels, though?

Interested to hear.





thx

No, they’re based on figuring out the cameraspace position of the particles. It’s the best, most accurate, and most universal method, but you might be able to get away with something cheaper.
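The cameraspace-position idea above can be sketched in a few lines: project each particle's world position through the camera and use the resulting screen coordinates as UVs. This is a minimal pinhole-camera sketch of the general technique, not the Frantic Camera Map plugin; all function names and parameters here are my own assumptions.

```python
import numpy as np

def camera_map_uv(points_world, cam_to_world, fov_deg, aspect):
    """Hypothetical sketch: project world-space particle positions through
    a pinhole camera (looking down -Z) and return (u, v) in [0, 1]."""
    world_to_cam = np.linalg.inv(cam_to_world)
    # promote to homogeneous coordinates and move into camera space
    pts = np.hstack([points_world, np.ones((len(points_world), 1))])
    cam = pts @ world_to_cam.T
    x, y, z = cam[:, 0], cam[:, 1], cam[:, 2]
    # half-width of the image plane at unit distance, from the horizontal FOV
    half_w = np.tan(np.radians(fov_deg) / 2.0)
    # perspective divide, then remap from [-1, 1] to [0, 1]
    u = (x / -z) / half_w * 0.5 + 0.5
    v = (y / -z) / (half_w / aspect) * 0.5 + 0.5
    return np.stack([u, v], axis=1)

# A particle straight ahead of a camera at the origin maps to the image center.
cam_tm = np.eye(4)  # camera at origin, looking down -Z (assumed convention)
uv = camera_map_uv(np.array([[0.0, 0.0, -10.0]]), cam_tm, fov_deg=90.0, aspect=1.0)
print(uv)  # ~[[0.5, 0.5]]
```

Because the UV is computed from the particle's position at each frame, freezing it at a "map time" (as the Frantic tool and the Box #3 hack do) is what distinguishes this from just projecting onto the loader every frame.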

Hmmm, sounds interesting. Anywhere I can find out more about this method?





cheers

The idea is just to map the emitting object, then get the mapping for the particles from that. The problem is that it's not per pixel but per TV, which might not be an issue in some cases.
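The per-TV inheritance described above amounts to barycentric interpolation: a particle born on a face gets the blend of that face's texture vertices at its birth position. A minimal sketch, with assumed example data (this is the generic math, not any specific Particle Flow operator):

```python
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric coordinates of point p within triangle (a, b, c)."""
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return 1.0 - v - w, v, w

def inherit_uv(birth_pos, face_verts, face_uvs):
    """Interpolate the emitting face's UVs at the particle's birth position."""
    u_, v_, w_ = barycentric(birth_pos, *face_verts)
    return u_ * face_uvs[0] + v_ * face_uvs[1] + w_ * face_uvs[2]

# Assumed example: a unit triangle in the XY plane with matching simple UVs.
verts = [np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
uvs   = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
print(inherit_uv(np.array([0.25, 0.25, 0.0]), verts, uvs))  # [0.25 0.25]
```

Since the UVs only vary linearly across each face, the result can never be finer than the mesh's TV resolution, which is exactly the per-TV (rather than per-pixel) limitation mentioned above.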