I sure wish I could do a “Show Data” ala Box#3 with just a few (or even one) sample particles.
Definitely second that! Diagnostics all the way!
Regards,
Thorsten
I third it…
You guys have to wait in line, I was first there
I have already “requested” this internally, at least for a single sample particle (potentially with a control that specifies which particle to use). Given the speed of the system, I don’t see why it couldn’t show more than one.
Stay tuned!
Yeah, just one particle would be enough, maybe with a “randomize button”? But more would be pretty sweet too.
I was trying to set up the absorption color by taking the target color, inverting it, and offsetting it, but a quick double-check on the resulting triplet would have put my mind at ease.
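The invert-and-offset setup described above can be sketched in a few lines. This is just a toy Python sketch, not Krakatoa or KCE code, and `absorption_from_target` is a hypothetical helper name; the idea is simply that each channel of the absorption triplet is the inverse of the target channel plus an offset:

```python
def absorption_from_target(target_rgb, offset=0.0):
    # Hypothetical helper: invert each channel of the desired target
    # color and apply an offset. Negative results are deliberately not
    # clamped, since negative absorption is usable for glow effects
    # (as with the glowing red skull mentioned below).
    return tuple((1.0 - c) + offset for c in target_rgb)

# A warm target color yields a cool-ish absorption triplet.
print(absorption_from_target((0.9, 0.6, 0.2)))
```

Being able to see the resulting triplet in a debug view is exactly the kind of double-check being asked for here.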
BTW, the negative absorption seems fine to me. I ended up making a sabertooth tiger with a glowing red skull, pretty crazy looking. What are the chances this could be made to work with particles (not voxels)?
To quote Darcy, “Someday, but not now”.
What are the chances to see that tiger?
It’s a fossil, not a tiger, but yeah, I’ll post it here today.
The problem I have with the voxels (and why I asked about the particles) is that with particles I get to put the data where I need it, and it’s very high precision. I’m still having a problem with the voxels aliasing; I mean aliasing against the underlying point set, not aliasing in pixels. I get weird things happening when I slowly move something (anything, really). And while I could crank up the filtering and crank down the voxel size, I just end up with a blurry, slow render. I’m still working on it, but I’m just not getting the results I want.
It’s actually a cheetah skull. The fossil one was a different dataset. I get confused sometimes… I’m not happy with it, but it’s a learning experience nonetheless. I’m trying to animate some parameters with the KCE too, and that seems to be working out OK. I need to get this beta on the farm though, as the dataset is only 2 GB in size but takes over an hour per frame.
Can you create a simple animation showing the aliasing you mentioned? That way we will both be on the same page, and I can look at tackling that problem.
Yeah, I’ll get that to you next week.
The GeoVolume shows the problem nicely, though. The particles are made in a grid oriented to the object (or world? I forget), but the planes are oriented to the light and the camera. There’s no voxel sampling setting currently that will ensure that all the voxels contain the same number of particles, unless the camera and light were aligned to the grid, but that’s not going to make a good render.
You could make the voxels very small, say 1/100 of a screen pixel, and set the filter to be large, but that’s super slow, and the large filter size makes everything soft. I’ll try to play with that more next week.
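The grid-alignment point above can be demonstrated in isolation. This is a toy Python illustration under assumed geometry (a 2D particle lattice binned into a rotated voxel frame), not Krakatoa code: when the voxel axes line up with the particle grid, every voxel holds the same number of particles, but rotating the voxel frame (as a camera/light-aligned sampling grid would) makes the per-voxel counts vary, which is the kind of sampling mismatch that can read as aliasing:

```python
import math

def particles_on_grid(n, spacing):
    # A regular n x n lattice of particles, like an object-aligned grid.
    return [(i * spacing, j * spacing) for i in range(n) for j in range(n)]

def voxel_counts(points, voxel_size, angle):
    # Bin particles into voxels whose axes are rotated by `angle`
    # relative to the particle grid (e.g. a camera-aligned frame).
    c, s = math.cos(angle), math.sin(angle)
    counts = {}
    for x, y in points:
        rx, ry = c * x - s * y, s * x + c * y
        key = (math.floor(rx / voxel_size), math.floor(ry / voxel_size))
        counts[key] = counts.get(key, 0) + 1
    return counts

pts = particles_on_grid(20, 1.0)
aligned = voxel_counts(pts, 2.0, 0.0)
rotated = voxel_counts(pts, 2.0, 0.3)
print(sorted(set(aligned.values())))  # a single uniform count when aligned
print(sorted(set(rotated.values())))  # a spread of counts when rotated
```

Shrinking the voxels and widening the filter averages the variation away, which is why that route trades the artifact for softness and render time.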
I guess better late than never: I fifth “it”. Anything to help simplify the complexity would be especially helpful for less savvy folks such as myself.
or a percentage?
I’m going to try the chameleon with the latest build now to check for the clipping issue.
AcinonyxJubatus_AnimatedDensity_C’A01.mp4 (75.3 KB)
For the love of all that is good and Krakatoa-like, please don’t send .mp4 files! I can’t get Fusion to load it, let alone Quicktime’s crappy player. Can you re-post that video in a different format?
Crap. The x264 codec is nearly magical in how well it compresses. I’ll try some other containers… Seems .avi isn’t an allowed extension…
I dunno how much you care to mess around in Fusion, but ffdshow WILL let you play .mp4 files in Fusion (32-bit). When Fusion sees the .mp4 file, it tries it against DirectShow codecs, and ffdshow inserts itself to do the decode.
AcinonyxJubatus_AnimatedDensity_C’A01.rar (74.8 KB)
Just a note that the first phase of a Debug mode just went in today.
It is still sort of a prototype - it does not read the actual data from PRT channels yet but lets you specify stand-in values for any Channel Input to see how they would flow through the KCE. On the positive side, you get to see every value in every Operator and the Output node at once, so tweaking some Value Input node or changing Operator options is reflected by the whole flow immediately. We will see if we can make it work with the actual PRT data later, but it is a great start. Also, it is exposed to MAXScript as a dedicated function so you could develop your own debugging tools around it.
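The stand-in-value idea described above is easy to picture with a toy sketch. This is not the actual KCE or its MAXScript function (which isn’t documented here); `debug_flow` and the operator names are made up purely to show the concept of pushing stand-in channel values through a flow and recording every intermediate value at once:

```python
def debug_flow(operators, stand_ins):
    """Toy debug evaluator (not the real KCE API).

    operators: list of (name, function) pairs evaluated in order; each
               function reads earlier values from a dict.
    stand_ins: dict mapping Channel Input names to stand-in values.
    Returns a trace of (operator name, value) for every node, so a
    tweak to any stand-in is reflected across the whole flow at once.
    """
    values = dict(stand_ins)
    trace = []
    for name, fn in operators:
        values[name] = fn(values)
        trace.append((name, values[name]))
    return trace

# Hypothetical two-operator flow: scale a Density stand-in, then offset it.
flow = [
    ("Scaled", lambda v: v["Density"] * 2.0),
    ("Output", lambda v: v["Scaled"] + v["Offset"]),
]
for name, value in debug_flow(flow, {"Density": 0.5, "Offset": 0.1}):
    print(name, value)
```

Reading real PRT channel data instead of stand-ins would just mean populating that input dict per particle, which matches the “we will see if we can make it work with the actual PRT data later” plan.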