I know there was some discussion about the speed increase from using an SSD to read and write particles — was there any new testing with 2.0?
Currently speccing out a new PC to build, and I wanted to put some benchmarks in front of my boss to show him the performance increase.
I'm also researching high-end Radeon 6000 series cards to help push V-Ray around. Is there anything in Frost / Krakatoa that
benefits from high-end GPU processing?
If I missed a thread somewhere already discussing the benchmarks, please just point me that way, and I'll apologize for my sloppy searching
skills.
Even without any changes to the hardware and reading a single PRT file (no multi-threaded reading), Krakatoa 2.0 should read up to two times faster than 1.6.x.
If you are loading multiple partitions in the same PRT Loader, the HDD or SSD will most probably be the bottleneck, since we attempt to read with as many threads as you have cores. So if you have 8 cores, 8 reading threads, and a fast drive, Krakatoa 2 should be many times faster than 1.6.x…
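The per-core reading pattern described above can be sketched roughly like this (an illustrative Python sketch only, not Krakatoa's actual implementation; `read_partition` is a hypothetical stand-in for a real PRT reader):

```python
import os
from concurrent.futures import ThreadPoolExecutor

def read_partition(path):
    """Stand-in for a real PRT reader: just slurp the file's bytes."""
    with open(path, "rb") as f:
        return f.read()

def load_partitions(paths):
    # One reading thread per core; with enough partition files queued,
    # the drive (HDD/SSD) becomes the bottleneck rather than the CPU.
    workers = os.cpu_count() or 1
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(read_partition, paths))
```

The point is that a single partition is read by one thread, so the multi-threaded speedup only kicks in when several partitions are loaded through the same PRT Loader.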
At Siggraph, we put a (three-year-old) Fusion-io card into our Big Machine and tested using 80 partitions with a total of 50 million particles, duplicated 4 times for a total of 200 MP. The loading time from the card was 9.5 seconds — in other words, less than 5 seconds per 100 MP, or around 21 MP/sec. There were no materials or KCMs assigned to these particles; they were loaded from the drive into memory as saved.
We have also performed some synthetic tests using our new PRT Creator object which produces the particles procedurally without reading from disk. Loading from it is also multi-threaded, and the results were identical - the best loading performance I got was 20.96 MP/sec, which means that the Fusion-io card was not saturated, we had most probably saturated the ability of Krakatoa to manage memory. Both sets of tests were performed on exactly the same machine, so we can assume that 21 million particles per second is what Krakatoa 2.0 can deliver on that particular hardware.
With default settings, 21 million particles require 640 MB of RAM. Since we were reading compressed PRTs from Fusion-io, the data that had to be read was about half that, and in fact the Fusion-io monitor showed a peak of about 300 MB/s, most of the time less. Chances are an SSD won’t be able to keep up with that, but who knows…
Need a Bobo Happy Buddha benchmark scene! With 100k, 1 million, and 10 million particles filling the Happy Buddha, I will clock a couple of iterations with my SATA III SSD.
I tell you the thing literally smokes with passes comped in Fusion! I am still a bit leery of it though, I don’t feel confident enough that I wouldn’t keep the only copy of a project on it.
Sweet! Thanks for the info! And yeah Johnny, I'd be curious to see what you pull over there. Thanks again for the help –
I've been running an 8-core Mac with 16 GB of RAM and just standard drives. I try to keep one drive just for loading / saving PRTs
to keep that load apart from the system drive / network. But I usually run into bottlenecks with multiprocessing, mainly
PFlow — so much of it is just legacy stuff, especially being on an old build. When I do hit things that are multi-threaded, it definitely
holds its weight.
I'm excited to actually get onto a proper PC, and hopefully start getting up to date with our software, heh.