My guess would be no, you would have to wait for us to implement deep image output (which is on the ToDo list anyway). That being said, we would LOVE to hear your ideas about how you think it should work and what workflows you would envision in Nuke or elsewhere. What particle channels would you like to encode in the deep output?
For example, one possible workflow given deep information about color and density would be the ability to insert a 3D geometry object inside a volumetric rendering and change its Z depth position, producing correct coverage both in front of and behind it (both for correct anti-aliasing and especially if the object itself is semi-transparent). In the case of Krakatoa, however, people would also expect an object inside the volume to cast volumetric shadows into the volume, which is unlikely to happen using deep data alone, since lighting and attenuation / absorption are part of the rendering process.
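To make the coverage point concrete, here is a minimal sketch of that idea in Python. It is purely illustrative (not Krakatoa or Nuke code): a deep pixel is assumed to be a list of per-sample depth, premultiplied color, and alpha; a geometry fragment is merged in at its Z position and the samples are flattened front-to-back with the "over" operator, so samples behind an opaque insert are correctly occluded.

```python
# Hypothetical deep-pixel compositing sketch -- sample layout and
# function names are assumptions, not an actual renderer API.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class DeepSample:
    z: float                        # depth of the sample
    rgb: Tuple[float, float, float] # premultiplied color
    alpha: float                    # opacity of this sample

def insert_sample(deep_pixel: List[DeepSample], sample: DeepSample) -> List[DeepSample]:
    """Merge a new sample (e.g. a geometry fragment) into a deep pixel,
    keeping the sample list sorted by depth."""
    return sorted(deep_pixel + [sample], key=lambda s: s.z)

def flatten(deep_pixel: List[DeepSample]) -> Tuple[Tuple[float, float, float], float]:
    """Composite samples front-to-back with the 'over' operator."""
    out_rgb = [0.0, 0.0, 0.0]
    out_a = 0.0
    for s in sorted(deep_pixel, key=lambda s: s.z):
        w = 1.0 - out_a  # transparency remaining in front of this sample
        for i in range(3):
            out_rgb[i] += w * s.rgb[i]
        out_a += w * s.alpha
    return (out_rgb[0], out_rgb[1], out_rgb[2]), out_a

# Example: two semi-transparent volume samples, with an opaque
# geometry fragment inserted between them at z = 5.
volume = [DeepSample(2.0, (0.1, 0.0, 0.0), 0.25),
          DeepSample(8.0, (0.0, 0.1, 0.0), 0.25)]
card = DeepSample(5.0, (0.0, 0.0, 0.8), 1.0)
rgb, a = flatten(insert_sample(volume, card))
# The z = 8.0 volume sample contributes nothing: the opaque card hides it.
```

Moving the card's z between the two volume samples changes which samples it hides, which is exactly the re-positioning workflow described above. What this sketch cannot do, as noted, is make the card darken the volume behind it via volumetric shadowing, because that requires re-lighting, not just re-sorting stored samples.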
Last SIGGRAPH we had a demo at our booth showing Krakatoa rendering inside a compositing package (Eyeon Fusion, in that case), which would solve many more problems than attempting volumetric compositing in post, since both lighting and rendering would be part of the compositing workflow.
Any feedback regarding what you hope to achieve using Krakatoa and deep compositing would be very welcome!