We used the String Augmented Reality SDK to display real-time 3D video and audio recorded from a Kinect.
We used libfreenect from the OpenKinect project (http://openkinect.org/) to record the data coming from the Kinect. For each frame, a textured mesh was created from the calibrated depth and RGB data and played back in real time. A simple depth cutoff allowed us to isolate the person in the video from the walls and other objects behind them.
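The per-frame reconstruction step can be sketched roughly as follows: back-project each depth pixel through the camera intrinsics to get 3D vertex positions, discarding anything beyond the cutoff. The intrinsic values and the 1.5 m cutoff below are illustrative assumptions, not the calibration or threshold actually used in the project, and the mesh triangulation and texturing steps are omitted.

```python
import numpy as np

# Assumed pinhole intrinsics for the Kinect depth camera (typical published
# values, NOT the calibration used in the project).
FX, FY, CX, CY = 594.21, 591.04, 339.5, 242.7
CUTOFF_MM = 1500  # hypothetical depth cutoff separating the person from the wall

def depth_to_points(depth_mm):
    """Back-project an (H, W) depth map in millimetres to 3D points in metres,
    keeping only pixels in front of the cutoff. Mesh vertices would be built
    from these points, with the registered RGB image supplying the texture."""
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = (depth_mm > 0) & (depth_mm < CUTOFF_MM)  # 0 = no depth reading
    z = depth_mm[valid] / 1000.0
    x = (u[valid] - CX) * z / FX
    y = (v[valid] - CY) * z / FY
    return np.column_stack([x, y, z])

# Synthetic frame: an 80x80 "person" patch at 1.2 m in front of a wall at 3 m.
depth = np.full((480, 640), 3000, dtype=np.uint16)
depth[200:280, 280:360] = 1200
pts = depth_to_points(depth)  # only the foreground patch survives the cutoff
```

The cutoff works here because the subject stands well in front of any background geometry; a fixed threshold would fail if the person moved past it.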
Using the String SDK, we projected the reconstructed scene back onto a printed image marker in the real world. We also experimented with actively removing the image marker from the scene, filling it in with camera data from the areas surrounding the marker.
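A crude stand-in for that marker-removal idea is sketched below: fill the marker rectangle with the mean colour of a ring of pixels immediately around it. This is a simplification for illustration only; real diminished-reality systems use proper inpainting, and the rectangle coordinates and border width here are hypothetical.

```python
import numpy as np

def patch_marker(frame, top, left, h, w, border=2):
    """Replace an (h, w) marker rectangle in `frame` with the mean colour of a
    `border`-pixel ring around it (a crude approximation of inpainting from
    the camera data surrounding the marker)."""
    out = frame.copy()
    y0, y1 = max(top - border, 0), min(top + h + border, frame.shape[0])
    x0, x1 = max(left - border, 0), min(left + w + border, frame.shape[1])
    ring = frame[y0:y1, x0:x1].astype(np.float64)  # marker region plus border
    # Mask out the marker itself so only the surrounding pixels contribute.
    surround = np.ones(ring.shape[:2], dtype=bool)
    surround[top - y0:top - y0 + h, left - x0:left - x0 + w] = False
    fill = ring[surround].mean(axis=0)
    out[top:top + h, left:left + w] = fill.astype(frame.dtype)
    return out

# Synthetic camera frame: uniform grey wall with a dark 3x3 marker at (3, 3).
frame = np.full((10, 10, 3), 100, dtype=np.uint8)
frame[3:6, 3:6] = 0
out = patch_marker(frame, 3, 3, 3, 3)  # marker region now matches the wall
```

On a uniform background this hides the marker completely; on textured backgrounds a patch-based inpainting method would be needed instead.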