I am currently working on a client-server application for the Kinect3D. I am looking to take data from the Kinect and send it over a socket connection so it can be accessed in other applications such as Flash, Processing and Unity3D.
I will have a basic working prototype shortly for testing. It takes joint data from a tracked skeleton and sends it over the web to a server, which then distributes it to any client that subscribes to the server.
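The actual server is being written in C# in VS2012, but the broadcast pattern it uses can be sketched in a few lines. The sketch below is illustrative only (the function names and the newline-delimited JSON format are my own shorthand, not the real implementation): the server keeps a list of subscriber sockets and pushes each skeleton frame to every one of them, pruning any client that has dropped off.

```python
import json
import socket

def encode_joints(joints):
    """Serialize a dict of joint name -> (x, y, z) as one newline-delimited
    JSON message, so clients (Flash, Processing, Unity3D) can split on '\n'."""
    return (json.dumps(joints) + "\n").encode("utf-8")

def broadcast(subscribers, joints):
    """Send one skeleton frame to every subscribed client socket.
    Returns the surviving subscriber list; dead connections are closed
    and dropped."""
    message = encode_joints(joints)
    alive = []
    for sock in subscribers:
        try:
            sock.sendall(message)
            alive.append(sock)
        except OSError:
            sock.close()  # client went away; stop sending to it
    return alive
```

Newline-delimited JSON is attractive here because all three target environments can read a TCP stream and split on line breaks without any shared binary protocol.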
It has been written for PC only and developed in VS2012 with C#. Earlier I experienced poor support for the Kinect via the Processing forums, so I decided to take this route; the release of the K3D SDK by Microsoft last June supports the decision to go with C#. My immediate goal is to get the data into Unity3D, where I can do some public prototyping.
The illustration below shows the current state of the Kinect client and server applications. Plenty to be done still.
The technical layer of prototype one involved a number of levels of technical investigation. Initially the idea was to use existing libraries in Processing to grab data from a webcam or Kinect and use that data to drive certain events on the screen. What appeared on the surface to be a relatively straightforward course of applying Processing libraries, such as the video library combined with OpenCV, ended up being hampered by the outdated support in these libraries for video capture on Win7.

Having tested a number of options for capturing video data from the onboard webcam, I shifted instead to looking at the Camera object in AS3. This was well supported, and existing libraries were freely available from soulwire. These were set up relatively easily, and I developed a Trailer class to draw to the screen based on the coordinates passed back from soulwire's MotionTracker class. The problem with this, however, is that all the tracking and drawing is delegated to a single video card. In addition, Flash was a little slow in handling the graphics in realtime, and the garbage collector (GC), while necessary, appeared to add to the problem of framerates. Nevertheless there was an opportunity to test in public at this point, and a front-projected setup was created in the IT Building in CIT to gather some idea of how the work performed in a public setting.
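The core of the Trailer idea, keeping a short, fading history of tracked points, is independent of Flash. Here is a minimal sketch of that data structure in Python (the class and method names are mine; the real Trailer is AS3 and draws to the Flash display list from soulwire's MotionTracker coordinates):

```python
from collections import deque

class Trailer:
    """Keeps a fixed-length history of tracked points and exposes an
    alpha per point so older points fade out. Illustrative sketch only;
    the real class renders these segments in AS3."""

    def __init__(self, max_points=32):
        # deque with maxlen discards the oldest point automatically
        self.points = deque(maxlen=max_points)

    def add(self, x, y):
        """Record the latest tracked coordinate."""
        self.points.append((x, y))

    def segments(self):
        """Yield (point, alpha) pairs, newest point fully opaque."""
        n = len(self.points)
        for i, p in enumerate(self.points):
            yield p, (i + 1) / n
```

The bounded deque is also a cheap way to keep memory flat per trail, which matters when the runtime's garbage collector is already hurting framerates.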
The research definition has raised some questions about how to provoke viewers of art into situations whereby they can be seen to be part of the artwork itself. Initially I am doing this in quite a literal way, by attempting to integrate images of the audience/public into the artworks.
One of the components I am developing involves using sensors through an Arduino to detect certain conditions in the space (e.g. the distance of a viewer from a work); when a condition is met, a camera will capture an image of the audience.
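The trigger logic amounts to a threshold check with a hold-off, so the camera doesn't fire over and over while someone stands in range. A minimal sketch of that logic (the distance values and threshold here are made up; in the real piece the readings arrive over serial from the Arduino):

```python
class ProximityTrigger:
    """Fires a callback once when a reading crosses below the threshold,
    then re-arms only after the reading goes back above it."""

    def __init__(self, threshold_cm, on_trigger):
        self.threshold_cm = threshold_cm
        self.on_trigger = on_trigger
        self.armed = True

    def update(self, distance_cm):
        """Feed one sensor reading; may invoke the capture callback."""
        if self.armed and distance_cm < self.threshold_cm:
            self.armed = False
            self.on_trigger()  # e.g. grab a frame from the camera
        elif distance_cm >= self.threshold_cm:
            self.armed = True  # viewer stepped away; ready for the next one
```

The re-arm step is the important bit: one viewer lingering in front of a work produces one capture, not a burst.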
Today I spent some time working in AS3 and AIR looking for appropriate methods to achieve this. Actually it turned out to be quite trivial, and the image below is the first example of an image taken with a webcam and sent straight to the hard disk. AIR provides some useful methods in the FileSystem package enabling you to bypass dialog boxes before writing a file to the local disk. (Like Director 7... sorry, had to get that in.)
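Stripped of the AIR specifics, the write step is simply: take the captured image bytes and put them on disk under a generated name, with no save dialog in between. A sketch of that idea (the folder name and the placeholder bytes are mine; the real code uses AIR's file APIs on whatever the camera hands back):

```python
import os
import time

def save_capture(image_bytes, folder="captures"):
    """Write captured image bytes straight to disk with no dialog.
    Filenames are timestamped so successive captures are easy to order.
    Returns the path written."""
    os.makedirs(folder, exist_ok=True)
    name = time.strftime("capture_%Y%m%d_%H%M%S") + ".jpg"
    path = os.path.join(folder, name)
    with open(path, "wb") as f:
        f.write(image_bytes)
    return path
```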
This is an image taken automatically through an AIR application and written straight to disk.