
Millumin kinect






We had a total of 3 weeks for this, and I think our biggest deterrent was the fact that we spent 1.5 weeks trying to nail down an idea that both Aaron and I liked. The good part is that working with him (again) is the best, so we knew it would be hard work, but it didn't matter. What is it? A commentary on the automation and dehumanization of war through the visualization of untethered actions that leave behind a wake of consequences. How does it work? A user will come up to a map on the wall and throw darts at it, similar to a dart board. When a dart lands on the map, an animation will get triggered that simulates an atomic bomb exploding, with the help of visual and audio cues. What about the technology? Using Processing and a Kinect, we will calculate the position of the dart on the map, process that data, and in return overlay (with projection mapping) an animation over the dart that shows the ramifications of war actions. Once we locked down our idea, we needed to figure out how to make it work.
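The post doesn't spell out how the dart's position actually reaches the projection-mapped animation, but with a mapping tool like Millumin one common route is sending it from Processing over OSC. Below is a minimal sketch of that hand-off using the oscP5 library; the /dart/hit address, the host, and the port are placeholders, not values from the project.

// Minimal Processing sketch: send a detected dart's (x, y) to a
// projection-mapping app over OSC. Address, host, and port are
// placeholders -- match them to whatever the mapping software
// (e.g. Millumin) is actually listening on.
import oscP5.*;
import netP5.*;

OscP5 osc;
NetAddress mappingApp;

void setup() {
  size(640, 480);
  osc = new OscP5(this, 12000);                    // local listening port (unused here)
  mappingApp = new NetAddress("127.0.0.1", 5000);  // placeholder host and port
}

void draw() {
  // ...dart detection would happen here...
}

// Call this when a new dart is detected at normalized coordinates (x, y)
void sendDartHit(float x, float y) {
  OscMessage msg = new OscMessage("/dart/hit");    // placeholder address
  msg.add(x);
  msg.add(y);
  osc.send(msg, mappingApp);
}

On the receiving side, whatever address you choose would then need to be mapped to the cue or layer that plays the explosion animation.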


I think Aaron and I sat and messed around with the Kinect and Processing for a good 3-4 days with nothing working. We tried using the depth image, the IR (infrared) image, the registered image, the RGB image, and the array of raw depth values from the Kinect. We moved on and tried the Blob Scanner, BlobDetection, and OpenCV libraries, and none of that worked either. For a second, we tried a multi-Kinect setup, but that just seemed suicidal.
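For reference, grabbing those different streams with Daniel Shiffman's Open Kinect for Processing library on a Kinect 2 looks roughly like the sketch below; treat it as an illustration of the available streams rather than the project's actual code, and note that the Kinect 1 class in the same library exposes a slightly different set of calls.

// Grabbing the different Kinect 2 streams in Processing with the
// Open Kinect for Processing library (org.openkinect.processing).
import org.openkinect.processing.*;

Kinect2 kinect2;

void setup() {
  size(512, 424);
  kinect2 = new Kinect2(this);
  kinect2.initDepth();       // depth image
  kinect2.initIR();          // infrared image
  kinect2.initRegistered();  // RGB registered to depth space
  kinect2.initVideo();       // plain RGB image
  kinect2.initDevice();
}

void draw() {
  image(kinect2.getDepthImage(), 0, 0);    // or getIrImage(), getRegisteredImage(), getVideoImage()
  int[] rawDepth = kinect2.getRawDepth();  // raw depth values, one int per pixel
}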


After that, we decided to give the Kinect 1 a shot (we had been using the Kinect 2 up until now) and tried the whole slew of images just mentioned until the RGB image was the winner! We needed to count the blobs, determine their location, and trigger an animation positioned at each blob's x and y. Once that was solved, we just needed to do the rest of the project in one night.
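The post doesn't say which blob library ended up doing the counting on that RGB image, so purely as an illustration, here is how counting blobs and reading their positions could look with the v3ga BlobDetection library running on the Kinect 1 video stream; the threshold value and the bright-blob assumption are placeholders to tune for the actual lighting.

// Illustrative sketch: count blobs in the Kinect 1 RGB image and report
// each blob's center so an animation can be triggered at that (x, y).
import org.openkinect.processing.*;
import blobDetection.*;

Kinect kinect;
BlobDetection blobs;

void setup() {
  size(640, 480);
  kinect = new Kinect(this);
  kinect.initVideo();                // RGB stream from the Kinect 1
  blobs = new BlobDetection(640, 480);
  blobs.setPosDiscrimination(true);  // look for bright regions (assumption)
  blobs.setThreshold(0.4f);          // tune for the actual lighting (assumption)
}

void draw() {
  PImage rgb = kinect.getVideoImage();
  image(rgb, 0, 0);
  rgb.loadPixels();
  blobs.computeBlobs(rgb.pixels);

  for (int i = 0; i < blobs.getBlobNb(); i++) {
    Blob b = blobs.getBlob(i);
    float x = b.x * width;           // blob coordinates are normalized 0..1
    float y = b.y * height;
    ellipse(x, y, 10, 10);           // stand-in for triggering the animation here
  }
}

Each blob's center, scaled from normalized coordinates up to screen or projector space, is the x and y the animation would be positioned at.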






