Following on from the post about voice input, I thought I’d show the camera-based equivalent that I worked on earlier in the year. It’s based largely on the functionality Zugara makes use of. Even though webcams are nearly universal on laptops and pretty common as external peripherals, very few applications make much use of them.
Toward that end, I wrote FlyToy (Flash Eyetoy) – a very simple library that can be used to track camera activity in arbitrary regions of the screen. It works the same way as the PS2 Eyetoy: rather than trying to figure out what the pictures mean, it just checks for movement in different regions of the screen. Input happens by waving your hand (or head / foot / etc.) at a region of the screen. A copy of the video feed is overlaid for reference.
Here’s an un-optimized sample that calculates the activity across the whole image – it gives you an idea of what’s going on. In practice, it’s only necessary to check the activity inside the areas you’ve chosen, which makes the system much more efficient.
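The per-region check is just frame differencing: compare each pixel in a rectangle against the previous frame and count how many changed noticeably. Here’s a minimal Python sketch of that idea – the function names, the grayscale-list frame representation, and the threshold values are mine for illustration, not FlyToy’s actual API:

```python
def region_activity(prev, curr, rect, pixel_threshold=20):
    """Score motion inside a rectangle by comparing two grayscale frames.

    prev, curr: frames as 2D lists of grayscale values (0-255)
    rect: (x, y, width, height)
    Returns the fraction of pixels in the rectangle whose value
    changed by more than pixel_threshold, i.e. 0.0 (still) to 1.0.
    """
    x, y, w, h = rect
    changed = 0
    for row in range(y, y + h):
        for col in range(x, x + w):
            if abs(curr[row][col] - prev[row][col]) > pixel_threshold:
                changed += 1
    return changed / (w * h)


def hit_regions(prev, curr, rects, trigger=0.25):
    """Indices of rectangles with enough motion to count as input."""
    return [i for i, rect in enumerate(rects)
            if region_activity(prev, curr, rect) >= trigger]
```

Because only the pixels inside the chosen rectangles are ever visited, the cost scales with the target areas rather than the full frame – which is the optimization mentioned above.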
Notes on use: Enable the camera and then wave your hands at the orange rectangles. When the activity in a rectangle is sufficiently high, you’ll be presented with new targets. What you see by default is the ‘activity’ map; hit the toggle button to see the standard feed. Mac users: if it’s not working, make sure the correct camera is selected.
If you want to try out the library, you can download it and a sample project below. I’m working with FlashDevelop, which can be obtained here, and the open-source Flex SDK, which is available from Adobe here. I hope you like it!