Who’s behind Project Tango?

I was thrilled to read the news of Google’s Project Tango. If you haven’t seen it yet, watch this:

In order to get a better understanding of what it’s all about, I started looking up the organizations listed in the introduction video and the credits at the bottom of the Project Tango homepage. Here’s a list of the participants and some speculation about the nature of their involvement, based on their expertise:

Bosch: An enormous electronics firm with expertise that would be useful to most aspects of the project. An official Twitter post indicates that Bosch supplied a 9-axis Inertial Measurement Unit (IMU).
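A 9-axis IMU combines a 3-axis gyroscope, accelerometer, and magnetometer, and fusing their readings is what gives a phone its short-term motion tracking. As a minimal one-axis sketch of the idea (this is an illustrative complementary filter, not Bosch's or Google's actual fusion code), gyro integration can be blended with the accelerometer's gravity reading:

```python
import math

def complementary_filter(samples, dt=0.01, alpha=0.98):
    """Fuse gyro and accelerometer readings into a pitch estimate (degrees).

    samples: iterable of (gyro_pitch_rate_dps, accel_x_g, accel_z_g) tuples.
    alpha weights the gyro (accurate short-term) against the
    accelerometer (drift-free long-term).
    """
    pitch = 0.0
    for gyro_rate, ax, az in samples:
        # Integrate the gyro rate for a short-term estimate...
        gyro_pitch = pitch + gyro_rate * dt
        # ...and anchor it to the accelerometer's gravity vector.
        accel_pitch = math.degrees(math.atan2(ax, az))
        pitch = alpha * gyro_pitch + (1 - alpha) * accel_pitch
    return pitch
```

With a stationary device held at a 45° tilt, the estimate converges to the accelerometer's angle while still rejecting its high-frequency noise in motion.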

BSquare: A reasonably large mobile application developer that doesn’t seem to have a great deal of specialization in 3D capture and image recognition. BSquare has a substantial interest in the Internet of Things (IoT), but its involvement may have been limited to the front-end apps that present information specific to Tango, or the required modifications to Android.

CompalComm: A Taiwan-based manufacturing and assembly company. It’s likely CompalComm did the final manufacturing and assembly of the handsets.

ETH Zürich: A university in Switzerland with an excellent computer science department, with groups focusing on Interactive Geometry and Advanced Interactive Technologies in particular. ETH Zürich probably contributed research and development in these areas.

Flyby Media: A startup that has developed a really interesting iPhone app that establishes tangible objects as the reference point for discussion, so that discussions about snowboarding might be clustered around images (and presumably 3D models) of your snowboard. It’s likely that Flyby is expanding the object-centeredness of its thinking by making use of real 3D models, and exploiting Tango’s improved awareness of locality to present this perspective.

George Washington University: A school with a strong background in medical-imaging applications of computer graphics. It’s possible that GWU is offering some expertise on ALISA, an image-processing algorithm developed there that runs very well on parallel architectures.

HiDOF: A group of researchers from the robotics incubator Willow Garage, HiDOF has a press release about its involvement in Project Tango. HiDOF created the Simultaneous Localization and Mapping (SLAM) system that runs on the phone; in particular, it was responsible for estimating the phone’s position and orientation within the reconstructed scene. HiDOF provides some more details of the device and has four high-resolution pictures of it in operation, which tell an interesting story about the state of the technology and what might be possible with it.

Picture 1 | Picture 2 | Picture 3 | Picture 4
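To give a flavour of the localization half of SLAM (this is a toy 2-D dead-reckoning sketch, not HiDOF's system), a pose can be advanced from the incremental motion estimates a SLAM front end produces between frames; the real system additionally corrects this drifting estimate against the map it builds:

```python
import math

def integrate_pose(pose, odometry):
    """Advance a 2-D pose (x, y, heading_radians) by relative motion steps.

    Each odometry step is (forward_distance, turn_radians): the kind of
    frame-to-frame motion estimate a visual-inertial front end produces.
    """
    x, y, theta = pose
    for forward, turn in odometry:
        theta += turn
        x += forward * math.cos(theta)
        y += forward * math.sin(theta)
    return (x, y, theta)
```

Three unit moves with two 90° left turns walk the pose around a half-square, ending one unit to the left of the start and facing backwards.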

MMSolutions: A Bulgaria-based company with 14 years of expertise in mobile imaging. It may have provided image-processing algorithms such as Optical Image Stabilization (OIS) and Zero Shutter Lag (ZSL), or contributed some of its expertise in stereoscopic imaging.

Movidius: A Silicon Valley startup that has created a chip architecture specialized for computer vision, which drives the power-efficient image processing on the Tango. Movidius also has a press release on its involvement.

University of Minnesota: Another school with a strong computer science department, UoM has research foci on graphics and vision, Human-Computer Interaction (HCI), and robotics. I’m guessing HCI, since it lists a project that produced “algorithms and interfaces for handheld devices to aid coordination in space and time.”

NASA JPL: The Jet Propulsion Laboratory is the R&D center for developing “robotic planetary spacecraft”. JPL knows an enormous amount about embedded systems, computer vision and image analysis, and has probably consulted on those areas.

Ologic: Another private robotics R&D agency that has a focus on toys. It’s likely that Ologic is developing a robotic toy that can make use of the Tango in a fun and educational way.

OmniVision: A very large manufacturer of image sensors, OmniVision probably provided the sensing hardware for the RGB and depth-sensing cameras.

Open Source Robotics Foundation: A robotics-centered startup in Silicon Valley, OSRF has a press release stating that it provided a range of development tools and worked on data visualization. It has done some interesting work on the Robot Operating System (ROS) and on simulating multiple robotic agents within a single scene.

Paracosm: A startup devoted to “3D-ifying the world”, Paracosm has a press release describing its contribution to Tango: a system for mapping complex and arbitrary indoor scenes.

Sunny Optical Tech: A large optical-components manufacturer. Sunny probably made the specialized lenses required for the small form-factor depth-sensing camera and the structured-light projection system.
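Structured-light depth sensing ultimately reduces to triangulation: with the projector and camera separated by a known baseline, depth follows from the observed pixel disparity as z = f·b/d. A minimal sketch of that relationship (the parameter values below are illustrative, not Tango's actual calibration):

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Triangulate depth (metres) from the pixel disparity between the
    projected pattern and where the camera observes it: z = f * b / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Hypothetical calibration: 500 px focal length, 8 cm baseline.
# A 20 px disparity then corresponds to a surface 2 m away.
depth = depth_from_disparity(20.0, 500.0, 0.08)
```

The formula also shows why a tight form factor is hard: shrinking the baseline b shrinks the disparity for a given depth, demanding better optics to resolve it.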

Speck Design: An established product design company. Although it’s not actually listed as a collaborator, someone from Speck speaks in the video, which suggests it has had a hand in guiding some aspects of the handset design. Speck also has a history of collaboration with Ologic on its industrial robot designs.

What does this all mean?

The bulk of the collaborators listed so far represent a minimal team to get the hardware produced, let alone do anything with it. The only groups likely to be involved as consumers of the data being generated are Flyby and the Open Source Robotics Foundation, though Paracosm and the universities may be working on wider applications as well.

This suggests that Google is opening the question of applicability up to everyone at a very early stage. Everyone who mentions the project stresses that the concept is still pretty rough, so the practical applications of this early hardware are likely to be relatively limited. I’ll go into what I think could be done with Tango in a future post. We live in interesting times!