November 12, 2009

Freaky awesome 3d camera-projection

Filed under: flash, Webcam — Brandel Zachernuk @ 6:24 am

After promising myself I would go to bed early tonight, I got sidetracked looking through Mr Doob's excellent Flash experiments. Some involve camera input, some involve Pv3d, and a few involve both! I have only very recently begun to dabble in camera input, but I have some experience with Pv3d, so I decided to have a quick go – making a relief map out of the greyscale pixel values of the camera feed.

[Image: meshcam]

One thing I hadn't thought of: on a laptop with a monitor-fixed webcam, in a room where the monitor is the only light, the relief map does a reasonable job of being a depth map! The luminance might map to the square of the height or something, but you can see from the image above how well it works.
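
For anyone curious about the mechanics, here is a rough AS3 sketch of the sampling step. It is not the applet's actual source – the grid size, the variable names and the square-root luminance-to-height mapping are all my own placeholders – but it shows the idea: draw the camera feed into a small BitmapData and turn each pixel's greyscale value into a vertex height.

// Minimal AS3 sketch – my own reconstruction, not the applet's source.
// GRID_W, GRID_H, 'heights' and MAX_HEIGHT are made-up names/values.
import flash.display.BitmapData;
import flash.media.Camera;
import flash.media.Video;

const GRID_W:int = 64;
const GRID_H:int = 48;
const MAX_HEIGHT:Number = 100;               // arbitrary scene units

var camera:Camera = Camera.getCamera();      // null if no webcam attached
camera.setMode(GRID_W, GRID_H, 15);          // ask for a tiny capture size

var video:Video = new Video(GRID_W, GRID_H);
video.attachCamera(camera);

var frame:BitmapData = new BitmapData(GRID_W, GRID_H, false, 0);
var heights:Vector.<Number> = new Vector.<Number>(GRID_W * GRID_H, true);

function sampleFrame():void {
    frame.draw(video);                       // rasterise the current frame
    for (var y:int = 0; y < GRID_H; y++) {
        for (var x:int = 0; x < GRID_W; x++) {
            var rgb:uint = frame.getPixel(x, y);
            // Rough greyscale: average of the three channels, 0..255.
            var grey:Number = (((rgb >> 16) & 0xFF) +
                               ((rgb >>  8) & 0xFF) +
                                (rgb        & 0xFF)) / 3;
            // Luminance-to-height; the sqrt is one guess at the
            // "square of the height" relationship mentioned above.
            heights[y * GRID_W + x] = Math.sqrt(grey / 255) * MAX_HEIGHT;
        }
    }
    // ...then copy 'heights' into the z components of the Pv3d mesh vertices.
}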

You can try it for yourself if you like – the usual webcam caveats apply. If you can see what it's like in a darkened room, do try it! I'd love to see what other people come up with!

(mouse controls rotation, buttons toggle params – try the relief map with the original bitmap laid on top for freaky 3d-ness!)

[Flash applet embedded here]

Update 2009-12-26

It is possible to make a 3d depth map in realtime, although there are a couple of hurdles to clear. Even on a reasonable computer, the highest poly count you can expect out of Flash at present is about 5,000 triangles, so the resolution of the mesh used in this Flash experiment is 64 x 48.
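
(To put numbers on that: a 64 x 48 grid of quads is 64 x 48 x 2 = 6,144 triangles – or 63 x 47 x 2 = 5,922 if 64 x 48 counts vertices – so either way it sits right around that 5,000-triangle ceiling.)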

If you were to do the same thing in a language that gave you access to DirectX or OpenGL, you could expect poly counts in the millions, so one vertex per pixel is an easily achievable goal.

The next part is more complicated: getting a color channel to look like a depth channel. The experiment above uses the red channel as the depth map. This will yield a real depth map, but only under two conditions (there's a rough sketch of the read-out after the list):

1. The light must originate from (close to) the camera. That way, the brighter a point is, the closer to the camera it is. For objects that are reasonably distant – say, 50 cm away – it is sufficient for the light to be within 5 cm, but the closer the lights are, the better. You can buy some webcams with a built-in ring of LEDs – these are ideal.

2. All the objects must exhibit diffuse / Lambertian reflectance. Not as complicated as it sounds: basically, when an object has a mirrored or shiny surface, its brightness depends both on how it is lit and on the camera position (where you are looking at it from). When you change your point of view, you change the brightness of the surface. There are commercially available Lambertian materials like Spectralon, but an unfired white clay will do just as well. In fact, you could probably dust an object in any white powder – something like flour or baking soda – light it under the right conditions and see what it looks like.
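
Here's what the read-out might look like under those two conditions – again just an illustrative sketch reusing 'frame' from the earlier snippet, with made-up NEAR / FAR calibration constants rather than anything from the applet:

// Reading the red channel as depth. NEAR and FAR are hypothetical
// calibration constants in scene units.
const NEAR:Number = 0;     // depth when a pixel is fully lit (red = 255)
const FAR:Number  = 200;   // depth when a pixel is black (red = 0)

function depthAt(x:int, y:int):Number {
    var red:uint = (frame.getPixel(x, y) >> 16) & 0xFF;
    // With the light at the camera and a Lambertian surface, brightness
    // falls off with distance (roughly 1/d^2) and with the cosine of the
    // surface angle, so brighter pixels read as nearer. A linear map is
    // the simplest possible stand-in for that falloff.
    return FAR - (red / 255) * (FAR - NEAR);
}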

If anyone tries this I would love to see the results! Under the two conditions above, you should be able to use my mesh experiment to make meshes that show depth. If you're interested in some of the lower-level stuff in this subject, it's part of Computer Vision – specifically, Shape from Shading. As with any vision project, the best port of call is the OpenCV project.
