[Idea] Using camera motion to detect closest objects


(David Teller) #1

Here’s a random idea; I’m hoping we can start brainstorming on it. With any luck, someone with better knowledge of computer vision than me can tell me whether it’s feasible.

Users holding a camera are going to move it, whether they want to or not. As a consequence, for objects that are close enough to the camera, successive frames give us slightly more information than a single 2D picture. How hard would it be to extract some 3D information from that motion, so we can filter out big chunks of the background?
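
To make this a little more concrete, here is a rough, untested sketch using OpenCV’s dense optical flow. It assumes a mostly translational camera motion, under which nearby objects shift more between consecutive frames than the background does, so a high flow magnitude is a crude “closeness” signal. The function name, the percentile threshold, and the file names are placeholders I made up.

```python
import cv2
import numpy as np

def closeness_mask(frame_a, frame_b, percentile=75):
    """Mark the pixels that moved the most between two consecutive frames."""
    # Dense optical flow between the two grayscale frames.
    flow = cv2.calcOpticalFlowFarneback(
        frame_a, frame_b, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    magnitude = np.linalg.norm(flow, axis=2)
    # Under a mostly translational camera motion, the pixels with the
    # largest apparent motion tend to belong to the closest objects.
    threshold = np.percentile(magnitude, percentile)
    return magnitude > threshold

if __name__ == "__main__":
    # Hypothetical file names, just to show the call pattern.
    a = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
    b = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
    mask = closeness_mask(a, b)
    cv2.imwrite("foreground_mask.png", mask.astype(np.uint8) * 255)
```

This obviously ignores camera rotation, which moves everything in the image regardless of depth; a real implementation would have to compensate for that.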


(Chris H-C) #2

Would this require anything more complicated than computer stereo vision?


(David Teller) #3

Well, there is the fact that images can be blurry. But other than that, I don’t think it’s any different. I’m just not sure at all how we can go from the idea to the implementation.
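
For reference, the classic stereo-vision route looks roughly like the sketch below, using OpenCV’s block matcher. It assumes the two frames are approximately rectified (i.e. the camera moved mostly sideways between them), which handheld motion won’t guarantee, and the parameter values are guesses. Disparity is inversely proportional to depth, so the largest disparities mark the closest objects.

```python
import cv2
import numpy as np

def disparity_map(left_gray, right_gray):
    # Basic block-matching stereo correspondence; assumes rectified inputs.
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    # compute() returns fixed-point disparities scaled by 16.
    return matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0

def foreground_mask(disparity, percentile=80):
    # Treat the highest-disparity (closest) pixels as foreground.
    valid = disparity > 0
    threshold = np.percentile(disparity[valid], percentile)
    return disparity > threshold
```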


(David Teller) #4
  • Some (mostly unreadable) documentation on how to do this with OpenCV here.
  • A Python module that apparently should do the trick: OSM-Bundler (a rough sketch of that kind of pipeline follows below).
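
For the record, here is a hedged sketch of the sparse structure-from-motion pipeline that a tool like OSM-Bundler automates at larger scale, done with plain OpenCV: match features between two frames, estimate the relative camera motion from the essential matrix, and triangulate the matches into 3D points whose depths tell us what is close. The camera intrinsic matrix `K` is assumed to come from a prior calibration, and `sparse_depths` is a name I made up for illustration.

```python
import cv2
import numpy as np

def sparse_depths(img1, img2, K):
    """Return matched keypoints in img1 with their estimated depths."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Match ORB descriptors with brute-force Hamming matching.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Estimate the relative camera motion (up to scale) from the matches.
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, pose_mask = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)

    # Triangulate the inlier matches into 3D points (arbitrary units).
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    good = pose_mask.ravel() > 0
    points_h = cv2.triangulatePoints(P1, P2, pts1[good].T, pts2[good].T)
    points_3d = (points_h[:3] / points_h[3]).T

    # Depth along the first camera's viewing axis: small values = close objects.
    return pts1[good], points_3d[:, 2]
```

The depths come out in arbitrary units, since the translation between frames is only recovered up to scale, but relative depth should be enough to separate foreground from background.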