Project recreates cities in rich 3D from images harvested online



People are taking photos and videos all over major cities, all the time, from every angle. Theoretically, with enough of them, you could map every street and building — wait, did I say theoretically? I meant in practice, as the VarCity project has demonstrated with Zurich, Switzerland.

This multi-year effort has taken images from numerous online sources — social media, public webcams, transit cameras, aerial shots — and analyzed them to create a 3D map of the city. It’s kind of like the inverse of Google Street View: the photos aren’t illustrating the map, they’re the source of the map itself.

Because that’s the case, the VarCity data is extra rich. Over time, webcams pointed down streets show which direction traffic flows, when pedestrians are out, and when lights tend to go out. Pictures of the same building taken from different angles yield dimensional data, like the size of its windows and the surface area of its walls.
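VarCity’s actual pipeline isn’t detailed here, but the core geometric trick behind extracting dimensions from overlapping photos is triangulation: if the same point on a building appears in two images taken from known camera positions, its 3D location can be solved for directly. Below is a minimal sketch of the standard linear (DLT) triangulation method using NumPy; the camera matrices and pixel coordinates are illustrative, not from the project.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D point from its pixel coordinates in two views.

    P1, P2: 3x4 camera projection matrices (one per photo).
    x1, x2: (u, v) pixel coordinates of the same point in each image.
    Each view contributes two linear constraints on the homogeneous
    3D point X; the system is solved by SVD.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The least-squares solution is the right singular vector
    # associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # convert homogeneous -> Euclidean coordinates

# Toy example: two cameras one unit apart, both looking down the z-axis.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, 0.1, 5.0])          # a point 5 units away
x1 = (X_true / X_true[2])[:2]               # its projection in camera 1
p2 = P2 @ np.append(X_true, 1.0)
x2 = (p2 / p2[2])[:2]                       # its projection in camera 2
X_est = triangulate(P1, P2, x1, x2)
```

Run across millions of matched points, this is essentially how a pile of photos becomes a measurable model rather than just a set of pictures.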

The algorithms created and tuned over years by the team at ETH Zurich can also tell the difference between sidewalk and road, pavement and grass, and so on. The resulting model looks rough, but those blobby edges and shaggy cars can easily be interpreted and refit with more precision.

The idea is that you could set these algorithms loose on other large piles of imagery and automatically produce a similarly rich dataset for another city without having to collect it yourself.
