Can Satellites Learn to 'See' Poverty?

A new machine-learning technique improves on the old “Earth at night” data.

The International Space Station passes over the glowing Earth at night. (NASA)

Imagine the Earth at night—the vast and curving darkness, splotched with rivulets of light. It is a gorgeous sight, and a familiar one. Today, this image often plays as a beautiful cliché, a pre-metabolized testament to human invention and connectedness, as likely to appear in Koyaanisqatsi as in a Kia commercial. For economists, though, this spectacle is more than a symbol: It is a powerful data set.

For the last few decades, and almost since astronauts first captured images of the nocturnal Earth, researchers have recognized that “night lights” data indirectly indexes the wealth of people producing the light. This econometric power seems to work across the planet: Not only do cities glow brighter than farmland, but American cities outshine Indian cities; and as a country’s GDP increases, so does its nighttime luminosity. Two years ago, a Stanford professor even used night lights data to show that North Korean leaders were passing the costs of international economic sanctions down to farmers and villagers. As foreign governments imposed sanctions, Pyongyang became brighter and light from the hinterlands waned.

Night lights, therefore, appear to be an incredible resource. So much so that in countries with poor economic statistics, they can serve as a proxy for a regional wealth survey—except no one has to go house to house, running through a questionnaire. Yet research has also shown that this not-quite-survey is inexact: To a satellite at night, a few well-lit mansions and a dense but poorly lit shantytown can look nearly the same.

A new paper from a team at Stanford, published last week in Science, applies a trendy technique to this tricky problem. In order to make night lights more discerning, engineers and computer scientists fed a convolutional neural net—a standard type of artificial intelligence program—a series of data sets. They wanted to give it the insight of the night-light data while freeing it of its pitfalls.

First, they trained the neural net on a generic image-recognition task that let it distinguish edges, corners, and more than 1,000 common objects. Second, they asked it to correlate a set of night lights data for a country with a daytime map of the same country, essentially teaching it what kinds of features on the ground are more likely to make the surface brighter at night. Finally, they fed it the highest-resolution household-wealth data that exists for that country, the World Bank’s Living Standards Measurement Study, indexed to latitude and longitude.
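In rough outline, that three-step pipeline can be put in code. The sketch below is an illustration of the approach, not the authors’ implementation: the choice of VGG16, the three luminosity bins, and the input variables (daytime_tiles, nightlight_bins, survey_tiles, survey_wealth) are all hypothetical stand-ins.

```python
# Illustrative sketch of the three-step transfer-learning pipeline
# described above. The model choice, bin count, and input variables
# are hypothetical assumptions, not the paper's actual code.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.linear_model import Ridge

# Step 1: start from a CNN pretrained on generic image recognition
# (ImageNet's 1,000 object classes), so it already picks out edges,
# corners, and common objects.
cnn = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)

# Step 2: swap the final layer so the net predicts binned nighttime
# luminosity (3 hypothetical bins: dark / dim / bright) from daytime
# tiles of the same locations, then fine-tune on those pairs.
cnn.classifier[-1] = nn.Linear(cnn.classifier[-1].in_features, 3)
optimizer = torch.optim.SGD(cnn.parameters(), lr=1e-3, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

def fine_tune_step(daytime_tiles, nightlight_bins):
    """One gradient step on a batch of daytime tiles, each labeled
    with the nighttime-brightness bin of the same location."""
    optimizer.zero_grad()
    loss = loss_fn(cnn(daytime_tiles), nightlight_bins)
    loss.backward()
    optimizer.step()

# Step 3: use the fine-tuned net as a fixed feature extractor, and
# fit a simple regression from image features to georeferenced survey
# wealth (e.g., World Bank LSMS cluster averages).
def extract_features(daytime_tiles):
    with torch.no_grad():
        maps = cnn.features(daytime_tiles)
        return torch.flatten(cnn.avgpool(maps), 1).numpy()

# wealth_model = Ridge().fit(extract_features(survey_tiles), survey_wealth)
```

The design point is that the scarce, expensive survey data is needed only for the final, simple regression; the abundant night lights data does the heavy lifting of teaching the network which daytime features matter.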

In effect, they tried to teach a neural net how to “see” poverty in satellite data.

And that’s important for a straightforward reason: “We don’t have very good data on where poor people are, especially in poor countries,” said Neal Jean, an electrical-engineering researcher at Stanford and an author of the paper.

Many economists, geographers, and governments agree. The data that the team fed into their model came from five sub-Saharan countries—Nigeria, Tanzania, Uganda, Malawi, and Rwanda—that all face unusual “data scarcity.” Jamon Van Den Hoek, a geography professor at Oregon State University who did not work on the paper, said overcoming the data scarcity of sub-Saharan Africa drives much of the research in the region. In the past few years, researchers have tried to deduce local economic fates by analyzing cellphone metadata or by detecting whether roofs are made of metal or thatch. In February, Facebook even trained a neural net to estimate village-level population data by identifying what buildings look like from above.

This paper doesn’t go that far. After building the model, the authors tested its predictions against the original night lights data and against the World Bank survey data. The model hewed much closer to the World Bank data than the night lights data alone did. Most importantly, models built using one country’s data—Tanzania’s, for example—worked quite well when applied to daylight satellite imagery from another, like Malawi or Rwanda. This suggests that the model located actual signifiers of wealth that hold across nations and geographies. And it validates the paper’s secondary argument: that night lights data is best used as one tool among many, rather than a single predictor.
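That cross-country check is conceptually simple. Here is a minimal sketch, assuming each country’s imagery has already been reduced to a feature matrix and a matching wealth vector (the X_* and y_* arrays are hypothetical):

```python
# Hypothetical cross-country generalization check: fit a regression on
# one country's (features, wealth) data, score it on another country's.
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

def cross_country_r2(X_train, y_train, X_test, y_test):
    """R^2 of a wealth model trained on one country, tested on another."""
    model = Ridge().fit(X_train, y_train)
    return r2_score(y_test, model.predict(X_test))

# e.g., train on Tanzania's data, evaluate on Malawi's:
# print(cross_country_r2(X_tanzania, y_tanzania, X_malawi, y_malawi))
```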

But the researchers used the model only to predict national-level data: It estimated Nigeria’s or Rwanda’s inequality as a whole, and it did not produce a subnational wealth map. Van Den Hoek said this would be an important next step—that in order to be applied at scale, the model had to successfully predict regions of subnational poverty.

“We’re good, we know what’s happening in Lagos,” he said. “We don’t know what’s happening in these communities that are farther away from the main population centers. And those are not going to have a signal that’s strong enough to be captured.”

Poverty in sub-Saharan Africa, he said, is often determined less by national-level decisions and more by separation from national-level resource chains. Owners of small farms in the region support possibly half a billion people. Could the model predict how they were faring, especially village to village? It’s unclear, he said, whether the data sources the team relied upon were too coarse to make useful predictions at subnational scales.

And the study has another weakness, one that goes to the heart of its strength. Neural nets let researchers hunt for many correlations between data sets at once, freeing them from having to declare clear relationships between the data at the start of model-building. But they can also produce software that seems totally strange to its human users: A neural network can see thousands of similarities between data sets that, to human eyes, look like so much noise.

These are the visual imprints of four different “filters” proposed by the neural net. All are daytime features that correlate to nighttime brightness, and all of them make sense to human viewers: From left to right, they seem to pull out an image’s urbanity, farmland, water, and roads. (Jean et al. / Science)

“They’re not very interpretable, and it’s hard to know what’s happening,” said Jean. “That’s definitely a legitimate concern if you want people to trust and use the results.” So he was relieved to see that the model pulled out all sorts of variables from the daytime imagery that are clearly correlated to nighttime brightness (at least, as mere mortals understand it). These include the presence of water, roads, and farmland.

Still, “many of the neurons pick up things that as a human you go, okay, what is it actually picking up here?” he said. But as the team works with more precise data, at smaller scales, it’s still finding serendipity in the model’s choices. After being asked to predict urban wealth with high-resolution satellite imagery, a neural net started counting swimming pools.


Robinson Meyer is a former staff writer at The Atlantic and the former author of the newsletter The Weekly Planet.