Predicting poverty from satellite imagery

Predicting poverty from satellite imagery. Neal Jean, Michael Xie, Stefano Ermon (Department of Computer Science, Stanford University); Matt Davis, Marshall Burke, David Lobell (Department of Earth Systems Science, Stanford University). 1

Why poverty? Ending poverty is the #1 UN Sustainable Development Goal. Global poverty line: $1.90/person/day. Understanding poverty can lead to informed policy-making and targeted NGO and aid efforts. 2

Data scarcity 3

Lack of quality data is a huge challenge. Surveys are expensive to conduct ($400,000 to $1.5 million), they cover <0.01% of total households, and they have poor spatial and temporal resolution. 4

Satellite imagery is low-cost and globally available, with applications such as shipping records, inventory estimates, agricultural yield, and deforestation rate. It is simultaneously becoming cheaper and higher resolution (DigitalGlobe, Planet Labs, Skybox, etc.). 5

What if we could infer socioeconomic indicators from large-scale, remotely-sensed data? 6

Standard supervised learning won't work. [Diagram: input → model → output (poverty, wealth, child mortality, etc.)] - Very little training data (a few thousand data points) - Nontrivial for humans (hard to crowdsource labels) 7

Transfer learning overcomes data scarcity. Transfer learning: use knowledge gained from one task to solve a different (but related) task. [Diagram: train here → transfer → perform here] 8

Transfer learning bridges the data gap. [Diagram: A. Satellite images → deep learning model → B. Proxy outputs (plenty of data!); transfer to C. Poverty measures (not enough data!), where less data is needed. Would this work?] 9

Nighttime lights as proxy for economic development 10

Why not use nightlights directly? [Diagram: A. Satellite images → B. Nighttime light intensities → C. Poverty measures] 11

Not so fast: nightlights show almost no variation below the poverty line. 12

Lights aren't useful for helping the poorest 13

Step 1: Predict nighttime light intensities. [Diagram: A. Satellite images → deep learning model → B. Nighttime light intensities, with training images sampled from these locations; C. Poverty measures] 14

Training data on the proxy task is plentiful: labeled input/output training pairs such as (daytime image, low nightlight intensity) and (daytime image, high nightlight intensity). With training images sampled from these locations, there are millions of training images. 15
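
As a concrete illustration of these proxy labels, here is a minimal sketch of binning raw nighttime light intensities into {low, medium, high} classes; the thresholds and the 0-63 DMSP-style intensity scale are illustrative assumptions, not necessarily the binning used in this work.

```python
# Minimal labeling sketch (assumed thresholds; DMSP-OLS-style 0-63 intensity scale).
import numpy as np

def nightlight_class(intensity):
    """Bin a nighttime light intensity into 0 = low, 1 = medium, 2 = high."""
    if intensity < 3:
        return 0
    elif intensity < 35:
        return 1
    return 2

intensities = np.array([0.0, 1.5, 12.0, 40.0, 62.0])          # intensities at sampled locations
labels = np.array([nightlight_class(v) for v in intensities])
print(labels)  # -> [0 0 1 2 2]
```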

Images summarized as low-dimensional feature vectors. Inputs: daytime satellite images. A Convolutional Neural Network (CNN) applies a nonlinear mapping to produce features f_1, f_2, ..., f_4096, and a linear regression on those features outputs the nighttime light intensity {Low, Medium, High}. 16
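
A rough sketch of this architecture, assuming PyTorch/torchvision and using an ImageNet-pretrained VGG16 as a stand-in for the network in the talk: the convolutional layers plus the first fully connected layers act as the nonlinear mapping to a 4096-dimensional feature vector, and a small linear head maps the features to the three nightlight classes.

```python
# Sketch only: a VGG-style feature extractor (4096-dim features) plus a linear head.
import torch
import torch.nn as nn
import torchvision.models as models

cnn = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
feature_extractor = nn.Sequential(
    cnn.features,                            # convolutional layers
    cnn.avgpool,
    nn.Flatten(),
    *list(cnn.classifier.children())[:-1],   # fully connected layers up to the 4096-dim features
)
nightlight_head = nn.Linear(4096, 3)         # {low, medium, high}

images = torch.randn(8, 3, 224, 224)         # a batch of daytime image tiles
features = feature_extractor(images)         # shape: (8, 4096)
logits = nightlight_head(features)           # shape: (8, 3)
```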

Feature learning. The nonlinear mapping f(x; θ') produces the features f_1, ..., f_4096, and a linear regression with weights θ maps them to the nightlight prediction, so training solves

\min_{\theta,\theta'} \sum_{i=1}^{m} \ell(y_i, \hat{y}_i) = \min_{\theta,\theta'} \sum_{i=1}^{m} \ell\left(y_i, \theta^{\top} f(x_i; \theta')\right)

Over 50 million parameters to fit; gradient descent runs for a few days. 17
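
Continuing the sketch above (and reusing its feature_extractor and nightlight_head), the objective can be minimized with stochastic gradient descent over both the CNN parameters θ' and the linear weights θ; the cross-entropy loss and optimizer settings here are illustrative assumptions.

```python
# One illustrative SGD step on the proxy task; in practice this loops over
# millions of (image, nightlight class) pairs for days.
import torch
import torch.nn as nn
import torch.optim as optim

model = nn.Sequential(feature_extractor, nightlight_head)   # from the previous sketch
criterion = nn.CrossEntropyLoss()                            # plays the role of l(y_i, y_hat_i)
optimizer = optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

images = torch.randn(8, 3, 224, 224)          # dummy batch of daytime images
labels = torch.randint(0, 3, (8,))            # dummy nightlight classes
optimizer.zero_grad()
loss = criterion(model(images), labels)       # l(y_i, theta^T f(x_i; theta'))
loss.backward()
optimizer.step()
```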

Transfer learning. Feature learning: inputs are daytime satellite images, the nonlinear mapping produces features f_1, f_2, ..., f_4096, and the outputs are nighttime light intensities {Low, Medium, High}. The target task is poverty. Have we learned to identify useful features? 18

Model learns relevant features automatically. [Figure panels: satellite image, filter activation map, overlaid image] 19

Target task: binary poverty classification. Living Standards Measurement Study (LSMS) data in Uganda (World Bank): households report consumption expenditures along with features such as roof type, number of rooms, and distance to major road. Task: predict whether the majority of households in a cluster are above or below the poverty line. 20
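
A hedged sketch of this target task: with one feature vector per survey cluster (for example, the average over the cluster's images), a simple classifier can be fit to the above/below-poverty-line labels. The data below are synthetic stand-ins for the LSMS-derived labels, and logistic regression is an assumption rather than the exact classifier used in the talk.

```python
# Cluster-level poverty classification sketch (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
cluster_features = rng.normal(size=(300, 4096))     # one 4096-dim image feature vector per cluster
above_poverty_line = rng.integers(0, 2, size=300)   # 1 if most households in the cluster are above the line

clf = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(clf, cluster_features, above_poverty_line, cv=5)
print("cross-validated accuracy:", accuracy.mean())
```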

How does our model compare? The survey-based model is the gold standard for accuracy, but it relies on expensively collected data and is difficult to scale, not comprehensive in coverage. [Bar chart of accuracy: Survey 0.75, Nightlights 0.53, Transfer 0.71] 21

Transfer learning model approaches survey accuracy. Advantages of the transfer learning approach: it relies on inexpensive, publicly available data, and it is globally scalable and doesn't require unifying disparate datasets. [Bar chart of accuracy, as above: Survey 0.75, Nightlights 0.53, Transfer 0.71] 22

Our model maps poverty at high resolution. [Map panels: smoothed predictions, district aggregated, official map (2005)] Case study: Uganda - Most recent poverty map over a decade old - Lack of ground truth highlights need for more data 23

We can distinguish different levels of poverty. Two continuous measures of wealth: consumption expenditures and household assets. We outperform recent methods based on mobile call record data (Blumenstock et al. (2015), Predicting Poverty and Wealth from Mobile Phone Metadata, Science). 24
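
For the two continuous targets, a similar sketch applies: regress the wealth measure on the cluster-level image features and score with cross-validated r^2. Ridge regression and the synthetic data below are illustrative assumptions.

```python
# Continuous wealth prediction sketch (synthetic data).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
cluster_features = rng.normal(size=(300, 4096))   # cluster-level image features
log_consumption = rng.normal(size=300)            # e.g., log consumption expenditures per capita

r2 = cross_val_score(Ridge(alpha=1.0), cluster_features, log_consumption, cv=5, scoring="r2")
print("cross-validated r^2:", r2.mean())
```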

Transfer learning. The same pipeline (daytime satellite images → nonlinear mapping → features f_1, f_2, ..., f_4096 → nighttime light intensities {Low, Medium, High}) is reused for the target tasks: expenditures and assets. 25

Models travel well across borders. Models trained in one country perform well in other countries, and can make predictions in countries where no data exist at all. 26

What do we still need? Develop models that account for spatial and temporal dependencies of poverty and health 27

Take full advantage of the incredible richness of the images 28

A new approach based on satellite imagery: we have introduced an accurate, inexpensive, and scalable way to predict poverty and wealth. 29
