How to generate wildfire boundary maps with Earth Engine

Google Earth and Earth Engine
Aug 20, 2020

By Christophe Restif & Avi Hoffman, Senior Software Engineers, Crisis Response

Editor’s note: In 2019, we piloted a new feature in Search SOS Alerts for major California wildfires. We used satellite data to create a wildfire boundary map that helps people immediately see the approximate size and location for a given fire. Now, we’re launching this feature on Search and Google Maps in the U.S. to provide deeper insights for areas impacted by an ongoing wildfire. Here we’ll show you how we generate wildfire boundary maps using Earth Engine.

Wildfire boundary maps are now available with SOS Alerts on Search (left) and Maps (right) in the U.S.

Wildfires affect many countries and are getting more frequent and intense with climate change. In the USA alone, wildfires cost billions of dollars every year and impact people’s lives in profound ways. Mapping where they happen and how they grow provides critical information for a wide range of people. For first responders and communities near a wildfire, knowing where a blaze has been developing in recent hours can help inform evacuation orders and suppression plans. For long-term analysis and planning, looking back at where and how fires developed can improve prevention and management of future wildfires.

Google Earth Engine provides out-of-the-box tools that can identify and display the outline of a wildfire. It offers free access to public satellite image datasets focused on wildfire applications, along with computing power to process the image data, and display libraries to visualize wildfire activity.

In this post, we’ll illustrate how to put all the pieces together, going from satellite imagery to map outlines, using the Kincade fire (California, 2019) as an example. We’ll start with an overview of some of the relevant satellite imagery and then demonstrate how to turn pixelated image data into outlines of affected areas.

Using data from NOAA’s GOES satellites and Google Earth Engine, we create a digital polygon to represent the approximate wildfire impact area on Search and Google Maps.

Satellite images: insights from space

Wildfires can be detected from airborne and space-based platforms. Each technique has its pros and cons. Flying near or over an active fire can provide data with very high spatial and temporal precision. But it's not always an option: it carries significant operational costs and does not scale easily to entire states and countries.

Satellites offer a global alternative for wildfire imaging. They detect a wide range of reflected and emitted wavelengths from visible to infrared (IR), allowing identification of smoke, burned areas, and active fires. There has been significant research to develop accurate fire detection algorithms to distinguish whether heat picked up by IR bands is emitted by wildfire or some other source.

Image series showing smoke from the Apple Fire (California, 2020), captured by the GOES-17 satellite and produced in Earth Engine.

Keeping an eye on earth from geostationary orbit

Geostationary satellites orbit the Earth while continuously observing the same spot. The GOES 16 and 17 satellites carry the Advanced Baseline Imager (ABI), which provides the underlying data for the FDC fire detection algorithm. These satellites offer relatively high temporal resolution, taking images every 5 to 15 minutes. However, each satellite has a limited view that covers less than a hemisphere. And since they orbit over 22,000 miles away, their spatial resolution is fairly coarse: the smallest pixels are about 2 km wide at nadir, and grow larger with distance from nadir.

In Earth Engine, these data are provided as ee.ImageCollection('NOAA/GOES/16/FDCF') and ee.ImageCollection('NOAA/GOES/17/FDCF').

Having access to multiple geostationary satellites is a great way to reduce these spatial limitations. An area covered by two or more satellites benefits from two independent sources of data, observed with two different pixel geometries.

Detecting wildfire signals from polar-orbiting satellites

Polar-orbiting satellites fly at a lower altitude than geostationary satellites, and provide higher spatial resolution as a result; they also cover the entire Earth's surface over time. Onboard the Suomi NPP satellite, the VIIRS instrument collects wildfire-related signals at 375 m resolution. Onboard the Terra and Aqua satellites, the MODIS instrument collects wildfire-related signals at 1 km resolution. On the other hand, their temporal resolution is lower, as they cover the same Earth footprint only a few times per day, and they transmit their data with a few hours of latency.

These polar-orbiting satellites offer important data for historical analysis and provide consistent pixel resolution for all regions of the Earth. This is particularly valuable for areas near the poles, like Alaska, where the pixel distortion from geostationary satellites can be very high.

Earth Engine provides the Thermal Anomalies and Fire Daily Global 1km dataset from the MODIS sensor on the Terra satellite as ee.ImageCollection('MODIS/006/MOD14A1').

Processing wildfire imagery data

In this post, we assume the intention is to delineate a single wildfire that is already known. Our input is a rough estimate of the region where it's developing, and a rough estimate of the start and end dates. Sites providing such data include NASA's Fire Information for Resource Management System (FIRMS), the EC Global Wildfire Information System (GWIS), and the US National Wildfire Coordinating Group (NWCG) InciWeb.

We’ll illustrate how to process wildfire imagery data using the California Kincade fire, whose region is covered by GOES 16 and 17. To keep the illustration simple, we won’t be integrating polar-orbiting data here.

Using a single satellite

The data provided for each of GOES 16 and 17 are the results of the FDC algorithm. They're generated every 15 minutes on a grid of over 5,400 by 5,400 pixels, centered on the equator at the GOES East position (75 degrees west longitude) for GOES 16 and at the GOES West position (138 degrees west longitude) for GOES 17.

Wildfire likelihood detected from GOES 16 [left] and 17 [right]. Every frame represents a 1-hour period from Oct 25 to Oct 28, 2019. The range of colors represents the detected likelihood, from yellow (low) to orange, red, and purple (high).

The FDC algorithm provides a “fire mask code” for each pixel, with a few dozen predefined values. Here we focus on the following code values: processed fire pixel (value 10), saturated fire pixel (11), cloud contaminated fire pixel (12), high probability fire pixel (13), medium probability fire pixel (14), and low probability fire pixel (15), along with the corresponding “temporally filtered” code values: temporally filtered processed fire pixel (30), temporally filtered saturated fire pixel (31), and so on. These mask codes (10–15 and 30–35) indicate that the pixel is believed to cover a wildfire, with varying degrees of confidence.

Since we already know which region and time range to look at, we start by filtering the image collection by date and bounds.

// Time and location of the fire.
var kincade = {
  longitude: -122.8,
  latitude: 38.7,
  start: '2019-10-23',
  end: '2019-11-06',
};
// Region of interest.
var radius_of_interest_meters = 40000;
var area_of_interest = ee.Geometry.Point([kincade.longitude, kincade.latitude])
    .buffer(radius_of_interest_meters);
// Satellite data.
var goes_16_data = ee.ImageCollection('NOAA/GOES/16/FDCF')
    .filterDate(kincade.start, kincade.end)
    .filterBounds(area_of_interest);
var goes_17_data = ee.ImageCollection('NOAA/GOES/17/FDCF')
    .filterDate(kincade.start, kincade.end)
    .filterBounds(area_of_interest);

The next step is to convert each mask code to a confidence value, between 0 and 1. Note that these values are arbitrary and for illustrative purposes. In Earth Engine, this is a map operation.

// Conversion from mask codes to confidence values.
var fire_mask_codes = [10, 30, 11, 31, 12, 32, 13, 33, 14, 34, 15, 35];
var confidence_values = [1.0, 1.0, 0.9, 0.9, 0.8, 0.8, 0.5, 0.5, 0.3, 0.3, 0.1, 0.1];
var default_confidence_value = 0;
var map_from_mask_codes_to_confidence_values = function(image) {
  return image
      .clip(area_of_interest)
      .remap(fire_mask_codes, confidence_values, default_confidence_value);
};
var goes_16_confidence = goes_16_data
    .select(['Mask'])
    .map(map_from_mask_codes_to_confidence_values);
var goes_17_confidence = goes_17_data
    .select(['Mask'])
    .map(map_from_mask_codes_to_confidence_values);

As the wildfire evolves and gets extinguished, mask codes of a given pixel will change to reflect wildfire status over the time range we’ve filtered. The time series of mask code values contains a lot of valuable information. In this post, we’re interested in generating the outline of the affected area throughout the event. As a result, we’re going to summarize the time series with a single value: the maximum confidence that was assigned to each pixel over the wildfire time range. This is done in Earth Engine with a reduce operation, using the max reducer operator. Note that other operators would give different insights into the temporal evolution of the wildfire.

var goes_16_max_confidence = goes_16_confidence
    .reduce(ee.Reducer.max());
var goes_17_max_confidence = goes_17_confidence
    .reduce(ee.Reducer.max());

We can visualize this initial data-processing step for each satellite:

var affected_area_palette = ['white', 'yellow', 'orange', 'red', 'purple'];
Map.centerObject(area_of_interest, 9);
Map.addLayer(area_of_interest,
    {color: 'green'},
    'Area of interest', true, 0.2);
Map.addLayer(goes_16_max_confidence,
    {opacity: 0.3, min: 0, max: 1, palette: affected_area_palette},
    'GOES 16 maximum confidence');
Map.addLayer(goes_17_max_confidence,
    {opacity: 0.3, min: 0, max: 1, palette: affected_area_palette},
    'GOES 17 maximum confidence');

Maximum confidence pixels seen from GOES 16 [left] and GOES 17 [right] over the time range and the region around the Kincade fire. Color scales from yellow to purple as confidence increases from 0 to 1; much of the fire-affected area is considered high confidence.

Combining two satellites

Since GOES 16 and 17 cover the same area with the same temporal frequency, it is fairly straightforward to combine their data. It's worth noting that the pixels they observe do not coincide. Instead, intersecting the pixels from GOES 16 with those from GOES 17 gives a new pixel grid at a finer grain.

Intersection of pixels from GOES 16 (in red) and 17 (in blue) over the area of interest; shading has been added to every other pixel to improve interpretation.

In that new grid, each fragmented pixel receives one observation from GOES 16 and another from GOES 17, independent of each other. That gives us an advantage: while GOES 16 may mark an entire pixel with a certain mask value, GOES 17 may use a different value for part of that pixel. This effectively increases the spatial resolution of the combined data.

Once again, there are several ways to reduce the two numbers per pixel to a single one. In this post we keep the minimum value — the confidence of a wildfire being in a fragmented pixel is the lowest value measured from GOES 16 and 17. Here as well, other operators would give different insights.

var combined_confidence = ee.ImageCollection([goes_16_max_confidence,
                                              goes_17_max_confidence])
    .reduce(ee.Reducer.min());
Map.addLayer(combined_confidence,
    {opacity: 0.3, min: 0, max: 1, palette: affected_area_palette},
    'Combined confidence');

The confidence of the wildfire-affected area after combining the two satellites’ data.

Generating the final outline

Smoothing the satellite pixels

At this point, the processed data looks like a patchwork of polygonal regions, each with sharp corners and a constant confidence value. We can smooth both the polygonal corners and the step-like variations of confidence in a single Earth Engine operation, ee.Image.reduceNeighborhood. It requires choosing a shape, known as a kernel, which defines a region of interest around each pixel, and a reducer operator, which combines all the values in that region into a single new value. Earth Engine offers a range of built-in kernels (such as ee.Kernel.circle) and reducers (such as ee.Reducer.mean). Each kernel shape and size and each reducer will produce a different smoothed image.

var kernel = ee.Kernel.square(2000, 'meters', true);
var smoothed_confidence = combined_confidence
    .reduceNeighborhood({reducer: ee.Reducer.mean(),
                         kernel: kernel,
                         optimization: 'boxcar'});
Map.addLayer(smoothed_confidence,
    {opacity: 0.3, min: 0, max: 1, palette: affected_area_palette},
    'Smoothed confidence');

The confidence of the wildfire-affected area (from yellow to purple) after smoothing the combined values.

Converting to an outline

After the smoothing operation above, the processed data is still an image of confidence values. A straightforward approach is to use the ee.Image.gt operation to keep the parts of the image having confidence above a chosen threshold value.

var high_confidence = smoothed_confidence
    .gt(0.6);
Map.addLayer(high_confidence,
    {opacity: 0.3, min: 0, max: 1, palette: affected_area_palette},
    'High confidence');

The confidence of the wildfire-affected area after thresholding the smoothed values.

The next step is to convert the resulting binary image into an outline — typically generating a multi-polygon in case there are multiple regions in the image above the threshold. In Earth Engine, this is achieved with the ee.Image.reduceToVectors operation.

var affected_areas = high_confidence
    .reduceToVectors({scale: 200,  // 200 m/pixel
                      maxPixels: 1e10,
                      geometry: area_of_interest})
    .filter(ee.Filter.eq('label', 1));
var affected_areas_outline = ee.Image().byte()
    .paint({featureCollection: affected_areas,
            width: 2});
Map.addLayer(affected_areas_outline,
    {palette: 'purple'},
    'Affected areas', true, 0.3);

The outline of the wildfire-affected area around the thresholded confidence values.

Smoothing the final outline

The reduceToVectors operation typically generates outlines with staircase-like edges. Depending on the scale parameter used in that operation, the staircase steps will be more or less pronounced.

Closeup of the wildfire-affected area, showing the staircase-like edges.

One way to smooth such edges is to use Earth Engine's ee.Feature.simplify operation, specifying an acceptable error margin — the greater the error margin, the simpler the shape.

var smooth = function(feature) {
  var max_error_meters = 500;
  return ee.Feature(feature).simplify(max_error_meters);
};
var affected_areas_smoothed = ee.FeatureCollection(affected_areas)
    .map(smooth);
var affected_areas_smoothed_outline = ee.Image().byte()
    .paint({featureCollection: affected_areas_smoothed,
            width: 2});
Map.addLayer(affected_areas_smoothed_outline,
    {palette: 'purple'},
    'Smoothed affected areas', true, 0.3);

Closeup of the smoothed wildfire-affected area [left] | Final outline of the wildfire-affected area [right].

There are alternative ways to smooth such polygons, including averaging neighboring vertices. Each method will produce a slightly different smoothed outline.
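As a sketch of the vertex-averaging alternative (plain JavaScript, not an Earth Engine API; the polygon coordinates are made up), each vertex of a closed ring can be replaced by the average of itself and its two neighbors:

```javascript
// Smooth a closed polygon ring by replacing each vertex with the
// average of itself and its two neighbors (indices wrap around).
function smoothRing(ring) {
  var n = ring.length;
  return ring.map(function(v, i) {
    var prev = ring[(i + n - 1) % n];
    var next = ring[(i + 1) % n];
    return [(prev[0] + v[0] + next[0]) / 3,
            (prev[1] + v[1] + next[1]) / 3];
  });
}

// A square with a staircase notch; one pass softens the sharp corners,
// and repeated passes smooth the ring further (while shrinking it slightly).
var ring = [[0, 0], [4, 0], [4, 2], [3, 2], [3, 3], [4, 3], [4, 4], [0, 4]];
console.log(smoothRing(ring));
```

Unlike simplify, which removes vertices within an error margin, this approach keeps the vertex count constant and instead relaxes the corners, so the two methods trade off fidelity and smoothness differently.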

Evaluating your results

We described how to use Earth Engine's public datasets and available libraries to compute outlines around the affected areas of a known wildfire. There are several parameters to choose along the way, each affecting the final outcome. This is a good reminder that the generated outlines can only match the reality on the ground within a certain spatial and temporal resolution. Evaluating the quality of the results and tuning the parameters is an important task, which can rely on publicly available official outline datasets, such as those provided by the US National Interagency Fire Center (NIFC).
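One simple way to quantify agreement with an official outline (a sketch for illustration, not part of the pipeline above) is to rasterize both the generated and the official polygon onto the same grid and compute the intersection-over-union of the two binary masks:

```javascript
// Intersection-over-union of two binary masks of equal length,
// e.g. a generated outline and an official one rasterized on one grid.
// 1.0 means perfect overlap; 0.0 means no overlap at all.
function iou(maskA, maskB) {
  var intersection = 0;
  var union = 0;
  for (var i = 0; i < maskA.length; i++) {
    if (maskA[i] && maskB[i]) intersection++;
    if (maskA[i] || maskB[i]) union++;
  }
  return union === 0 ? 0 : intersection / union;
}

// Toy example: the two masks agree on 3 cells, and 5 cells are
// burned in at least one of them.
var generated = [1, 1, 1, 1, 0, 0];
var official  = [0, 1, 1, 1, 1, 0];
console.log(iou(generated, official));  // 3 / 5 → 0.6
```

Tracking a metric like this while varying the confidence threshold, kernel size, or simplification margin makes the parameter tuning described above measurable rather than purely visual.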

We hope learning more about how we approached this wildfire project will inspire others to work with tools like Earth Engine and publicly available Earth observation data to develop new and innovative approaches to pressing global issues.
