Under the thick forest of Mexico’s Yucatán Peninsula, the ancient ruins of a Maya city have been uncovered with the use of remote sensing.
Of course, that wasn’t the outcome that Woodwell Climate’s Chief Scientific Officer, Dr. Wayne Walker, anticipated when he and his team collected and processed the remote sensing dataset for an unrelated project nearly a decade ago.
Walker’s team was mapping the region as part of the Mexico REDD+ project, a collaborative, international effort to explore strategies for reducing emissions from deforestation and forest degradation in the country. Using a remote sensing technology called LiDAR, which scans terrain from a low-flying plane using pulses of laser light, Walker and project collaborators created a comprehensive map of forests—and the carbon they contain—across Mexico.
Walker and team coordinated the flights and processed the raw data for use in the project, uploading it afterwards to a website for public use. But, once the project ended, he all but forgot about the effort, apart from responding occasionally to researchers interested in downloading the dataset for their own work.
One of those researchers was Luke Auld-Thomas, a PhD candidate at Tulane University researching the Classic Maya civilization, which thrived in the Yucatán until the 9th century, when much of the region was abandoned, though Maya culture and languages persist to this day. Because of its unique ability to provide a detailed three-dimensional picture of whatever features are present on the ground, LiDAR imagery is an incredibly powerful tool for a multitude of purposes, from climate science to archaeology. And while the Mexico REDD+ project was interested in documenting the forests, Auld-Thomas was interested in what might be hidden beneath them.
“One scientist’s noise is another’s entire field of study,” says Walker. “In our other projects, like Climate Smart Martha’s Vineyard, we see historical structures like stone walls that aren’t necessarily meaningful to our work but could be of interest to archaeologists.”
In Mexico, the large areas surveyed by Woodwell Climate revealed not just individual human-built structures, but the plazas, reservoirs, and ball courts of an entire, previously undocumented city. The discovery, published in the journal Antiquity, supported the theory that the region was, in fact, densely settled during the height of Classic Maya civilization.
“We knew that it was close to a lot of interesting sites and settlements— areas of large-scale landscape modification that had been mapped and studied— but none of the survey areas themselves were actually places that archaeologists ever worked, making it a really exciting sample to work with,” said Auld-Thomas.
Auld-Thomas had specifically been on the hunt for a pre-existing LiDAR dataset like the one Walker helped create— a survey conducted for completely non-archaeological purposes and therefore free of any biases, essentially a “random sample” of the region. That randomness, and the subsequent discovery of an entire city, allowed Auld-Thomas and his colleagues to more strongly argue their point about intense urbanization in the Yucatán.
“If you’re only going to places where you know there’s going to be something, then of course, you’re going to find something significant, right? These random samples, not collected for archaeological purposes, are gold in some respects,” said Dr. Marcello Canuto, who co-authored the paper. Canuto directs the Middle American Research Institute at Tulane, where the research for this study was conducted.
The unexpected outcome of the LiDAR survey offers a textbook example of the value of open data access. Sharing data and resources both within and between fields of science can jumpstart discovery and distribute the costs of an otherwise expensive data collection effort.
“Just look at what came out of the moonshot,” says Canuto. Thousands of technologies, developed in humanity’s pursuit of the moon landing, have found unforeseen applications in today’s world— including LiDAR.
“Certainly, many of us have produced datasets that have led to incremental advances in closely related fields,” says Walker. “But here is a special case of open source data advancing discovery in an entirely unrelated field of study.”
Advancements across fields continue to better our understanding of the world around us. And the lessons learned from a civilization like the Maya have very real parallels to today’s climate crisis.
As Auld-Thomas and Canuto show, the Maya densely settled the Yucatán Peninsula, maxing out the capacity of the surrounding environment to support their population. And then the regional climate shifted. A long-term drought settled in, resources became scarcer, governments became unstable, people started leaving the cities, and the infrastructure of the larger civilization collapsed.
“The reason environmental scientists collect LiDAR data of the forest is that they are trying to understand environmental processes in order to help human societies conserve the landscape,” says Auld-Thomas. “As archaeologists, we try to understand how people in these exact environmental contexts have confronted deforestation and climate change and all of these other things before.”
For Canuto, the lesson to be learned lies not just in the environmental perils, but in the societal ones. What complex societies hate— be they the Classic Maya or our own— is a lack of predictability. If a system cannot adapt, it will fail.
“The collapse was more than just climate change,” says Canuto. “It was a failure of a political system to respond to climate change.”
“It’s been around a long time, actually,” muses Senior Scientist Dr. Jennifer Francis. “It’s gotten more sophisticated, sure, and a lot of the applications are new. But the concept of artificial intelligence is not.”
Dr. Francis has been working with it for almost two decades, in fact. Back when she started working with the research tools called “neural networks,” though, they were less widely known in climate science and weren’t generally referred to as artificial intelligence.
But recently, AI seems to have come suddenly out of the woodwork, infusing nearly every field of research, analysis, and communication. Climate science is no exception. From mapping thawing Arctic tundra to tracking atmospheric variation, and even transcribing the audio interviews for this story into text, AI in varying forms is woven into the framework of how Woodwell Climate creates new knowledge.
The umbrella term “artificial intelligence” encompasses a broad set of tools that can be trained to do tasks as diverse as imitating human language (à la ChatGPT), playing chess, categorizing images, solving puzzles, and even restoring damaged ancient texts.
Dr. Francis uses AI to study variations in atmospheric conditions, most recently weather whiplash events— when one stable weather pattern suddenly snaps to a very different one (think a months-long drought in the West disrupted by torrential rain). Her method of choice, called a self-organizing map, does just what the name suggests: it automatically organizes atmospheric data into a matrix of characteristic maps, which lets Dr. Francis detect when conditions snap from one pattern to another.
“This method is perfect for what we’re looking for because it removes the human biases. We can feed it daily maps of, say, what the jet stream looks like, and then the neural network finds characteristic patterns and tells us exactly which days the atmosphere is similar to each pattern. There are no assumptions,” says Dr. Francis.
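For readers curious about the mechanics, here is a minimal sketch of a self-organizing map in Python, built on synthetic data. It is purely illustrative, not the code behind Dr. Francis’s research, and the grid size, learning schedule, and stand-in “daily maps” are assumptions chosen for brevity.

```python
# Minimal self-organizing map sketch (illustrative only). Each "sample" stands
# in for one day's flattened atmospheric field; the SOM sorts the days into a
# small grid of characteristic patterns.
import numpy as np

rng = np.random.default_rng(0)
n_days, n_gridcells = 1000, 50                 # synthetic stand-ins for daily maps
days = rng.normal(size=(n_days, n_gridcells))

rows, cols = 3, 4                              # a 3x4 matrix of "characteristic maps"
nodes = rng.normal(size=(rows * cols, n_gridcells))
coords = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)

def winner(sample):
    """Index of the node whose pattern best matches this day."""
    return np.argmin(np.linalg.norm(nodes - sample, axis=1))

for step in range(5000):
    sample = days[rng.integers(n_days)]
    w = winner(sample)
    lr = 0.5 * np.exp(-step / 2000)            # learning rate shrinks over time
    sigma = 2.0 * np.exp(-step / 2000)         # neighborhood radius shrinks too
    dist = np.linalg.norm(coords - coords[w], axis=1)
    influence = np.exp(-(dist ** 2) / (2 * sigma ** 2))
    nodes += lr * influence[:, None] * (sample - nodes)

# Assign every day to its characteristic pattern; abrupt jumps between very
# different patterns on consecutive days would flag "weather whiplash."
assignments = np.array([winner(d) for d in days])
print(np.bincount(assignments, minlength=rows * cols))
```

The key design idea is that the map’s nodes compete to represent each day, so similar days end up clustered on the same node without anyone telling the algorithm in advance what patterns to look for.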
This aptitude for pattern recognition is a core function of many types of neural networks. In the Arctic program, AI is used to churn through thousands of satellite images and detect patterns that indicate specific features in the landscape, using a technique originally honed in the medical field to read CT scan images.
Data science specialist Dr. Yili Yang uses AI models trained to identify features called retrogressive thaw slumps (RTS) in permafrost-rich regions of the Arctic. Thaw slumps form where ice-rich permafrost thaws and the ground subsides, and they can be indicators of broader thawing across the landscape, but they are hard to identify in images.
“Finding one RTS is like finding a single building in a city,” Dr. Yang says. It’s time-consuming, and it really helps if you already know what you’re looking for. The team’s trained neural network can pick the features out of high-resolution satellite imagery with fairly high accuracy.
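As a rough illustration of how such a detector can be structured, the sketch below builds a tiny fully convolutional network in PyTorch that turns a multi-band satellite tile into a per-pixel probability map. It is not the model Dr. Yang’s team uses; the architecture, band count, and tile size are assumptions kept deliberately small.

```python
# Illustrative sketch of flagging features like thaw slumps in satellite
# imagery with a convolutional neural network (NOT the team's actual model).
import torch
import torch.nn as nn

class TinySlumpSegmenter(nn.Module):
    """Fully convolutional network: image in, per-pixel 'slump probability' out."""
    def __init__(self, in_channels=4):          # e.g., R, G, B + near-infrared bands
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=1),     # one output channel: slump vs. not
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))        # probabilities in [0, 1] per pixel

model = TinySlumpSegmenter()
tile = torch.rand(1, 4, 256, 256)                # one synthetic 256x256 image tile
prob_map = model(tile)                           # same height/width as the input
print(prob_map.shape)                            # torch.Size([1, 1, 256, 256])
```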
Research Assistant Andrew Mullen uses a similar tool to find and map millions of small water bodies across the Arctic. A neural network generated a dataset of these lakes and ponds so that Mullen and other researchers could track seasonal changes in their surface area.
And there are opportunities to use AI not just for the data creation side of research, but trend analysis as well. Associate Scientist Dr. Anna Liljedahl leads the Permafrost Discovery Gateway project, which used neural networks to create a pan-Arctic map of ice wedge polygons—another feature that indicates ice-rich permafrost in the ground below and, if altered over time, could suggest permafrost thaw.
“Our future goals for the Gateway would utilize new AI models to identify trends or patterns or relationships between ice wedge polygons and elevation, soil or climate data,” says Dr. Liljedahl.
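A toy version of the kind of relationship-hunting Dr. Liljedahl describes might look like the Python sketch below, which fits a simple line between synthetic ice wedge polygon coverage and elevation. Both the data and the linear model are stand-in assumptions; the Gateway’s future analyses would be far richer.

```python
# Hedged sketch: relate mapped ice wedge polygon coverage to elevation.
# The data are synthetic and the simple linear fit is only an illustration.
import numpy as np

rng = np.random.default_rng(42)
elevation_m = rng.uniform(0, 300, size=500)                    # hypothetical grid cells
polygon_cover = 0.4 - 0.001 * elevation_m + rng.normal(0, 0.05, size=500)
polygon_cover = polygon_cover.clip(0, 1)                       # fraction of each cell covered

slope, intercept = np.polyfit(elevation_m, polygon_cover, deg=1)
r = np.corrcoef(elevation_m, polygon_cover)[0, 1]
print(f"cover ~= {intercept:.3f} + {slope:.5f} * elevation (r = {r:.2f})")
```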
The projects above are examples of neural-network-based AI. But how do they actually work?
The comparison to human brains is apt. The networks are composed of interconnected, mathematical components called “neurons.” Also like a brain, the system is a vast web of these neurons, with the largest models containing billions of connections between them. Each neuron passes a fragment of information on to the next, and the way those neurons are organized determines the kind of tasks the model can be trained to do.
“How AI models are built is based on a really simple structure—but a ton of these really simple structures stacked on top of each other. This makes them complex and highly capable of accomplishing different tasks,” says Mullen.
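Mullen’s “simple structure, stacked” description can be made concrete in a few lines of Python: each artificial neuron is just a weighted sum passed through a simple nonlinearity, and a network is layer upon layer of the same operation. The sizes below are arbitrary assumptions for illustration.

```python
# A minimal illustration of "simple structures, stacked": one layer is a
# weighted sum plus a nonlinearity, and a network is several layers in a row.
import numpy as np

rng = np.random.default_rng(1)

def layer(inputs, weights, biases):
    """One layer of neurons: weighted sums, then a simple nonlinearity (ReLU)."""
    return np.maximum(0.0, inputs @ weights + biases)

x = rng.normal(size=8)                              # 8 input values
w1, b1 = rng.normal(size=(8, 16)), np.zeros(16)     # layer 1: 16 neurons
w2, b2 = rng.normal(size=(16, 16)), np.zeros(16)    # layer 2: 16 neurons
w3, b3 = rng.normal(size=(16, 1)), np.zeros(1)      # output layer: 1 neuron

h = layer(x, w1, b1)        # the same simple operation...
h = layer(h, w2, b2)        # ...stacked again...
y = h @ w3 + b3             # ...and once more for the output
print(y)
```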
In order to accomplish these highly specific tasks, the model has to be trained. Training involves feeding the AI input data and then telling it what the correct output should look like. The process is called supervised learning, and it’s functionally similar to teaching a student by showing them the correct answers to a quiz ahead of time, then testing them, and repeating this cycle over and over until they can reliably ace each test.
In the case of Dr. Yang’s work, the model was trained using input satellite images of the Arctic tundra with known retrogressive thaw slump features. The model outputs possible thaw slumps, which are then compared to the RTS labels hand-drawn by Research Assistant Tiffany Windholz. It then assesses the similarity between the prediction and the true slump, and automatically adjusts its internal connections to improve the match. Do this a thousand times and the internal structure of the AI starts to learn what to look for in an image. Sharp change in elevation? Destroyed vegetation and no pond? Right geometry? That’s a potential thaw slump.
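In code, that compare-and-adjust cycle is simply a loop. The sketch below shows a generic supervised training loop in PyTorch on synthetic images and labels; it illustrates the idea rather than reproducing the team’s actual thaw slump training code, and the tiny model, loss function, and data shapes are all assumptions.

```python
# Generic supervised training loop on synthetic data (illustrative only).
import torch
import torch.nn as nn

model = nn.Sequential(                      # a deliberately tiny stand-in model
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 1, kernel_size=1),
)
loss_fn = nn.BCEWithLogitsLoss()            # measures prediction-vs-label mismatch
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

images = torch.rand(16, 3, 64, 64)                     # synthetic "satellite" tiles
labels = (torch.rand(16, 1, 64, 64) > 0.9).float()     # synthetic hand-drawn masks

for epoch in range(100):                    # repeat the quiz over and over
    predictions = model(images)             # the model's guess at where slumps are
    loss = loss_fn(predictions, labels)     # how far the guess is from the labels
    optimizer.zero_grad()
    loss.backward()                         # work out how to adjust each connection
    optimizer.step()                        # nudge the connections to do better
```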
Just as it would be impossible to pull out any single neuron from a human brain and determine its function, the complexity of a neural network makes the internal workings of AI difficult to detail—Mullen calls it a “black box”—but with a large enough training set you can refine the output without ever having to worry about the internal workings of the machine.
Despite its reputation in pop culture, and the uncannily human way these algorithms can learn, AI models are not replacing human researchers. In their present form, neural networks aren’t capable of constructing novel ideas from the information they receive—a defining characteristic of human intelligence. The information that comes out of them is limited by the information they were trained on, in both scope and accuracy.
But once a model is trained with enough accurate data, it can perform in seconds a task that might take a human half an hour. Multiply that across a dataset of 10,000 individual images and it can condense months of image processing into a few hours. And that’s where neural networks become crucial for climate research.
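A quick back-of-envelope check makes the scale of that speedup concrete; the per-image times below are illustrative assumptions rather than measured figures.

```python
# Rough arithmetic behind "months of image processing into a few hours."
n_images = 10_000
person_months = n_images * 0.5 / (8 * 21)     # ~30 min each; 8-hour days, 21 days/month
model_hours = n_images * 2 / 3600             # ~2 seconds each once the model is trained
print(f"manual: ~{person_months:.0f} person-months   model: ~{model_hours:.1f} hours")
```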
“They’re able to do that tedious, somewhat simple work really fast,” Mullen says. “Which allows us to do more science and focus on the bigger picture.”
Dr. Francis adds, “They can also elucidate patterns and connections that humans can’t see by gazing at thousands of maps or images.”
Another superpower of these AI models is their capacity for generalization. Train a model to recognize ponds or ice wedges or thaw slumps with enough representative images, and you can use it to identify those features across the entire Arctic—even in places that would be hard to reach for field data collection.
All these qualities dramatically speed up the pace of research, which is critical as the pace of climate change itself accelerates. The faster scientists can analyze and understand changes in our environment, the better we’ll be able to predict, adapt to, and maybe lessen the impacts to come.