By visualizing the potential consequences of a hurricane on residential properties before it makes landfall, individuals can better prepare and make informed decisions about evacuation plans.
Researchers at MIT have developed a method that generates satellite-style images from the future, depicting how a region would look in the event of a potential flood. The approach combines a generative artificial intelligence model with a physics-based flood model to produce realistic, bird's-eye-view images of a region, showing which areas are likely to flood given the strength of an oncoming storm.
As a test case, the team applied the method to Houston, generating satellite images depicting how certain parts of the city would appear after a storm like Hurricane Harvey, which devastated the region in 2017. They compared these generated images to actual satellite images taken of the same regions after Harvey struck. They also compared them to AI-generated images that did not incorporate the physics-based flood model.
The physics-reinforced method produced more realistic and accurate satellite images of future flooding. The AI-only method, by contrast, generated images of flooding in places where flooding is not physically possible.
The team's method is a proof of concept, demonstrating that generative AI models combined with physics-based simulations can produce content that is both trustworthy and useful. Before it can be applied to other regions, the method would need to be trained on many more satellite images so it can learn to depict flooding in different areas for future storms.
Björn Lütjens, a postdoctoral researcher in MIT's Department of Earth, Atmospheric and Planetary Sciences, notes that the concept could be deployed before a hurricane hits, providing an additional visualization layer for the public and enhancing situational awareness. One of the biggest challenges is encouraging people to evacuate when they are at risk; this could serve as another visualization tool to help increase that readiness.
To demonstrate the potential of their method, dubbed the "Earth Intelligence Engine," the team has made it available as a web-based platform for others to explore and build upon.
Researchers at MIT, including co-authors Brandon Leshchinskiy, Aruna Sankaranarayanan, and Professor Dava Newman, who heads the AeroAstro department and directs the MIT Media Lab, collaborated with experts from various institutions.
The latest study represents a significant expansion of the team’s ongoing research into leveraging generative AI tools to visualize future climate scenarios.
According to the study’s senior author, offering a highly localized view of weather patterns seems to be the most effective approach for communicating scientific findings. Residents intuitively connect with their specific geographic context, which is deeply tied to their personal experiences and relationships within a familiar neighborhood or community. Simulating localized weather conditions delivers intimate, personalized, and relevant experiences.
The study employs a conditional generative adversarial network (GAN), a machine learning method that produces realistic images by pitting two neural networks against each other in a competitive, or "adversarial," process. The first network, the "generator," is trained on pairs of real data, such as satellite images of a region before and after a hurricane. A second, "discriminator" network is then trained to distinguish between real satellite imagery and the generator's synthetic outputs.
Each network iteratively improves using feedback from the other. The idea is that this adversarial back-and-forth should ultimately produce synthetic images that are virtually indistinguishable from the real thing. Even so, GANs can still generate "hallucinations": features in an otherwise realistic image that are factually wrong and should not be there.
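The adversarial dynamic described above can be made concrete with the standard GAN training objectives. The sketch below is illustrative only, assuming a discriminator that outputs a probability that an image is real; the variable names and example scores are made up and are not from the study.

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    """Binary cross-entropy: the discriminator is rewarded for scoring
    real images near 1 and generated images near 0."""
    return -np.mean(np.log(d_real) + np.log(1.0 - d_fake))

def generator_loss(d_fake):
    """The generator is rewarded when the discriminator is fooled,
    i.e. when its scores on generated images approach 1."""
    return -np.mean(np.log(d_fake))

# Hypothetical scores from a confident, correct discriminator:
d_real = np.array([0.9, 0.95])   # scores on real satellite images
d_fake = np.array([0.1, 0.05])   # scores on generated images
print(round(discriminator_loss(d_real, d_fake), 3))  # 0.157 (low: discriminator winning)
print(round(generator_loss(d_fake), 3))              # 2.649 (high: generator losing)
```

As training alternates between minimizing these two losses, the generator's loss falls only when its images start fooling the discriminator, which is the equilibrium the method relies on.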
"Hallucinations can mislead viewers," notes Lütjens. That risk led the team to ask whether generative AI tools could be made trustworthy enough to support decision-making in high-risk settings where accuracy is crucial, and how such models might be used for climate action, where reliable information matters most.
The researchers conceived of a high-stakes scenario in which generative AI is tasked with creating satellite images of future flooding that are trustworthy enough to inform decisions about how to prepare and potentially evacuate people out of harm's way.
Policymakers typically get an idea of flood-prone areas from color-coded maps that provide an overview of the regions likely to be affected. These maps are the end product of a pipeline of physical models: a hurricane model feeds into a wind model that simulates the pattern and strength of winds over an area; this is combined with a flood or storm surge model that forecasts how wind-driven water may inundate nearby coastlines; finally, a hydraulic model maps the water onto the region's flood infrastructure, yielding a visual, color-coded map of flood elevations over a particular area.
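The model chain described above can be sketched as a sequence of stages. The functions and simplified physics below are illustrative stand-ins, not the actual models used in the pipeline; each stage's output feeds the next, ending in a flood-depth grid that a color-coded map would render.

```python
import numpy as np

def wind_field(storm_intensity, grid_shape):
    """Toy wind model: wind speed decays with distance from the storm
    center. (Stand-in for a real parametric hurricane wind model.)"""
    rows, cols = np.indices(grid_shape)
    center = (grid_shape[0] // 2, grid_shape[1] // 2)
    dist = np.hypot(rows - center[0], cols - center[1])
    return storm_intensity / (1.0 + dist)

def storm_surge(wind, shore_distance):
    """Toy surge model: wind-driven water level, attenuated inland."""
    return wind * np.exp(-shore_distance / 10.0)

def flood_depth(surge, elevation):
    """Toy hydraulic step: water depth is surge height minus terrain
    elevation, floored at zero (no flooding above the water line)."""
    return np.maximum(surge - elevation, 0.0)

# Chain the stages into a flood-elevation grid for an 8x8 region.
shape = (8, 8)
elevation = np.linspace(0, 4, shape[0] * shape[1]).reshape(shape)
shore = np.tile(np.arange(shape[1], dtype=float), (shape[0], 1))
depth = flood_depth(storm_surge(wind_field(30.0, shape), shore), elevation)
print(depth.max() > 0, (depth >= 0).all())  # flooding occurs; depths are non-negative
```

The point of the sketch is the structure, not the physics: each model consumes the previous model's output, so uncertainty and detail compound stage by stage before anything reaches a map.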
The question, according to Lütjens, is whether visualizations in the form of satellite imagery can add another level of tangibility and emotional engagement, beyond a color-coded map of reds, yellows, and blues, while remaining trustworthy.
The team first tested how generative AI alone would produce satellite images of predicted flooding. They trained a GAN on real satellite images of the Houston area taken before and after Hurricane Harvey. When they tasked the generator with producing new flood images of the same regions, they found that the images resembled typical satellite imagery. Closer inspection, however, revealed hallucinations in some images, in the form of floods where flooding is not physically possible, for instance, in locations at higher elevation.
To reduce hallucinations and increase the trustworthiness of the AI-generated images, the team paired the GAN with a physics-based flood model that incorporates real physical parameters and phenomena, such as the trajectory of an approaching hurricane, storm surge, and flood patterns. With this physics-reinforced method, the generated satellite images around Houston depict the same flood extent, pixel by pixel, as forecast by the flood model.
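One crude way to picture the constraint this imposes, as a rough illustration rather than the study's actual conditioning mechanism, is to intersect the generator's flood pixels with the flood extent the physics model predicts, so that no water can appear where the physics says the ground is dry. The masks and function below are hypothetical.

```python
import numpy as np

def constrain_to_flood_extent(generated_flood_mask, physics_flood_mask):
    """Keep generated flood pixels only where the physics model also
    predicts flooding, so water cannot appear in physically dry areas.
    (A simplified stand-in for conditioning the GAN on the flood model.)"""
    return generated_flood_mask & physics_flood_mask

gan_mask = np.array([[1, 1, 0],
                     [0, 1, 1],
                     [1, 0, 0]], dtype=bool)      # GAN output, with hallucinated water
physics_mask = np.array([[1, 0, 0],
                         [0, 1, 1],
                         [0, 0, 1]], dtype=bool)  # physically plausible flood extent
constrained = constrain_to_flood_extent(gan_mask, physics_mask)
print(constrained.astype(int))
# [[1 0 0]
#  [0 1 1]
#  [0 0 0]]
```

Note the bottom-left pixel: the GAN hallucinated water there, the physics model did not, and the constrained result drops it. In the actual method the physics model conditions the image generation itself rather than filtering it afterward, but the net effect is the same agreement on flood extent.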
Integrating machine learning with physics for such a high-stakes scenario requires understanding the planet's systems well enough to forecast outcomes and keep people safe from harm. "We're eager to deploy our generative AI tools directly into the hands of frontline decision-makers at the community level, where they could make a profound impact and potentially save lives."
The research was supported, in part, by the MIT Portugal Program, the DAF-MIT Artificial Intelligence Accelerator, NASA, and Google Cloud.