Mapping floods with forests: Creating time series flood maps through Random Forest modeling

Emine Fidan

North Carolina State University

Co-Authors: N. Nelson

Flood maps are often developed using remotely sensed imagery and high-water mark data to produce static maps of peak flooding conditions. Few approaches exist for generating time series of flood dynamics due to trade-offs between the spatial and temporal resolutions of satellite imagery, a lack of in situ hydrologic measurements, and the challenges of modeling flood dynamics in low-topography landscapes. This project addresses this gap in our capacity to generate flood time series using a dynamic model framework that consists of delineating floodwaters from remotely sensed Sentinel-1 radar imagery, building a Random Forest machine learning model, and applying the model at a daily time step. Predictors in the Random Forest model included daily precipitation observations and geospatial data capturing twelve biophysical and socioeconomic variables (e.g., land cover, elevation, social vulnerability, and population) at the watershed scale. The dynamic model framework was developed in the context of eastern North Carolina, which experienced severe flooding due to Hurricane Florence in September 2018. Results from this work included quantified metrics of the relative importance of individual predictor variables, as well as an evaluation of model accuracy and predictability to inform future model iterations.
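For readers curious about the modeling step, a minimal sketch of how such a Random Forest could be fit in Python with scikit-learn is shown below. The file names, predictor layout, and hyperparameters are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of the model-fitting step, assuming the Sentinel-1-derived
# flood labels and the 13 predictor rasters (daily precipitation plus the 12
# static biophysical/socioeconomic layers) have already been co-registered
# and flattened into per-pixel rows. File names and settings are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# X: (n_pixels, 13) predictor matrix; y: (n_pixels,) binary flood labels
# derived from the Sentinel-1 water delineation for one observation date.
X = np.load("predictors_2018_09_17.npy")    # hypothetical file
y = np.load("flood_labels_2018_09_17.npy")  # hypothetical file

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42
)

rf = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=42)
rf.fit(X_train, y_train)

# Applying the model at a daily time step then amounts to re-predicting with
# that day's precipitation layer while the static predictors stay fixed.
print("Test accuracy:", rf.score(X_test, y_test))
```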

Author E-mail
eneminef@ncsu.edu

Please post comments and questions for the author below.

4 thoughts on “Mapping floods with forests: Creating time series flood maps through Random Forest modeling”

    1. Thanks, Dr. Ayers! And that’s a great point - typically in these types of models one will adjust or eliminate the predictor variables that have low importance so as to make the model more efficient. However, in my case it didn’t make sense to remove any variables or weigh them differently just yet, since my model wasn’t performing well.
      To elaborate, after this first iteration of my model I saw that its predictability (i.e. its agreement as measured by Cohen’s kappa) was not ideal, but I calculated the predictor variable importances anyway to get a sense of how the variables performed. Since the model wasn’t performing well, it seemed unjustified to adjust weights or remove variables just yet, but it was interesting to note which variables outperformed others. For my next model iteration my training data quality is much better, so I expect much better model performance, which will allow me to make better judgements on handling the predictor variables. I hope this sheds some light on my handling of candidate predictors.
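A minimal sketch of the evaluation described in this comment, assuming the fitted model from the earlier sketch. Cohen's kappa and impurity-based importances are standard scikit-learn calls; the predictor names are placeholders, not the study's actual variable list.

```python
# Sketch of the evaluation step: Cohen's kappa for agreement with the
# Sentinel-1-derived labels, plus per-variable importance. "rf", "X_test",
# and "y_test" continue from the fitting sketch above.
from sklearn.metrics import cohen_kappa_score

y_pred = rf.predict(X_test)
print(f"Cohen's kappa: {cohen_kappa_score(y_test, y_pred):.3f}")

# Impurity-based importances come for free with a fitted Random Forest.
# Placeholder names; the real layers include land cover, elevation,
# social vulnerability, and population, among others.
predictor_names = ["precipitation"] + [f"static_var_{i + 1}" for i in range(12)]
ranked = sorted(
    zip(predictor_names, rf.feature_importances_),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, importance in ranked:
    print(f"{name}: {importance:.3f}")
```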

  1. Good work, Emine! I am interested in seeing how we could use your approach down here in Florida. In addition to your flood extent classification, are you considering extending your work to flood depth maps?

    1. Thanks, Dr. Arias! I’d be interested to see how my model performs in different landscapes, such as Florida’s complex hydrological landscape! Additionally, the future applications of this model are 1.) an analysis of model predictor variable performance; 2.) the calculation of floodwater contact time (i.e. flood duration), which will be used for an analysis of water quality dynamics in floodwaters; and 3.) a methodology for calculating flood risk. I currently don’t have an active plan for developing flood depth maps, but that’s not to say it won’t happen!
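A hedged sketch of the floodwater contact time idea mentioned above: given a stack of daily binary flood maps produced by the model, count flooded days per pixel. The array name, file, and shape are assumptions for illustration, not the planned methodology.

```python
# Contact time from a hypothetical (T, H, W) stack of daily 0/1 flood maps.
import numpy as np

daily_floods = np.load("daily_flood_maps.npy")  # hypothetical file

# Total flood duration in days at each pixel over the event window.
contact_time_days = daily_floods.sum(axis=0)

# Longest consecutive run of flooded days per pixel, if continuity matters.
run = np.zeros(daily_floods.shape[1:], dtype=int)
longest = np.zeros_like(run)
for t in range(daily_floods.shape[0]):
    run = np.where(daily_floods[t] == 1, run + 1, 0)
    longest = np.maximum(longest, run)
```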
