I am passionate about the natural environment, machine learning and remote sensing; my principal focus is Deep Learning applied to Remote Sensing for flood mapping and other environmental and societal challenges.
My background in Environmental and Computational Engineering and Sciences from the Swiss Federal Institute of Technology Lausanne gave me an education that allowed me to marry modelling with environmental problems and their applications.
My PhD in theoretical ecology at the Laboratory of Ecohydrology, led by Prof. A. Rinaldo, gave me a deep understanding of mathematical modelling applied to process-based models, while furthering my knowledge of probability, statistics, and machine learning.
Currently working in the Social [Pixel] Lab led by Prof. E. Tellman Sullivan at the University of Arizona, I am expanding my understanding of Deep Learning methods applied to floods and other challenging problems linked to the environment and its impact on society.
In my spare time, I co-founded the company Early Coffee Games with a good friend of mine. We have been active in the game development scene for about 10 years, and are currently developing our first commercial game, Hermit: an Underwater Tale. The game has received several grants for its development and promotion.
In my free time, I enjoy mountaineering, skiing (when I'm not living in the desert), road and mountain biking, board and video games, movies, music (especially music festivals, I used to organise one), and last but not least, cheese, coffee and beer.
The common theme throughout my work has been the mathematical modelling of environmental challenges: currently flood mapping with deep learning, previously process-based models of population dynamics.
In both my current and past work, optimisation and inference have played a major role.
Presented here are selected topics from my current and past work (work in progress).
My current work focuses on developing novel Deep Learning solutions applied to Remote Sensing products in order to address environmental and societal challenges, mainly, but not exclusively, flood mapping.
Presented here is my main work on flood mapping.
Historical Mapping of Bangladesh Floods - MODIS / Sentinel-1 DL Fusion
Economic impacts of floods push people into poverty and cause setbacks to development as government budgets are stretched and people without financial protection are forced to sell assets.
Accurate return period estimates of flood events are paramount to develop robust insurance products.
Here we apply a deep learning approach combining Long Short-Term Memory (LSTM) and Convolutional Neural Networks (CNNs).
By combining historical MODIS satellite images with modern Sentinel-1 images, we produce reliable return period estimates.
Schematic representation of the proposed framework. Based on a Sentinel-1 derived flood map, aggregated at 500 meter resolution, we train a fusion model which infers the fraction of flooded area in each pixel based on MODIS features.
The proposed framework takes advantage of the ability of radar to "see" water even through clouds, one of the major challenges of flood mapping.
Details of the fusion model: a series of Convolutional Neural Networks (CNNs) is applied to MODIS 8-day composite images, passed through a Long Short-Term Memory (LSTM) network, and later merged with the MODIS image at time t, to finally go through a last CNN.
By creating a model able to fuse MODIS 8-day composite images with the Sentinel-1-derived fraction of flooded area, we infer a 20-year historical time series over Bangladesh based on the 20-year MODIS time series.
The data to train, test, and validate the model comprises every 32×32-pixel, 500 m resolution chip where the MODIS 8-day composites and Sentinel-1 fully overlap. Sentinel-1 has been available since 2017.
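The aggregation of a Sentinel-1-derived binary flood map into 500 m fractional values mentioned in the schematic above amounts to a block average; a minimal numpy sketch (the function name and sizes are illustrative):

```python
import numpy as np

def flood_fraction(mask: np.ndarray, factor: int) -> np.ndarray:
    """Aggregate a binary flood mask (1 = water) into coarser pixels,
    returning the fraction of flooded area per coarse pixel."""
    h, w = mask.shape
    assert h % factor == 0 and w % factor == 0, "mask must tile evenly"
    return (mask
            .reshape(h // factor, factor, w // factor, factor)
            .mean(axis=(1, 3)))

# Toy example: a 4x4 binary mask aggregated by a factor of 2.
mask = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 0],
                 [0, 0, 1, 1],
                 [0, 0, 1, 1]])
frac = flood_fraction(mask, 2)
# frac[0, 0] == 0.75 (3 of 4 fine pixels wet), frac[1, 1] == 1.0
```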
The proposed model takes advantage of both the spatial and temporal information contained in MODIS images.
For each time step, the n previous time steps are considered to make a prediction on the fraction of flooded area, leveraging the temporal dynamics of floods.
Each image first goes through a Convolutional Neural Network (CNN), which extracts the spatial information and compiles the multi-band information into a single value per pixel.
This generates a series of values for each pixel, which is then passed through a Long Short-Term Memory (LSTM) network that, through a series of gates, predicts the probability that a flood is present.
Finally, the output of the LSTM is combined with the MODIS composite image at time t and passed through a final CNN, generating a prediction for the fraction of flooded area for each pixel in the image.
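The three stages described above (per-time-step CNN, per-pixel LSTM, final merge CNN) could be sketched as follows in PyTorch; all layer sizes, band counts and activation choices are illustrative assumptions, not the published architecture:

```python
import torch
import torch.nn as nn

class FusionModel(nn.Module):
    """Sketch of the CNN -> LSTM -> CNN fusion pipeline."""
    def __init__(self, n_bands: int = 7, hidden: int = 16):
        super().__init__()
        # Per-time-step CNN: compiles the multi-band information
        # into a single value per pixel.
        self.spatial = nn.Sequential(
            nn.Conv2d(n_bands, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, kernel_size=3, padding=1),
        )
        # LSTM over each pixel's time series of CNN outputs.
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        # Final CNN: merges the LSTM output with the image at time t and
        # predicts the fraction of flooded area per pixel (sigmoid -> [0, 1]).
        self.head = nn.Sequential(
            nn.Conv2d(hidden + n_bands, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, kernel_size=3, padding=1), nn.Sigmoid(),
        )

    def forward(self, seq: torch.Tensor) -> torch.Tensor:
        # seq: (batch, time, bands, H, W); the last step is time t.
        b, t, c, h, w = seq.shape
        x = self.spatial(seq.reshape(b * t, c, h, w))         # (b*t, 1, H, W)
        x = x.reshape(b, t, h, w).permute(0, 2, 3, 1)         # (b, H, W, t)
        x = x.reshape(b * h * w, t, 1)                        # per-pixel series
        _, (h_n, _) = self.lstm(x)                            # (1, b*H*W, hidden)
        x = h_n[-1].reshape(b, h, w, -1).permute(0, 3, 1, 2)  # (b, hidden, H, W)
        x = torch.cat([x, seq[:, -1]], dim=1)                 # merge with time t
        return self.head(x).squeeze(1)                        # (b, H, W)

model = FusionModel()
# Two 32x32 chips, 5 time steps of 7 hypothetical MODIS bands.
pred = model(torch.randn(2, 5, 7, 32, 32))
# pred.shape == (2, 32, 32), values in (0, 1)
```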
Observed and inferred fraction of flooded area, the result of the deep learning fusion between MODIS and Sentinel-1-derived data, along with the error (difference) and the MODIS False Color Composite (FCC), for a date selected from the testing set (May 5, 2020).
Error distribution between the observation (Sentinel-1 algorithm) and Model for the full testing set.
Observed and Inferred time series of fraction of flooded area for all of Bangladesh for the training and testing set. This model is used for the historical time series inference (see below).
For the testing, a single year (2018) is completely removed from the dataset.
The result of the developed fusion model is shown here for a single day (May 5, 2020) on all available chips for that day, and as a time series for all of Bangladesh over all dates where chips are present, as examples of inference on the testing dataset.
An R² of 0.66 shows good agreement between the observed and modelled data.
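As a reminder of how such agreement is quantified, the coefficient of determination can be computed as follows (the data below are synthetic, not the study's results):

```python
import numpy as np

def r_squared(obs: np.ndarray, pred: np.ndarray) -> float:
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Synthetic check: a perfect prediction yields R^2 = 1,
# while predicting the mean everywhere yields R^2 = 0.
obs = np.array([0.1, 0.4, 0.35, 0.8])
assert r_squared(obs, obs) == 1.0
```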
The time series shows that the low water values during the dry season are well modelled, and that the peaks during the monsoon and irrigation seasons are correctly reproduced.
Inferred historical time series of the fraction of flooded area from 2001 to 2021, produced with the fusion algorithm and based on the historical MODIS data.
These results permit us to run inference on the historical data in order to produce an estimate for 2001-2020.
It floods in the desert! The North American Monsoon brings intense storms and sometimes flash floods each summer. 2021 was the third wettest on record in Tucson (12.79 inches).
Understanding where and when these floods happen can be challenging, and requires large scale monitoring.
Can satellite images provide insights into the flood extents and water pathways? Given the revisiting periods of satellites and large cloud cover during and around monsoon flood events, using these images can be challenging.
Monsoon clouds gather over the cactus forest in the Saguaro National Park West, Wednesday, August 10, 2016. Kelly Presnell / Arizona Daily Star
This project is a collaboration with the Pima County Government, in which we are currently assessing the utility of satellite imagery to detect floods in the desert, in particular the advantage of high-frequency commercial PlanetScope data.
Google Earth Engine applet developed to visualise the daily precipitation (based on the North American Land Data Assimilation System (NLDAS)) over the different watersheds.
Flash floods in the desert during the monsoon season recess rapidly, and capturing them with satellite images appears difficult.
PlanetScope images, with their near-global, near-daily coverage, provide an interesting product for detecting the differences between consecutive images, as often only wet soil or sand remains as a witness of the flood itself.
In this context, I have developed a Google Earth Engine applet to visualise the daily precipitation over Pima County, in an effort to aid the selection of flood events to be modelled and to hint at their location in space (cf. figure).
Additionally, for another project, in collaboration with a PhD student, Alex Saunders, we developed a tool to map the available satellite images around a specific date. This is particularly useful here, as it allows us to select flood events for which satellite imagery has potential (cf. figure).
Satellite image availability around a specified date for a given RoI, automatically retrieved. Built on Google Earth Engine and PlanetScope API in collaboration with Alex Saunders.
I have contributed to hand labeling flood events and developed machine learning algorithms to classify the wet sand we observed.
One of the promising methods we developed is based on selecting two consecutive PlanetScope images, one on the morning of the event, and one on the day after.
FCC PlanetScope images before and after the event and initial results of the random forest algorithm along with naive NDWI thresholding. Figure Credit: Rohit Mukherjee
This permits us to identify changes in the landscape, and appears to be the most promising of the approaches we developed.
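A minimal sketch of the change-detection idea, using the naive NDWI thresholding mentioned in the figure caption; the band order (PlanetScope 4-band) and threshold value are assumptions:

```python
import numpy as np

def ndwi(green: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Normalized Difference Water Index: (green - nir) / (green + nir)."""
    return (green - nir) / (green + nir + 1e-9)

def flood_change(before: np.ndarray, after: np.ndarray,
                 threshold: float = 0.1) -> np.ndarray:
    """Flag pixels whose NDWI increased by more than `threshold` between
    the image on the morning of the event and the one from the day after.
    Each image is (bands, H, W) with green = band 1, nir = band 3."""
    delta = ndwi(after[1], after[3]) - ndwi(before[1], before[3])
    return delta > threshold

# Toy 4-band images (bands, H, W): the first pixel gets wetter, the second does not.
before = np.full((4, 1, 2), 0.3)
before[1], before[3] = 0.2, 0.4            # green, nir reflectances
after = before.copy()
after[1, 0, 0], after[3, 0, 0] = 0.4, 0.2  # green up, nir down -> water signal
changed = flood_change(before, after)
# changed == [[True, False]]
```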
This project proposes to create a high-resolution dataset of flood events labelled on PlanetScope imagery. The main goal of the project is to understand if publicly available data can be as effective in detecting floods as commercial data.
Locations of the different labelled flood events, with their corresponding publicly available datasets. Figure credit: Zhijie Zhang
One of the main outputs of this work will be a publicly available high-resolution flood dataset. The selected flood events are based on existing flood event datasets (xBD, Sen1Floods11, NASA IMPACT, FloodPlanet (in house)), in order to be able to compare with other lower-resolution datasets, while also labelling a few new events.
Example of chip selection for a flood event: the blue dots mark where a full chip (with no missing data) is available. The coloured chips represent a first sampling, done via Latin Hypercube Sampling, meant to accurately reproduce the distribution of the overall dataset. These chips are then clustered by spatial distribution via k-means (the different colours), and finally, within each cluster, the chip furthest apart from all other chips is selected. A total of 15-20 chips per event are retained for the hand-labelling process (depending on the quality of the chips).
Example of two PlanetScope chips (False Color Composite (FCC)) and the hand labelled data. Figure credit: Zhijie Zhang
In the context of this work, I have mainly been involved in writing a pipeline to download and process PlanetScope scenes into chips, from which we sample a representative subset which is then labelled (cf. figure above).
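The spatially stratified selection step described in the figure caption above could be sketched as follows; this simplified version clusters chip coordinates with k-means and picks, per cluster, the chip furthest from all others, while the Latin Hypercube pre-sampling on the value distribution is omitted:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def select_chips(coords: np.ndarray, k: int, seed: int = 0) -> np.ndarray:
    """Cluster chip centre coordinates with k-means, then pick, in each
    cluster, the chip with the largest summed distance to all other chips.
    Returns the indices of the selected chips."""
    _, labels = kmeans2(coords, k, seed=seed, minit='++')
    # Pairwise distances between all chip centres.
    dists = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    picks = []
    for c in range(k):
        members = np.flatnonzero(labels == c)
        if members.size == 0:
            continue
        picks.append(members[np.argmax(dists[members].sum(axis=1))])
    return np.array(picks)

# Example: chip coordinates forming two spatially separated groups.
rng = np.random.default_rng(0)
coords = np.vstack([rng.normal(0, 1, (10, 2)),
                    rng.normal(20, 1, (10, 2))])
picks = select_chips(coords, k=2)
```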
The events have been selected to coincide in time with available Harmonized Landsat Sentinel-2 (HLS) or Sentinel-1 data, such that deep learning models can be developed to compare the commercial versus public data sources.
For each event, the selected chips have been hand labelled with three classes: no water, low-confidence water, and high-confidence water (cf. figure).
The data processing and labelling of all events has been completed, and we are currently working on releasing the dataset along with baseline model results (deep learning both on public and commercial data).
Background image: Selection of chips of PlanetScope False Color Composite, overlaid with flood labels for a flood event in Bolivia on Feb. 15 2018.
Spatiotemporal dynamical modelling of ground beetle presence in mountains based on Earth Observation data
Mountain areas represent a challenging environment in which to efficiently monitor species over extended periods of time and across large areas.
To make informed decisions, park managers in protected areas may require knowledge about the health, presence, and dynamics of species in these areas.
To accomplish this, they may need information beyond the sampled data, extrapolating beyond the observed locations and integrating population dynamics for near-term forecasting of species presence.
In order to accurately model these species, it is of crucial importance to understand their drivers for presence and dynamics.
Schematic representation of the proposed framework: develop an applied metapopulation framework to reproduce the observed dynamics of mountain species in space and time, driven by Earth Observation and calibrated on in-situ observations.
The goal of this study is to develop an applied metapopulation framework to reproduce observed dynamics of mountain species in space and time, driven by Earth Observation and calibrated on in-situ observations.
A metapopulation is commonly described as a population of populations of a single species, i.e. a population comprised of multiple subpopulations, called local populations, which inhabit the same landscape but in distinct geographical locations. The metapopulation concept lifts the concept of a population, an ensemble of interacting individuals, to a higher level, where interacting subpopulations are considered.
The specific metapopulation model used here is called Spatially-Explicit Stochastic Patch Occupancy Model (SPOM).
Building on a grid with N cells, SPOM computes a possible distribution of occupied cells at every simulation time t by considering extinction and colonisation processes, whose rates depend on the species' properties and on the landscape features. A binary state variable w_i(t) is set to 1 when cell i is occupied and 0 when it is empty (i = 1, ..., N).
Starting from a given initial distribution of occupied cells, at each time step the model allows unoccupied cells to be colonised by surrounding occupied cells with a certain probability.
The cell then becomes occupied at time t + ∆t depending on a random sample from a Bernoulli distribution. Similarly, species in occupied cells can go extinct with a certain probability. SPOM works as a Markov chain where, for each cell, the probabilities of colonisation and extinction events are modelled depending on occupancy and local conditions.
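A minimal numpy simulation of such a Markov chain, with constant colonisation and extinction probabilities standing in for the niche-dependent rates used in the study:

```python
import numpy as np

def spom_step(w: np.ndarray, p_col: float, p_ext: float,
              rng: np.random.Generator) -> np.ndarray:
    """One SPOM time step on a grid of occupancy states w (0/1).
    Empty cells with at least one occupied 4-neighbour may be colonised
    with probability p_col; occupied cells go extinct with probability
    p_ext. Constant rates are a simplification: in the study the rates
    depend on the species' niche as characterised from EO data."""
    # Count occupied 4-neighbours (zero-padded at the borders).
    padded = np.pad(w, 1)
    neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
             padded[1:-1, :-2] + padded[1:-1, 2:])
    colonise = (w == 0) & (neigh > 0) & (rng.random(w.shape) < p_col)
    extinct = (w == 1) & (rng.random(w.shape) < p_ext)
    return np.where(colonise, 1, np.where(extinct, 0, w))

rng = np.random.default_rng(42)
w = np.zeros((20, 20), dtype=int)
w[10, 10] = 1                        # a single occupied cell to start
for _ in range(50):
    w = spom_step(w, p_col=0.4, p_ext=0.05, rng=rng)
```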
Static and dynamic Earth Observation data used in the study to characterise the niche of the species in each pixel.
In the context of this work, the colonisation and extinction probabilities are derived from Earth Observation data characterising the niche of the species.
The factors are combined in a non-linear logistic function to best describe the niche.
The different EO factors are selected to best describe the environmental conditions relevant for the modeled species.
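The logistic combination of EO factors could be sketched as follows; the weights and bias stand in for the parameters calibrated on the in-situ observations, and the factor values are illustrative:

```python
import numpy as np

def niche_probability(factors: np.ndarray, weights: np.ndarray,
                      bias: float) -> np.ndarray:
    """Combine (normalised) EO factors into a per-pixel suitability
    probability through a logistic function."""
    z = factors @ weights + bias
    return 1.0 / (1.0 + np.exp(-z))

# Two pixels x three EO factors (e.g. temperature, moisture, vegetation).
factors = np.array([[0.9, 0.8, 0.7],
                    [0.1, 0.2, 0.1]])
p = niche_probability(factors, weights=np.array([2.0, 1.5, 1.0]), bias=-2.0)
# The first pixel (favourable conditions) receives a higher probability.
```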
Game development started for me as a silly distraction and creative outlet in 2014, when Tristan Thévenoz approached me. Tristan is a classically educated artist, proficient in multiple forms of expression and passionate about the creative process. His love of video games led him to dabble in pixel art in 2013, after which he was looking for somebody to work with on the technical side. This is when he approached me; we soon became friends and started working on one silly project after another.
In our early days, we had the chance to be approached by an artist called François Burland, who was interested in hiring us to create video games for his art installations. Over the course of three years, under the umbrella of Sharped Stone Studios, we participated in two different art installations, each time trying to incorporate the essence of the vision of the artist we were working with:
SuperNova (2017): set in Martigny, a small town in the middle of the Swiss Alps, this exhibition focused on cultural landmarks in and around Martigny, reinterpreted by various artists. We developed two games, dedicated to the pizza places and roundabouts that are both very abundant in Martigny.
Atomic Bazar (2018-2019): set in Fribourg, this exhibition revolved around François' vision of the Cold War. We developed four games meant to encapsulate his vision and art.
Early Coffee Games
Early Coffee Games is a game development company based in Switzerland that Tristan and I created in 2020. The company is our step into semi-professional game development.
Since its creation, the company has seen a couple of successes:
In 2021, the museum Espace Jean-Tinguely Niki de Saint Phalle contracted us to create a unique art piece complementing their exhibition. We created Chromatic Racing, an interactive piece which invites the visitor to create a piece of art by playing the game. The piece is exhibited from 2019 to 2024.
Hermit: an Underwater Tale is currently the studio's main focus. The game is a fast-paced action game with arcade elements. You play as a hermit crab who faces waves of creeps and dangerous sea creatures.
In order to protect yourself and defeat your enemies, you will use a wide variety of empty shells as weapons. Each shell comes with a specific set of moves and attacks and will eventually break after use.
Without any shell, the hermit is fully exposed and any damage he receives will be lethal. The game features a simple core mechanic paired with a high and ever-increasing difficulty over multiple levels.
To beat the game, you will have to combine skill, real-time resource management, smart item purchases, upgrade strategies, and serious stamina.
Feel free to check it out and let us know what you think!