Tallam et al. 2023. Application of Deep Learning for Classification of Intertidal Eelgrass from Drone-Acquired Imagery

Tallam, Krti, Nam Nguyen, Jonathan Ventura, Andrew Fricker, Sadie Calhoun, Jennifer O’Leary, Mauriça Fitzgibbons, Ian Robbins, and Ryan K. Walter. 2023. “Application of Deep Learning for Classification of Intertidal Eelgrass from Drone-Acquired Imagery.” Remote Sensing 15(9):2321. doi: 10.3390/rs15092321.

Abstract
Shallow estuarine habitats are globally undergoing rapid changes due to climate change and anthropogenic influences, resulting in spatiotemporal shifts in distribution and habitat extent. Yet, scientists and managers do not always have rapidly available data to track habitat changes in real-time. In this study, we apply a novel and state-of-the-art image segmentation machine learning technique (DeepLab) to two years of high-resolution drone-based imagery of a marine flowering plant species (eelgrass, a temperate seagrass). We apply the model to eelgrass (Zostera marina) meadows in the Morro Bay estuary, California, an estuary that has undergone large eelgrass declines and the subsequent recovery of seagrass meadows in the last decade. The model accurately classified eelgrass across a range of conditions and sizes, from meadow-scale beds to small patches less than a meter in size. The model recall, precision, and F1 scores were 0.954, 0.723, and 0.809, respectively, when using human-annotated training data and random assessment points. All our accuracy values were comparable to or greater than those of other models for similar seagrass systems. This study demonstrates the potential for advanced image segmentation machine learning methods to accurately support the active monitoring and analysis of seagrass dynamics from drone-based images. This framework is likely applicable to similar marine ecosystems globally and can provide quantitative, accurate data for long-term management strategies that seek to protect these vital ecosystems.
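
The abstract describes applying a DeepLab-family segmentation model to drone tiles and scoring it with recall, precision, and F1. As a minimal sketch only (not the authors' code or training setup), the snippet below runs a binary eelgrass-vs-background segmentation with torchvision's DeepLabV3 and computes pixel-wise metrics against an annotation mask; the ResNet-50 backbone, tile size, and placeholder inputs are assumptions for illustration.

```python
# Minimal sketch (assumed setup, not the paper's pipeline): binary eelgrass
# segmentation with a DeepLab-family model and pixel-wise precision/recall/F1.
import numpy as np
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

def segment_tile(model: torch.nn.Module, tile: np.ndarray) -> np.ndarray:
    """Run the segmentation model on one RGB tile (H, W, 3) with values in [0, 1]."""
    x = torch.from_numpy(tile).float().permute(2, 0, 1).unsqueeze(0)  # -> (1, 3, H, W)
    with torch.no_grad():
        logits = model(x)["out"]                      # (1, 2, H, W): background vs. eelgrass
    return logits.argmax(dim=1).squeeze(0).numpy()    # (H, W) predicted class map

def pixel_metrics(pred: np.ndarray, truth: np.ndarray) -> tuple[float, float, float]:
    """Precision, recall, and F1 for the eelgrass class (label 1)."""
    tp = np.sum((pred == 1) & (truth == 1))
    fp = np.sum((pred == 1) & (truth == 0))
    fn = np.sum((pred == 0) & (truth == 1))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Two classes (background, eelgrass); real weights would come from training on
# human-annotated drone tiles, which is outside the scope of this sketch.
model = deeplabv3_resnet50(num_classes=2).eval()
tile = np.random.rand(512, 512, 3)             # placeholder for a drone image tile
truth = np.zeros((512, 512), dtype=np.int64)   # placeholder human annotation mask
pred = segment_tile(model, tile)
print(pixel_metrics(pred, truth))
```

In practice, the metrics would be aggregated over many tiles or assessment points rather than a single image, which is why a paper-level F1 need not equal the value computed from overall precision and recall alone.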


Figure 5. Model success cases. Human eelgrass annotations compared to machine-annotated classifications in the three areas of the estuary shown in Figure 1. Columns (a,b) show the same area of the estuary with human and model annotations, respectively, while column (c) shows the two annotations overlapped and zoomed in to show differences more clearly. (1) The model captures larger beds and delineates their perimeter on par with human annotations. (2) Smaller, patchy beds, where the machine annotations outline beds more precisely than the human annotations. (3) The model picks up smaller beds that are missed or deemed too small to annotate in the human annotations.