Selected: JSR, 2024
Bleached coral reefs, a result of environmental stress, signal a concerning decline in marine ecosystem health. Coral
bleaching is a deadly process that reduces coral populations, causing worldwide environmental issues such as the loss of wildlife habitats. These detrimental after-effects can be alleviated if the health status of corals is detected early, when bleaching first begins. Existing coral bleaching detectors mostly rely on manual imaging and classification, which are time-consuming and susceptible to human error. Therefore, deep learning techniques were
employed to extract patterns and discern the health status of coral reefs from underwater images. We hypothesize
that the accuracy and effectiveness of deep learning models in identifying coral bleaching events from underwater
imagery are influenced by the underlying architectural design, with models leveraging deeper networks like VGG16
outperforming lighter models such as MobileNetV2 and ResNet50 in terms of recall and overall accuracy. A dataset
with diverse underwater images of coral reefs was compiled. It consisted of 923 total images: 485 (53%) depicted bleached coral, while 438 (47%) depicted healthy coral. We further evaluated the
efficacy of different convolutional neural network models, including popular architectures like MobileNetV2,
ResNet50, and VGG16. Through several experiments, VGG16 was found to be the most effective at accurately classifying coral health status, achieving an accuracy of 89.02%, the highest among the tested models.
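The model comparison above rests on two metrics, overall accuracy and recall for the bleached class. As a minimal sketch (not the paper's code; the function names and toy labels are our own, for illustration only), these can be computed from true and predicted labels as follows:

```python
# Illustrative metric computation for binary coral-health classification.
# Convention assumed here: 1 = bleached (positive class), 0 = healthy.

def accuracy(y_true, y_pred):
    """Fraction of images classified correctly."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

def recall(y_true, y_pred, positive=1):
    """Fraction of truly bleached images the model flags as bleached.
    High recall matters here: missing a bleaching event is costlier
    than a false alarm on a healthy reef."""
    true_pos = sum(t == positive and p == positive
                   for t, p in zip(y_true, y_pred))
    actual_pos = sum(t == positive for t in y_true)
    return true_pos / actual_pos

# Toy example: 4 bleached (1) and 4 healthy (0) test images.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 0]
print(accuracy(y_true, y_pred))  # 0.75
print(recall(y_true, y_pred))    # 0.75
```

In practice these values would be computed over the held-out portion of the 923-image dataset for each architecture (MobileNetV2, ResNet50, VGG16) to produce the comparison reported above.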