In response to the poor real-time performance and low detection accuracy of deep learning models deployed for aero-engine damage detection on embedded devices, this paper introduces FDG-YOLO, a lightweight aero-engine damage detection model. First,
FasterNet was introduced to restructure the backbone network of YOLOv5, addressing the backbone's large parameter count. Second,
depth-wise separable convolutions replaced the ordinary convolutions in the neck network of YOLOv5, eliminating superfluous parameters. To improve the model's expressive power and receptive field,
the original C3 structure was replaced with the GS C3 structure, which was constructed in parallel based on GSConv. Finally,
experiments were conducted and validated on an aero-engine damage dataset. The results show that, compared with the original model, FDG-YOLO reduces the parameter count by 52.5% and the giga floating-point operations (GFLOPs) by 66%. On embedded devices,
the mean average precision (mAP) reaches 89.6%, surpassing other lightweight models, and the frame rate reaches 61 frames per second, a detection speed that matches the engine damage image acquisition rate. FDG-YOLO thus better satisfies the requirements of intelligent aero-engine damage detection.
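To illustrate why depth-wise separable convolutions shrink the neck network, the sketch below compares weight counts of an ordinary k×k convolution against its depth-wise separable factorization. The channel sizes are illustrative assumptions, not values from the paper, and biases are omitted for simplicity.

```python
# Parameter-count comparison: ordinary vs. depth-wise separable convolution.
# An ordinary k x k conv with c_in inputs and c_out outputs uses
# c_in * c_out * k * k weights; the separable version splits this into a
# per-channel k x k depth-wise conv plus a 1x1 point-wise conv.

def conv_params(c_in: int, c_out: int, k: int) -> int:
    """Weights in an ordinary k x k convolution (bias omitted)."""
    return c_in * c_out * k * k

def dws_conv_params(c_in: int, c_out: int, k: int) -> int:
    """Weights in a depth-wise separable convolution (bias omitted)."""
    depthwise = c_in * k * k   # one k x k filter per input channel
    pointwise = c_in * c_out   # 1x1 conv mixing channels
    return depthwise + pointwise

if __name__ == "__main__":
    c_in, c_out, k = 128, 256, 3   # hypothetical channel sizes
    ordinary = conv_params(c_in, c_out, k)
    separable = dws_conv_params(c_in, c_out, k)
    print(f"ordinary:  {ordinary}")    # 294912
    print(f"separable: {separable}")   # 33920
    print(f"reduction: {1 - separable / ordinary:.1%}")  # 88.5%
```

For these sizes the separable form keeps roughly one ninth of the weights, which is the mechanism behind the parameter savings the abstract reports.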