To address the problems of unclear edges and missing details in infrared and visible image fusion, a saliency-enhanced dual-discriminator generative adversarial fusion method is proposed. First,
infrared and visible images are decomposed using anisotropic diffusion, and the visible images are enhanced with local adaptive enhancement. Then,
visual saliency detection is used to enhance the decomposed detail-layer and base-layer images. Next,
a densely connected DenseNet-based generator is designed to learn features from the saliency-enhanced images. Finally,
the fusion result is obtained through an adversarial game between the generator and the dual discriminators. Experimental results on a public dataset show that, compared with ten existing fusion methods, the proposed approach preserves more precise information and performs better in both subjective and objective evaluations. Compared with the FusionGAN algorithm,
the proposed method achieves improved objective evaluation metrics such as information entropy.
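The decomposition step described above can be sketched with a standard Perona-Malik anisotropic diffusion filter, which smooths homogeneous regions while preserving strong edges; subtracting the diffused result from the input yields the detail layer. This is a minimal illustration only: the function names, parameter values (`kappa`, `gamma`, iteration count), boundary handling, and the synthetic test image are assumptions for the sketch, not the paper's actual settings, and the enhancement and GAN stages are not shown.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=15, kappa=0.1, gamma=0.15):
    """Perona-Malik anisotropic diffusion (illustrative parameters).
    Smooths flat regions while strong edges diffuse little; the result
    serves as the base layer."""
    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # Finite differences to the four neighbours (periodic boundaries
        # via np.roll keep the sketch short).
        d_n = np.roll(u, 1, axis=0) - u
        d_s = np.roll(u, -1, axis=0) - u
        d_e = np.roll(u, -1, axis=1) - u
        d_w = np.roll(u, 1, axis=1) - u
        # Edge-stopping conduction coefficients: near zero where the
        # local gradient is large, so edges are preserved.
        c_n = np.exp(-(d_n / kappa) ** 2)
        c_s = np.exp(-(d_s / kappa) ** 2)
        c_e = np.exp(-(d_e / kappa) ** 2)
        c_w = np.exp(-(d_w / kappa) ** 2)
        # Explicit update; gamma <= 0.25 keeps the scheme stable.
        u += gamma * (c_n * d_n + c_s * d_s + c_e * d_e + c_w * d_w)
    return u

def decompose(img):
    """Split an image into a base layer (large-scale structure) and a
    detail layer (fine textures and edges)."""
    base = anisotropic_diffusion(img)
    detail = img - base
    return base, detail

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic test image: a bright square (an "infrared target")
    # plus sensor-like noise.
    img = np.zeros((64, 64))
    img[20:44, 20:44] = 1.0
    img += 0.05 * rng.standard_normal(img.shape)
    base, detail = decompose(img)
    # The two layers reconstruct the input exactly.
    assert np.allclose(base + detail, img)
```

By construction the two layers sum back to the input, so saliency weights applied to each layer before fusion cannot lose information that was present in the source image.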