Liu, Luolin; Chen, Mulin; Xu, Mingliang; Li, Xuelong
Long-wave infrared (thermal) images distinguish targets from the background by differences in thermal radiation. They are insensitive to lighting conditions but cannot capture the details carried by reflected light. By contrast, visible images offer high spatial resolution and rich texture details, yet they are easily degraded by occlusion and poor lighting. Combining the advantages of the two sources can yield a fused image with clear targets and high resolution, satisfying the requirements of all-weather and all-day/night conditions. Most existing methods cannot fully capture the underlying characteristics of infrared and visible images, and they ignore the complementary information between the sources. In this paper, we propose an end-to-end model (TSFNet) for infrared and visible image fusion that processes both sources simultaneously. In addition, it adopts an adaptive weight allocation strategy to capture informative global features. Experiments on public datasets demonstrate that the proposed fusion method achieves state-of-the-art performance in both global visual quality and quantitative comparison.
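To make the general idea concrete, below is a minimal PyTorch sketch of two-stream fusion with adaptive per-pixel weights. It is not the published TSFNet architecture; the class name, layer sizes, and weight-prediction head are all hypothetical, illustrating only the abstract's core notion of encoding the two sources in parallel and blending them with learned, content-dependent weights.

import torch
import torch.nn as nn

class TwoStreamFusionSketch(nn.Module):
    """Illustrative two-stream fusion with adaptive per-pixel weights.

    This is NOT the published TSFNet; it is a hypothetical minimal
    example of the idea the abstract describes: encode the infrared
    and visible inputs in parallel streams, predict a weight map,
    and blend the streams adaptively.
    """

    def __init__(self, channels: int = 16):
        super().__init__()
        # One small encoder per source (hypothetical layer sizes).
        self.ir_encoder = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.vis_encoder = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True))
        # Predict two weight maps (one per stream) from the joint features.
        self.weight_head = nn.Conv2d(2 * channels, 2, 3, padding=1)
        # Decode the blended features back into a single fused image.
        self.decoder = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, ir: torch.Tensor, vis: torch.Tensor) -> torch.Tensor:
        f_ir = self.ir_encoder(ir)
        f_vis = self.vis_encoder(vis)
        # Softmax over the stream dimension yields per-pixel weights that
        # sum to 1, so the blend adapts to the local image content.
        w = torch.softmax(
            self.weight_head(torch.cat([f_ir, f_vis], dim=1)), dim=1)
        fused = w[:, 0:1] * f_ir + w[:, 1:2] * f_vis
        return self.decoder(fused)

# Usage: fuse a pair of single-channel 256x256 images.
model = TwoStreamFusionSketch()
ir = torch.rand(1, 1, 256, 256)
vis = torch.rand(1, 1, 256, 256)
fused = model(ir, vis)
print(fused.shape)  # torch.Size([1, 1, 256, 256])

Because the weights are predicted from the concatenated features of both streams, each spatial location can favor the source that is locally more informative (e.g., the thermal stream for a warm target, the visible stream for fine texture), which is the intuition behind adaptive weight allocation in fusion networks.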
The result was published in Neurocomputing. DOI: 10.1016/j.neucom.2021.05.034