Enhancing Multimodal Medical Image Fusion Using a Markov Discriminator in Generative Adversarial Networks
DOI: https://doi.org/10.54691/zs78vp41

Keywords: Generative adversarial networks, Multimodal images, Markov discriminator

Abstract
Multimodal medical images, comprising anatomical and functional modalities, offer complementary insights into organ structure and metabolism. Anatomical images depict internal organ structures, whereas functional images reveal metabolic activity but lack detailed structural information. Multimodal image fusion integrates data from different sensors into a single image enriched with diverse semantic content, overcoming the limitations of single-modality imaging. Existing fusion methods based on generative adversarial networks (GANs) use discriminators that convolve the entire input image, which can reduce efficiency and cause loss of detail. To address this, we propose a GAN framework with a Markov discriminator that exploits the local (Markov) properties of images: instead of producing a single real/fake decision for the whole image, the discriminator judges each local patch independently. By redesigning the discriminator and formulating the loss function according to Markov correlation principles, our method focuses on local regions, enhancing network performance and preserving finer detail in the fused images. Experimental results demonstrate that our approach yields fused images with significantly improved detail retention and outperforms conventional methods.
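To make the core idea concrete, below is a minimal sketch of a Markov (PatchGAN-style) discriminator in PyTorch. It is an illustration of the general technique the abstract describes, not the authors' exact architecture: the layer widths, kernel sizes, and channel counts are assumptions chosen for clarity. The defining property is that the final convolution outputs a grid of logits, each scoring one local receptive field (patch) of the input rather than the whole image.

```python
# Minimal PatchGAN-style ("Markov") discriminator sketch in PyTorch.
# Hypothetical architecture for illustration only; layer widths and
# kernel sizes are assumptions, not taken from the paper.
import torch
import torch.nn as nn


class MarkovDiscriminator(nn.Module):
    def __init__(self, in_channels: int = 1, base_width: int = 64):
        super().__init__()

        def block(c_in: int, c_out: int, stride: int) -> nn.Sequential:
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, kernel_size=4, stride=stride, padding=1),
                nn.BatchNorm2d(c_out),
                nn.LeakyReLU(0.2, inplace=True),
            )

        self.features = nn.Sequential(
            # First layer conventionally has no normalization.
            nn.Conv2d(in_channels, base_width, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            block(base_width, base_width * 2, stride=2),
            block(base_width * 2, base_width * 4, stride=2),
            # Final 1-channel conv: each output pixel scores one local patch.
            nn.Conv2d(base_width * 4, 1, kernel_size=4, stride=1, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Returns an N x N map of patch-level real/fake logits,
        # not a single scalar for the whole image.
        return self.features(x)


if __name__ == "__main__":
    d = MarkovDiscriminator(in_channels=1)
    fused = torch.randn(2, 1, 256, 256)  # e.g. a batch of fused image slices
    patch_logits = d(fused)
    print(patch_logits.shape)  # torch.Size([2, 1, 31, 31]): one logit per patch
```

During training, the adversarial loss would then be computed per patch, e.g. a binary cross-entropy averaged over all positions of the logit map, so that each local region of the fused image is pushed toward realism independently. The paper's actual loss, formulated from Markov correlation principles, is not reproduced here.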
License
Copyright (c) 2025 Frontiers in Science and Engineering

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.