2021-12-16
Deep Neural Networks for Image Super-Resolution in Optical Microscopy by Using Modified Hybrid Task Cascade U-Net
By Dawei Gong, Tengfei Ma, Julian Evans, and Sailing He
Progress In Electromagnetics Research, Vol. 171, 185-199, 2021
Abstract
Due to the optical diffraction limit, the resolution of a wide-field (WF) microscope cannot easily go below a few hundred nanometers. Super-resolution microscopy has the disadvantages of high cost, complex optical equipment, and demanding experimental conditions. Deep-learning-based super-resolution (DLSR), by contrast, is simple to operate and inexpensive, and has attracted much attention recently. Here we propose a novel DLSR model named Modified Hybrid Task Cascade U-Net (MHTCUN) for image super-resolution in optical microscopy, trained and evaluated on the public biological image dataset BioSR. The MHTCUN has three stages, and in each stage we introduce a novel module named the Feature Refinement Module (FRM) to extract deeper features. In each FRM, a U-Net refines the features, and a Fourier Channel Attention Block (FCAB) within the U-Net learns a high-level representation of the high-frequency information in different feature maps. Compared with six state-of-the-art DLSR models for single-image super-resolution (SISR), our MHTCUN achieves the highest peak signal-to-noise ratio (PSNR) of 26.87 and structural similarity (SSIM) of 0.746, demonstrating state-of-the-art performance in DLSR. Compared with DFCAN, a DLSR model designed specifically for image super-resolution in optical microscopy, MHTCUN yields a significant improvement in PSNR and a slight improvement in SSIM on BioSR. Finally, we fine-tune the trained MHTCUN on other biological images; MHTCUN also performs well on denoising, contrast enhancement, and resolution enhancement.
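The PSNR and SSIM figures quoted above are standard image-quality metrics. As a minimal sketch of how they are computed (not the paper's evaluation code): PSNR is derived from the mean squared error against a reference image, and SSIM compares local luminance, contrast, and structure. The version below computes a simplified global SSIM over the whole image; the standard metric averages the same expression over local sliding windows.

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB (higher is better)."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(ref, test, data_range=1.0):
    """Simplified single-window SSIM (the standard metric averages
    this expression over local sliding windows)."""
    x = ref.astype(np.float64)
    y = test.astype(np.float64)
    c1 = (0.01 * data_range) ** 2  # stabilizing constants from Wang et al. [31]
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

For example, two constant images differing by 0.1 on a unit data range have MSE 0.01 and hence a PSNR of exactly 20 dB, while any image compared with itself gives an SSIM of 1.0.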
Citation
Dawei Gong, Tengfei Ma, Julian Evans, and Sailing He, "Deep Neural Networks for Image Super-Resolution in Optical Microscopy by Using Modified Hybrid Task Cascade U-Net," Progress In Electromagnetics Research, Vol. 171, 185-199, 2021.
doi:10.2528/PIER21110904
References

1. Zhao, Z.-Q., P. Zheng, S.-T. Xu, and X. Wu, "Object detection with deep learning: A review," IEEE Transactions on Neural Networks and Learning Systems, Vol. 30, No. 11, 3212-3232, 2019.
doi:10.1109/TNNLS.2018.2876865

2. Trajanovski, S., C. Shan, P. J. C. Weijtmans, S. G. B. de Koning, and T. J. M. Ruers, "Tongue tumor detection in hyperspectral images using deep learning semantic segmentation," IEEE Transactions on Biomedical Engineering, Vol. 68, No. 4, 1330-1340, 2020.
doi:10.1109/TBME.2020.3026683

3. Zhao, S., D. M. Zhang, and H. W. Huang, "Deep learning-based image instance segmentation for moisture marks of shield tunnel lining," Tunnelling and Underground Space Technology, Vol. 95, 103156, 2020.
doi:10.1016/j.tust.2019.103156

4. Yang, W., X. Zhang, Y. Tian, W. Wang, J.-H. Xue, and Q. Liao, "Deep learning for single image super-resolution: A brief review," IEEE Transactions on Multimedia, Vol. 21, No. 12, 3106-3121, 2019.
doi:10.1109/TMM.2019.2919431

5. Ledig, C., L. Theis, F. Huszar, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, et al. "Photo-realistic single image super-resolution using a generative adversarial network," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 4681-4690, 2017.

6. Kim, K. I. and Y. Kwon, "Single-image super-resolution using sparse regression and natural image prior," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 32, No. 6, 1127-1133, 2010.
doi:10.1109/TPAMI.2010.25

7. Kirkland, E. J., "Bilinear interpolation," Advanced Computing in Electron Microscopy, 261-263, Springer, 2010.
doi:10.1007/978-1-4419-6533-2_12

8. Liu, T., K. De Haan, Y. Rivenson, Z. Wei, X. Zeng, Y. Zhang, and A. Ozcan, "Deep learning-based super-resolution in coherent imaging systems," Scientific Reports, Vol. 9, No. 1, 1-13, 2019.

9. Dong, C., C. C. Loy, K. He, and X. Tang, "Image super-resolution using deep convolutional networks," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 38, No. 2, 295-307, 2015.
doi:10.1109/TPAMI.2015.2439281

10. He, K., X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 770-778, 2016.

11. Lim, B., S. Son, H. Kim, S. Nah, and K. M. Lee, "Enhanced deep residual networks for single image super-resolution," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 136-144, 2017.

12. Zhang, Y., K. Li, K. Li, L. Wang, B. Zhong, and Y. Fu, "Image super-resolution using very deep residual channel attention networks," Proceedings of the European Conference on Computer Vision (ECCV), 286-301, 2018.

13. Huang, G., Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, "Densely connected convolutional networks," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 4700-4708, 2017.

14. Zhang, Y., Y. Tian, Y. Kong, B. Zhong, and Y. Fu, "Residual dense network for image super-resolution," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2472-2481, 2018.

15. Li, Z., J. Yang, Z. Liu, X. Yang, G. Jeon, and W. Wu, "Feedback network for image super-resolution," Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3867-3876, 2019.

16. Ronneberger, O., P. Fischer, and T. Brox, "U-Net: Convolutional networks for biomedical image segmentation," International Conference on Medical Image Computing and Computer-assisted Intervention, 234-241, Springer, 2015.

17. Chen, K., J. Pang, J. Wang, Y. Xiong, X. Li, S. Sun, W. Feng, Z. Liu, J. Shi, W. Ouyang, et al. "Hybrid task cascade for instance segmentation," Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4974-4983, 2019.

18. Cai, Z. and N. Vasconcelos, "Cascade R-CNN: Delving into high quality object detection," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 6154-6162, 2018.

19. Hell, S. W. and J. Wichmann, "Breaking the diffraction resolution limit by stimulated emission: Stimulated-emission-depletion fluorescence microscopy," Optics Letters, Vol. 19, No. 11, 780-782, 1994.

20. Hess, S. T., T. P. K. Girirajan, and M. D. Mason, "Ultra-high resolution imaging by fluorescence photoactivation localization microscopy," Biophysical Journal, Vol. 91, No. 11, 4258-4272, 2006.

21. Rust, M. J., M. Bates, and X. Zhuang, "Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM)," Nature Methods, Vol. 3, No. 10, 793-796, 2006.

22. Gustafsson, M. G. L., "Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy," Journal of Microscopy, Vol. 198, No. 2, 82-87, 2000.

23. Weigert, M., U. Schmidt, T. Boothe, A. Muller, A. Dibrov, A. Jain, B. Wilhelm, D. Schmidt, C. Broaddus, S. Culley, et al. "Content-aware image restoration: Pushing the limits of fluorescence microscopy," Nature Methods, Vol. 15, No. 12, 1090-1097, 2018.

24. Wang, H., Y. Rivenson, Y. Jin, Z. Wei, R. Gao, H. Gunaydin, L. A. Bentolila, C. Kural, and A. Ozcan, "Deep learning enables cross-modality superresolution in fluorescence microscopy," Nature Methods, Vol. 16, No. 1, 103-110, 2019.

25. Qiao, C., D. Li, Y. Guo, C. Liu, T. Jiang, Q. Dai, and D. Li, "Evaluation and development of deep neural networks for image super-resolution in optical microscopy," Nature Methods, Vol. 18, No. 2, 194-202, 2021.

26. Shi, W., J. Caballero, F. Huszar, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang, "Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1874-1883, 2016.

27. Howard, A. G., M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam, "MobileNets: Efficient convolutional neural networks for mobile vision applications," arXiv preprint arXiv:1704.04861, 2017.

28. Ramachandran, P., B. Zoph, and Q. V. Le, "Searching for activation functions," arXiv preprint arXiv:1710.05941, 2017.

29. Lin, T.-Y., P. Dollar, R. Girshick, K. He, B. Hariharan, and S. Belongie, "Feature pyramid networks for object detection," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2117-2125, 2017.

30. Allen, D. M., "Mean square error of prediction as a criterion for selecting variables," Technometrics, Vol. 13, No. 3, 469-475, 1971.

31. Wang, Z., A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: From error visibility to structural similarity," IEEE Transactions on Image Processing, Vol. 13, No. 4, 600-612, 2004.

32. Szegedy, C., W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, "Going deeper with convolutions," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1-9, 2015.

33. Descloux, A., K. S. Grußmayer, and A. Radenovic, "Parameter-free image resolution estimation based on decorrelation analysis," Nature Methods, Vol. 16, No. 9, 918-924, 2019.

34. Abramoff, M. D., P. J. Magalhaes, and S. J. Ram, "Image processing with ImageJ," Biophotonics International, Vol. 11, No. 7, 36-42, 2004.