LiTMNet: A deep CNN for efficient HDR image reconstruction from a single LDR image


Existing methods can generate a high dynamic range (HDR) image from a single low dynamic range (LDR) image using convolutional neural networks (CNNs). However, they are too cumbersome to run on mobile devices with limited computational resources. In this work, we design a lightweight CNN, LiTMNet, which takes a single LDR image as input and recovers the lost information in its saturated regions to reconstruct an HDR image. To avoid trading off reconstruction quality for efficiency, LiTMNet not only adopts a lightweight encoder for efficient feature extraction, but also contains newly designed upsampling blocks in the decoder that alleviate artifacts and further accelerate the reconstruction. The final HDR image is produced by nonlinearly blending the network prediction with the original LDR image. Qualitative and quantitative comparisons demonstrate that LiTMNet produces HDR images whose quality is comparable with the current state of the art, while running 38× faster as tested on a mobile device. Please refer to the supplementary video for additional visual results.
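The nonlinear blending step mentioned above can be illustrated with a minimal sketch. The exact mask and threshold used by LiTMNet are not given here, so the soft saturation mask, the threshold `tau`, and the gamma value below are assumptions for illustration only:

```python
import numpy as np

def blend_hdr(ldr, pred_hdr, tau=0.95, gamma=2.2):
    """Blend a linearized LDR image with a network's HDR prediction.

    ldr:      LDR image in [0, 1], shape (H, W, 3)
    pred_hdr: network prediction (linear HDR), same shape
    tau:      saturation threshold (assumed value, not from the paper)
    gamma:    display gamma (assumed 2.2)

    A soft mask alpha rises from 0 to 1 as pixels approach saturation,
    so well-exposed pixels keep the original LDR content and only
    saturated regions take the network prediction.
    """
    lin = ldr ** gamma                       # undo display gamma
    m = ldr.max(axis=-1, keepdims=True)      # per-pixel max channel
    alpha = np.clip((m - tau) / (1.0 - tau), 0.0, 1.0)
    return (1.0 - alpha) * lin + alpha * pred_hdr
```

With this mask, a pixel whose maximum channel stays below `tau` is reproduced from the LDR input unchanged, while a fully saturated pixel is taken entirely from the prediction.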

Code available soon


Wu, Guotao, et al. "LiTMNet: A deep CNN for efficient HDR image reconstruction from a single LDR image." Pattern Recognition (2022): 108620.

Framework of the LiTMNet scheme

Results & Comparisons:

Average PSNR, MS-SSIM and HDR-VDP-2.2 scores on the HDRCNN, FHDR and SingleHDR datasets, where the top three scores are colored in red, green and blue respectively. ∗ and † denote run times on a desktop computer and a mobile phone respectively. A perceptual-uniformity encoding is applied to both the prediction and the reference for the PSNR and MS-SSIM metrics. The five traditional algorithm implementations of Akyüz, Huo, Kovaleski, Kuo and Masia are from the HDR Toolbox, which is part of the book Advanced High Dynamic Range Imaging.
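The perceptual-uniformity encoding mentioned in the caption maps linear HDR values into a roughly perceptually uniform domain before computing pixel metrics. As a hedged sketch only, the snippet below uses a simple log2 luminance encoding as a stand-in for the actual PU encoding (which the caption does not specify); the clipping floor and peak definition are also assumptions:

```python
import numpy as np

def psnr_pu(pred, ref, peak=None):
    """PSNR computed in an encoded (approximately perceptually
    uniform) domain rather than on raw linear HDR values.

    A log2 encoding is used here purely for illustration; real PU
    encodings are fitted to contrast-detection data.
    """
    def encode(x):
        # Clip to a small positive floor (assumed) before the log.
        return np.log2(np.clip(x, 1e-5, None))

    p, r = encode(np.asarray(pred, float)), encode(np.asarray(ref, float))
    if peak is None:
        peak = r.max() - r.min()          # dynamic range of the encoded reference
    mse = np.mean((p - r) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

Computing PSNR in such an encoded domain weights errors in dark and bright regions more evenly than linear-domain PSNR, which is why HDR evaluations apply it to both prediction and reference.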
