Page 93 - ITU Journal, ICT Discoveries, Volume 3, No. 1, June 2020 Special issue: The future of video and immersive media
of compressed video sequences employed in this study. Moreover, as noted in Sec. 1.1, the per-block sensitivity weight of (5) or of (9) can be used to easily adapt the quantization parameter (QP) in traditional coders to the instantaneous input characteristics, without having to rely on [13]–[15], [17]–[19], [39]. Specifically,

QP_b = QP_f − round(3 ∙ log₂(w_b))    (17)

can be used to XPSNR-optimize the quantization step size inside an HEVC or VVC encoder on a coding block basis, initialized using a per-frame constant QP_f, where w_b denotes the per-block sensitivity weight. Aside from further statistical evaluation using more datasets, this beneficial aspect, along with the incorporation of chroma-component and/or high dynamic range (HDR) statistics (see, e. g., [43]) and multi-threaded operation, will be the focus of future work on this VQA algorithm.
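As a minimal illustration, the per-block QP update of (17) could be sketched as below. The function name is hypothetical, the sensitivity weight w_b is assumed to be given (computed per (5) or (9) of this paper), and clipping of the result to the coder's valid QP range is omitted for brevity.

```python
import math

def adapt_block_qp(qp_frame, block_weight):
    """Sketch of the per-block QP adaptation of Eq. (17):
    subtract round(3 * log2(w_b)) from the per-frame constant QP,
    so blocks with higher visual sensitivity receive a lower QP
    (finer quantization). block_weight (w_b) must be positive."""
    return qp_frame - round(3.0 * math.log2(block_weight))
```

For example, a block with weight w_b = 2 has its QP lowered by 3, which in HEVC/VVC roughly halves the quantization step size; a weight of 1 leaves the frame QP unchanged.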
9. ACKNOWLEDGMENT

The authors thank Pierrick Philippe (formerly B-Com) for helping to calculate the VQA values on the VTM and HM coded videos of the comparative test published in [10], [11] and Sören Becker for assistance in the collection of the correlation values and the creation of Fig. 4.

REFERENCES

[1] Y. Chen, K. Wu, and Q. Zhang, “From QoS to QoE: A Tutorial on Video Quality Assessment,” IEEE Comm. Surveys & Tutor., vol. 17, no. 2, pp. 1126–1165, 2015.

[2] B. Girod, “What’s Wrong With Mean-squared Error?” in Digital Images and Human Vision, A. B. Watson, Ed., Cambridge, MA, US: MIT Press, pp. 207–220, 1993.

[3] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image Quality Assessment: From Error Visibility to Structural Similarity,” IEEE Trans. Image Process., vol. 13, no. 4, pp. 600–612, Apr. 2004.

[4] Z. Wang, E. P. Simoncelli, and A. C. Bovik, “Multiscale Structural Similarity for Image Quality Assessment,” in Proc. IEEE 37th Asilomar Conf. on Signals, Systems, and Computers, Pacific Grove, CA, US, Nov. 2003.

[5] Netflix Inc., “VMAF – Video Multimethod Assessment Fusion,” 2019, link: https://github.com/Netflix/vmaf, https://medium.com/netflix-techblog/toward-a-practical-perceptual-video-quality-metric-653f208b9652.

[6] K. Egiazarian, J. Astola, N. Ponomarenko, V. Lukin, F. Battisti, and M. Carli, “New full-reference quality metrics based on HVS,” in Proc. 2nd Int. Worksh. Vid. Process. & Quality Metrics, Scottsdale, AZ, US, Jan. 2006.

[7] N. Ponomarenko, F. Silvestri, K. Egiazarian, M. Carli, J. Astola, and V. Lukin, “On between-coefficient contrast masking of DCT basis functions,” in Proc. 3rd Int. Worksh. Vid. Process. & Quality Metrics, US, Jan. 2007.

[8] N. Ponomarenko, O. Eremeev, V. Lukin, K. Egiazarian, and M. Carli, “Modified image visual quality metrics for contrast change and mean shift accounting,” in Proc. CADSM, Polyana-Svalyava, pp. 305–311, 2011.

[9] P. Gupta, P. Srivastava, S. Bhardwaj, and V. Bhateja, “A modified PSNR metric based on HVS for quality assessment of color images,” in Proc. IEEE Int. Conf. on Commun. & Industr. Applic., Kolkata, IN, Dec. 2011.

[10] P. Philippe, W. Hamidouche, J. Fournier, and J. Y. Aubié, “AHG4: Subjective comparison of VVC and HEVC,” Joint Video Experts Team, doc. JVET-O0451, Gothenburg, SE, July 2019.

[11] N. Sidaty, W. Hamidouche, P. Philippe, J. Fournier, and O. Deforges, “Compression Performance of the Versatile Video Coding: HD and UHD Visual Quality Monitoring,” in Proc. IEEE Picture Coding Symposium, Ningbo, CN, Nov. 2019.

[12] Z. Li, “VMAF: The Journey Continues,” in Proc. Mile High Video Workshop, Denver, 2019, link: http://milehigh.video/files/mhv2019/pdf/day1/1_08_Li.pdf.

[13] S. Bosse, C. R. Helmrich, H. Schwarz, D. Marpe, and T. Wiegand, “Perceptually optimized QP adaptation and associated distortion measure,” doc. JVET-H0047, Macau, CN, Oct./Dec. 2017.

[14] C. R. Helmrich, H. Schwarz, D. Marpe, and T. Wiegand, “AHG10: Improved perceptually optimized QP adaptation and associated distortion measure,” doc. JVET-K0206, Ljubljana, SI, July 2018.

[15] C. R. Helmrich, H. Schwarz, D. Marpe, and T. Wiegand, “AHG10: Clean-up and finalization of perceptually optimized QP adaptation method in VTM,” doc. JVET-M0091, Marrakech, MA, Dec. 2018.

[16] S. Bosse, S. Becker, K.-R. Müller, W. Samek, and T. Wiegand, “Estimation of distortion sensitivity for visual quality prediction using a convolutional neural network,” Digital Sig. Process., vol. 91, pp. 54–65, 2019.

[17] J. Erfurt, C. R. Helmrich, S. Bosse, H. Schwarz, D. Marpe, and T. Wiegand, “A Study of the Perceptually Weighted Peak Signal-to-Noise Ratio (WPSNR) for Image Compression,” in Proc. IEEE Int. Conf. Image Process. (ICIP), Taipei, pp. 2339–2343, Sep. 2019.

[18] C. R. Helmrich, M. Siekmann, S. Becker, S. Bosse, D. Marpe, and T. Wiegand, “XPSNR: A Low-Complexity Extension of the Perceptually Weighted Peak Signal-to-Noise Ratio for High-Resolution Video Quality Assessment,” in Proc. IEEE Int. Conf. Acoustics, Speech, Sig. Process. (ICASSP), virtual/online, May 2020.

[19] C. R. Helmrich, S. Bosse, M. Siekmann, H. Schwarz, D. Marpe, and T. Wiegand, “Perceptually Optimized Bit Allocation and Associated Distortion Measure for Block-Based Image or Video Coding,” in Proc. IEEE Data Compression Conf. (DCC), Snowbird, UT, US, pp. 172–181, Mar. 2019.
© International Telecommunication Union, 2020