
Multimedia research and standardization


The latest multimedia technology innovation from Nokia

Our portfolio of innovations continues to grow thanks to our ongoing investment in multimedia R&D and our internationally acclaimed team of experts. The work of our inventors in video research and standardization has been recognized with numerous prestigious awards, including five Technology & Engineering Emmy® Awards. 

Learned video/image compression

Getting the full picture of lossless image codecs

Image codec performance can be enhanced through domain adaptation, but the adaptation overhead can erode the gain. This is where an adaptive multi-scale progressive probability model delivers: effective domain adaptation without the significant overhead. See how this technique can reduce the bitstream size of lossless image codecs by up to 4.8%.

Want to enhance your lossless image compression? Read the whitepaper from Honglei Zhang, Francesco Cricri, Nannan Zou, Hamed R. Tavakoli and Miska M. Hannuksela.
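The trade-off between adaptation gain and overhead can be made concrete with a toy calculation. The sketch below is illustrative only (it is not the paper's multi-scale model): it estimates the ideal arithmetic-coding cost of a symbol stream under a generic model versus a model adapted to the content's own statistics, which shows the room domain adaptation has to shrink the bitstream.

```python
import math
from collections import Counter

def ideal_code_length_bits(symbols, model):
    """Sum of -log2(p) over the symbols: the bit cost an arithmetic
    coder approaches when driven by probability model `model`."""
    return sum(-math.log2(model[s]) for s in symbols)

def empirical_model(symbols):
    """Probabilities estimated from the data itself, i.e. a model
    perfectly adapted to this content's statistics."""
    counts = Counter(symbols)
    total = len(symbols)
    return {s: c / total for s, c in counts.items()}

# Toy "image": residuals concentrated near zero, as in natural images.
pixels = [0] * 70 + [1] * 20 + [2] * 10

generic = {0: 1/3, 1: 1/3, 2: 1/3}   # uniform, domain-agnostic model
adapted = empirical_model(pixels)     # domain-adapted model

bits_generic = ideal_code_length_bits(pixels, generic)
bits_adapted = ideal_code_length_bits(pixels, adapted)
# The adapted model never needs more bits (Gibbs' inequality); the gap
# is the saving that adaptation can deliver -- provided the cost of
# signalling the adaptation stays below it.
```

The real challenge, which the whitepaper addresses, is achieving this adaptation without spending more side-information bits than the model saves.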

New AI frontiers for image compression

For the last 30 years, image and video compression algorithms have been designed by engineers, but change may be afoot. With artificial intelligence set to step up the game, overfitting the model at inference time may be the key to improving the efficiency of learning-based codecs. Learn why Nokia is exploring the potential of modified neural networks to streamline the compression process.

Discover more from the article by Honglei Zhang, Francesco Cricri, Hamed R. Tavakoli, Maria Santamaria, Yat-Hong Lam, and Miska M. Hannuksela.
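To make "overfitting at inference time" concrete, here is a deliberately tiny sketch, not a real neural codec: a one-parameter linear "decoder" whose parameter is fine-tuned on the single piece of content being coded. In an actual learning-based codec the update would be applied to neural-network weights and signalled to the decoder as side information.

```python
# Minimal sketch of inference-time overfitting (illustrative only; real
# learned codecs fine-tune neural networks, not this toy linear model).
def decode(latents, gain):
    """Toy decoder: one learnable parameter `gain` scales the latents."""
    return [gain * z for z in latents]

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def overfit_gain(latents, target, gain=1.0, lr=0.05, steps=200):
    """Gradient descent on the one content at hand; the encoder would
    then signal the updated `gain` to the decoder."""
    for _ in range(steps):
        grad = sum(2 * z * (gain * z - t)
                   for z, t in zip(latents, target)) / len(latents)
        gain -= lr * grad
    return gain

latents = [1.0, 2.0, 3.0]
target = [1.5, 3.0, 4.5]       # this content is best decoded with gain 1.5
tuned = overfit_gain(latents, target)
# decode(latents, tuned) now fits this content far better than the
# generic gain of 1.0 ever could.
```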

Versatile Video Coding (VVC)

A new low latency feature for Versatile Video Coding

Everything from video conferencing to computer vision depends on keeping latency low. We have developed Gradual Decoding Refresh (GDR), a new feature that builds on Versatile Video Coding (VVC). Learn how GDR alleviates the delay caused by intra-coded pictures, bringing their latency on par with inter-coded pictures, while maximizing coding efficiency and preventing prediction leaks across the refreshed region.

Dive deeper into the topic with Limin Wang, Seungwook Hong and Krit Panusopone.
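The bit-spike problem GDR addresses can be illustrated with a toy rate model (the numbers below are hypothetical): a periodic full intra picture concentrates its intra cost in a single frame, while gradual refresh spreads the same intra budget evenly over a refresh period, flattening the peak that would otherwise stall a fixed-rate channel.

```python
def frame_costs_idr(n_frames, period, intra_cost=10.0, inter_cost=1.0):
    """Periodic full intra pictures: one latency-hurting bit spike
    at the start of every refresh period."""
    return [intra_cost if i % period == 0 else inter_cost
            for i in range(n_frames)]

def frame_costs_gdr(n_frames, period, intra_cost=10.0, inter_cost=1.0):
    """Gradual refresh: each picture intra-codes 1/period of its area,
    so the intra cost is amortized and every frame costs roughly the same."""
    per_frame = intra_cost / period + inter_cost * (period - 1) / period
    return [per_frame for _ in range(n_frames)]

idr = frame_costs_idr(16, 8)
gdr = frame_costs_gdr(16, 8)
# Same total intra budget either way, but GDR's worst-case frame is far
# cheaper -- which is exactly what keeps end-to-end delay low.
```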


Neural network based video post-processing, this time with content adaptation

Decoded video typically suffers from coding artefacts. Post-processing, for example with neural-network-based filters, can alleviate them, and filtering improves further when the neural network is adapted to the video content. This adaptation, however, carries a bitrate overhead. In our paper, we show how content adaptation can be performed efficiently with the aid of the MPEG NNR standard for compressing the adaptation signal.

Ready to learn more? Read the article by Maria Santamaria, Francesco Cricri, Jani Lainema, Ramin G. Youvalari, Honglei Zhang and Miska M. Hannuksela.
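Conceptually, the adaptation overhead comes from signalling a weight update, and NNR-style coding shrinks it with tools such as pruning and quantization. The sketch below mimics that idea at toy scale; it is not the NNR algorithm, and the thresholds and step sizes are invented for illustration.

```python
def compress_update(deltas, threshold=0.05, step=0.1):
    """Toy weight-update compression: prune tiny deltas, then uniformly
    quantize the survivors (illustrative, not the NNR standard)."""
    return [(i, round(d / step) * step) for i, d in enumerate(deltas)
            if abs(d) >= threshold]

def apply_update(weights, compressed):
    """Decoder side: apply the sparse, quantized deltas to the base filter."""
    adapted = list(weights)
    for i, d in compressed:
        adapted[i] += d
    return adapted

base = [0.2, -0.5, 0.1, 0.7]            # weights of the generic filter
update = [0.3, -0.01, 0.12, 0.002]      # deltas from content overfitting
small = compress_update(update)          # only significant deltas survive
adapted = apply_update(base, small)
# The bitstream now carries 2 sparse deltas instead of 4 dense floats,
# trading a little adaptation fidelity for a much smaller overhead.
```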

Content-adaptive neural network post-processing filter

Video/image coding for machines

Machine oriented image compression: a content-adaptive approach

An increasing share of videos and images is analyzed by computer algorithms rather than watched by humans. Our research considers how image coding can adapt to non-human eyes, with implications for smart cities, factory robotics, security and much more. Discover how an inference-time content-adaptive approach can improve compression efficiency for machine consumption without modifying codec parameters.

Want to learn more? Read the article by Nam Le, Honglei Zhang, Francesco Cricri, Ramin Ghaznavi-Youvalari, Hamed R. Tavakoli and Esa Rahtu.
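One way to adapt content at inference time without touching the codec is to smooth regions a machine task does not attend to, so that an unmodified standard codec naturally spends fewer bits there. The sketch below shows that general idea on a single row of pixels; it is a simplification for illustration, not the method in the article.

```python
def smooth_unimportant(row, important, radius=1):
    """Box-filter pixels outside machine-relevant regions. Smoother
    areas have less residual energy and so cost fewer bits in any
    standard codec -- with no codec modification at all."""
    out = []
    for i, (px, keep) in enumerate(zip(row, important)):
        if keep:
            out.append(px)                      # task-relevant: untouched
        else:
            lo, hi = max(0, i - radius), min(len(row), i + radius + 1)
            out.append(sum(row[lo:hi]) / (hi - lo))  # local average
    return out

row = [10, 200, 12, 11, 180, 9]
mask = [False, True, False, False, True, False]  # detector cares about the bright pixels
smoothed = smooth_unimportant(row, mask)
# Task-relevant pixels survive exactly; the rest get cheaper to code.
```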

Machines are watching

Visual volumetric coding

Breaking the barriers of immersive content with volumetric video

Virtual, augmented and mixed reality applications are on the rise, and volumetric video is the fundamental technology for exploring real-world captured immersive content. Learn how the family of Visual Volumetric Video-based Coding (V3C) standards efficiently codes, stores and transports volumetric video content with six degrees of freedom.

Curious to know more? Read the article by Lauri Ilola, Lukasz Kondrad, Sebastian Schwarz and Ahmed Hamza.
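At the heart of V3C is projecting 3D geometry onto 2D images so that mature video codecs can do the heavy lifting. The miniature sketch below shows only that projection step, keeping the nearest depth per pixel; real V3C patch generation first segments the point cloud by surface normal and packs the resulting patches into an atlas.

```python
def project_to_depth_map(points, width, height):
    """Orthographically project 3D points onto a 2D grid, keeping the
    nearest depth per pixel. The resulting depth map can be fed to an
    ordinary 2D video codec (sketch of the V3C idea, not the standard)."""
    depth = [[None] * width for _ in range(height)]
    for x, y, z in points:
        if 0 <= x < width and 0 <= y < height:
            if depth[y][x] is None or z < depth[y][x]:
                depth[y][x] = z          # nearest surface wins
    return depth

cloud = [(0, 0, 5.0), (0, 0, 2.0), (1, 1, 3.0)]  # two points share a pixel
dm = project_to_depth_map(cloud, 2, 2)
# dm[0][0] holds the closer of the two depths; empty pixels stay None
# and would be filled or flagged by an occupancy map in a real codec.
```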


Real-time decoding goes mobile with point cloud compression

From education to entertainment, capturing the real world as multi-dimensional immersive experiences presents a multitude of opportunities, alongside data-heavy complications. The release of the MPEG standard for video-based point cloud compression (V-PCC) for mobile is an immersive-media gamechanger. Discover how V-PCC content can now be distributed, stored and decoded in real time on virtually every media device on the market.

Find out more in this article by Sebastian Schwarz and Mika Pesonen.

Real-time decoding

Navigating realities in three dimensions with Point Cloud Compression

Point clouds are integral to immersive digital representations, enabling quick 3D assessments for navigating autonomous vehicles, robotic sensing and other use cases. This level of innovation requires massive amounts of data – and that’s where Point Cloud Compression (PCC) comes in. See how PCC lightens point cloud transmission for current and next-generation networks.

Discover more in the article by Sebastian Schwarz, Marius Preda, Vittorio Baroncini, Madhukar Budagavi, Pablo Cesar, Philip A. Chou, Robert A. Cohen, Maja Krivokuća, Sébastien Lasserre, Zhu Li, Joan Llach, Khaled Mammou, Rufael Mekuria, Ohji Nakagami, Ernestasia Siahaan, Ali Tabatai, Alexis M. Tourapis, and Vladyslav Zakharchenko.

Emerging MPEG standards for Point Cloud Compression