Short Introduction
The demonstrator shows how volumetric videos can be streamed efficiently over 5G networks. With a volumetric video streaming system, the rendering can be offloaded to a powerful cloud/edge server so that only the rendered 2D view is sent to the client instead of the full volumetric content.
Description
This demonstrator was developed in cooperation between the Multimedia Communications group of the Fraunhofer Heinrich Hertz Institute's (HHI) Video Communication and Applications department, HHI's Vision and Imaging Technologies department, Telekom, and Volucap GmbH. The goal was to find a more efficient way to render volumetric videos: immersive media content produced by capturing an object in three-dimensional space with multiple synchronized cameras, so that the object can later be viewed from different angles and distances. Volumetric videos are usually stored as point clouds or meshes. This requires storing geometric information in addition to texture, which results in a huge amount of data, so efficient compression is essential. HHI's Multimedia Communications group is developing tools for the compression, packaging, and multiplexing of mesh-based volumetric videos.
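To give a sense of the scale involved, the following back-of-the-envelope calculation (with purely illustrative, assumed numbers for mesh resolution, texture size, and frame rate, and ignoring mesh connectivity data) shows why uncompressed mesh-based volumetric video quickly becomes unmanageable:

# Back-of-the-envelope illustration (assumed, illustrative numbers) of why
# uncompressed mesh-based volumetric video is so large.
VERTICES_PER_FRAME = 30_000          # assumed mesh resolution per frame
BYTES_PER_VERTEX = 3 * 4 + 2 * 4     # xyz position + uv texture coordinate, 32-bit floats
TEXTURE_BYTES = 2048 * 2048 * 3      # one 2K RGB texture atlas per frame
FPS = 30

frame_bytes = VERTICES_PER_FRAME * BYTES_PER_VERTEX + TEXTURE_BYTES
bits_per_second = frame_bytes * FPS * 8

print(f"~{frame_bytes / 1e6:.1f} MB per frame, ~{bits_per_second / 1e9:.1f} Gbit/s uncompressed")

With these assumed figures, a single person already amounts to roughly 13 MB per frame and on the order of 3 Gbit/s of raw data, which is why compressed, mesh-based formats and cloud-side rendering are needed.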
The demonstrator shows how a cloud-based rendering system can be used to decode and render the volumetric video content in the cloud and transmit only the resulting 2D view to the user's device. This is very effective because the hardware in current mobile devices is inadequate for this task; it also makes it possible to display complex scenes on legacy devices.
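A minimal sketch of what such a cloud-side pipeline could look like is given below. The function names and placeholder bodies are assumptions made for illustration, not the demonstrator's actual implementation:

# Hypothetical server-side pipeline: decode the volumetric content in the
# cloud, render a 2D view for the current client, and encode that view as
# ordinary video. All names and bodies are illustrative placeholders.

def decode_volumetric_frame(bitstream: bytes) -> dict:
    """Decompress one mesh-based volumetric frame (geometry + texture)."""
    return {"geometry": bitstream[:8], "texture": bitstream[8:]}  # placeholder

def render_for_client(frame: dict, viewpoint) -> bytes:
    """Rasterize the textured mesh for exactly one viewpoint."""
    return b"raw 2D image"  # placeholder for GPU rendering

def encode_video_frame(image: bytes) -> bytes:
    """Compress the rendered 2D view with a standard video codec."""
    return image[:4]  # placeholder for e.g. H.264/HEVC encoding

def cloud_render(bitstream: bytes, viewpoint) -> bytes:
    # The heavy 3D data never leaves the server; only this small,
    # conventionally encoded 2D frame is streamed to the tablet.
    frame = decode_volumetric_frame(bitstream)
    image = render_for_client(frame, viewpoint)
    return encode_video_frame(image)

if __name__ == "__main__":
    payload = cloud_render(b"compressed-volumetric-frame", viewpoint=(0.0, 0.0, 2.0))
    print(f"{len(payload)} bytes leave the server for this frame")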
In the demonstration, the process in the cloud is visualized on a screen. The user can choose between three people (Josh, Sarah, and Franziska) who were previously recorded in a volumetric studio. A tablet serves as the user device; the stream is received from the cloud via an app. The selected person appears on the tablet's display and is inserted into the surrounding space. The user can then move around with the tablet while the person in the volumetric video maintains eye contact. This eye-contact rendering happens in real time: the tablet's position data is sent to the cloud, where the volumetric video stream is adjusted accordingly.
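A simplified sketch of this interaction loop from the tablet's point of view is shown below; the pose format, the function names, and the stand-in for the network round trip are assumptions made for illustration, not the actual protocol of the demonstrator:

# Hypothetical client-side loop: report the tablet's pose, receive only a
# conventionally encoded 2D video frame, and overlay it on the camera image.
from dataclasses import dataclass

@dataclass
class DevicePose:
    position: tuple      # (x, y, z) of the tablet in the room
    orientation: tuple   # (yaw, pitch, roll) of the tablet's camera

def request_frame_from_cloud(pose: DevicePose) -> bytes:
    """Stand-in for the network round trip: the cloud re-renders the
    volumetric person for this pose (e.g. to keep eye contact) and
    returns an encoded 2D frame."""
    return f"2D frame rendered for {pose}".encode()

def display_ar_overlay(frame: bytes) -> None:
    """Stand-in for decoding the frame and compositing it into the
    live camera view on the tablet."""
    print(f"displaying {len(frame)} bytes on the tablet")

def client_loop(tracked_poses) -> None:
    for pose in tracked_poses:          # poses come from the tablet's tracking
        frame = request_frame_from_cloud(pose)
        display_ar_overlay(frame)

# Example: the user walks around the virtual person.
client_loop([
    DevicePose(position=(0.0, 0.0, 2.0), orientation=(0.0, 0.0, 0.0)),
    DevicePose(position=(0.5, 0.0, 1.8), orientation=(-15.0, 0.0, 0.0)),
])

Because only the device pose goes upstream and only a standard 2D video stream comes back, the bandwidth and compute requirements on the tablet stay modest regardless of how complex the volumetric scene is.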
Use-Case
Volumetric streaming aims to make it possible to interact with volumetric videos and to make VR experiences more vivid, while at the same time reducing the data load on user devices. The work is connected to the VoluProf project, which is exploring new possibilities for digital teaching: cloud rendering technology would make it possible to create immersive teaching experiences for students. A sample lesson can also be tried out on the demonstrator.