The aim of this thesis is to study and advance technology relating to remote
rendering of Virtual Reality (VR). In remote rendering, rendered content is
commonly streamed as video images in network packets from a server to a
client. Throughout this work, experiments are conducted with varying networks and
configurations, as well as with different technologies that enable or improve
remote VR experiences.
As an introduction to the field, the thesis begins with related studies on
360-video. Here, a statistic based on throughput alone is proposed for use in
lightweight performance monitoring of encrypted HTTPS 360-video streams.
The statistic indicates the likelihood of stalls in the video stream, which may
be of use to network providers wanting to allocate bandwidth optimally.
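The exact definition of the statistic is developed later in the thesis; as a rough illustration of what a throughput-only indicator can look like (the names, nominal bitrate, and thresholds below are assumptions for illustration, not the proposed statistic), one can estimate a client buffer level from the observed downlink bytes of the encrypted stream and flag intervals where a stall appears likely:

    # Illustrative sketch only, not the statistic proposed in the thesis.
    # Assumes a known nominal playback bitrate and per-interval byte counts
    # observable even for an encrypted HTTPS stream.
    def stall_risk(samples, nominal_bitrate_bps, startup_s=2.0):
        """samples: list of (timestamp_s, bytes_received) per measurement interval.
        Returns (timestamp_s, estimated_buffer_s, at_risk) per interval."""
        received_bits = 0.0
        playback_start = None
        out = []
        for t, nbytes in samples:
            received_bits += nbytes * 8
            buffered_s = received_bits / nominal_bitrate_bps  # media seconds downloaded
            if playback_start is None and buffered_s >= startup_s:
                playback_start = t  # playback assumed to begin after startup buffering
            played_s = max(0.0, t - playback_start) if playback_start is not None else 0.0
            est_buffer = buffered_s - played_s
            out.append((t, est_buffer, est_buffer < 1.0))  # <1 s of buffer: stall likely
        return out
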
Moving on from 360-video to real-time remote rendering, a wireless VR adapter,
TPCAST, is studied, and a method for monitoring the input- and video-throughput
of this device is proposed and implemented.
With the monitoring tool it is possible, for example, to identify video stalls
that occur in TPCAST and thus to determine a baseline of its robustness in terms
of video delivery.
Having determined the baseline, we move on to developing a prototype remote
rendering system for VR. The prototype has so far been used to study the bitrate
requirements of remote VR and to develop a novel method that reduces the image
size from a codec perspective by utilizing the Hidden Area Mesh (HAM) that is
unique to VR. By reducing the image size, codecs can run faster, saving time on
every frame and potentially reducing the latency of the system.
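As a minimal illustration of why the HAM matters from a codec perspective (the repacking of visible pixels into a smaller image is the thesis contribution and is not shown here), a HAM rasterized into a per-eye boolean mask can be used to measure the share of pixels a codec no longer needs and, as a naive baseline, to clear those pixels without changing the image dimensions:

    # Illustrative sketch only; assumes a precomputed boolean mask where
    # True marks pixels hidden by the HAM (never visible in the headset).
    import numpy as np

    def hidden_pixel_fraction(ham_mask: np.ndarray) -> float:
        """Fraction of the eye buffer covered by the HAM."""
        return float(ham_mask.mean())

    def clear_hidden_pixels(frame: np.ndarray, ham_mask: np.ndarray) -> np.ndarray:
        """Naive baseline: zero out hidden pixels so they compress cheaply,
        leaving the image dimensions handed to the codec unchanged."""
        out = frame.copy()
        out[ham_mask] = 0
        return out
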