2024 (English) In: EAI Endorsed Transactions on Industrial Networks and Intelligent Systems, E-ISSN 2410-0218, Vol. 11, no 1, p. 1-22. Article in journal (Refereed) Published
Abstract [en]
Immersive media such as virtual reality, augmented reality, and 360° video have seen tremendous technological developments in recent years. Furthermore, advances in head-mounted displays (HMDs) offer users more immersive experiences than conventional displays. To develop novel immersive media systems and services that satisfy users’ expectations, it is essential to conduct subjective tests revealing users’ perceived quality of immersive media. However, due to the new viewing dimensions provided by HMDs and the potential of interacting with the content, a wide range of subjective tests is required to understand the many aspects of user behavior in, and quality perception of, immersive media. The ground truth obtained by such subjective tests enables the development of optimized immersive media systems that fulfill the expectations of the users. This article focuses on the consistency of 360° video quality assessment, revealing whether users’ subjective quality assessment of such immersive visual stimuli changes fundamentally over time or remains consistent for each user. A pilot study was conducted under pandemic conditions, with participants given the task of rating the quality of 360° video stimuli on an HMD in standing and seated viewing. The choice of conducting a pilot study is motivated by the high cognitive load that immersive media impose on participants and by the need to keep the number of participants as low as possible under pandemic conditions. To gain insight into the consistency of the participants’ 360° video assessment over time, three sessions were held for each participant and each viewing condition, with long and short breaks between sessions. In particular, the opinion scores and head movements were recorded for each participant and each session in standing and seated viewing.
The statistical analysis of these data leads to the conjecture that the quality ratings stay consistent across sessions, with each participant having their own quality assessment signature. The head movements, which indicate the participants’ scene exploration during the quality assessment task, also remain consistent for each participant, according to their individual narrower or wider scene exploration signature. These findings are more pronounced for standing viewing than for seated viewing. This work supports the role of pilot studies as a useful approach for conducting pre-tests on immersive media quality under opportunity-limited conditions and for planning subsequent full subjective tests with a large panel of participants. The annotated RQA360 dataset containing the data recorded in the repeated subjective tests is made publicly available to the research community. © 2024. M. Elwardy et al., licensed to EAI. This is an open access article distributed under the terms of CC BY-NC-SA 4.0, which permits copying, redistributing, remixing, transforming, and building upon the material in any medium so long as the original work is properly cited. doi:10.4108/eetinis.v11i1.4323.
Place, publisher, year, edition, pages
European Alliance for Innovation, 2024
Keywords
360° video, annotated dataset, opportunity-limited conditions, pilot study, quality assessment, quality of experience, seated viewing, standing viewing, subjective tests, behavioral research, head-mounted displays, quality control, quality of service, virtual reality, augmented reality
National Category
Telecommunications
Identifiers
urn:nbn:se:bth-26064 (URN), 10.4108/eetinis.v11i1.4323 (DOI), 2-s2.0-85187234575 (Scopus ID)
Funder
Knowledge Foundation, 20170056; Knowledge Foundation, 20220068
2024-03-27, 2024-03-27, 2024-11-20. Bibliographically approved