Learning High-Quality Navigation and Zooming on Omnidirectional Images in Virtual Reality

  • Zidong Cao
    AI Thrust, HKUST(GZ)
  • Zhan Wang
    CMA Thrust, HKUST(GZ)
  • Yexin Liu
    AI Thrust, HKUST(GZ)
  • Yan-Pei Cao
    ARC Lab, Tencent PCG
  • Ying Shan
    ARC Lab, Tencent PCG
  • Wei Zeng
    CMA Thrust, HKUST(GZ)
    Dept. of CSE, HKUST
  • Lin Wang
    AI Thrust, HKUST(GZ)
    Dept. of CSE, HKUST

Abstract

Omnidirectional images (ODIs) give users the freedom to navigate immersive environments as if they were physically present. This sense of immersion, however, can be greatly compromised by blur that masks details in ODIs, hampering the user's ability to engage with objects of interest, detracting from the immersive experience, and causing discomfort. In this paper, we present a novel system, called OmniVR, designed to enhance visual clarity during VR navigation. Our system enables users to effortlessly locate and zoom in on objects of interest in VR, even when those objects are rendered unclearly due to their size or other factors. It captures user commands for navigation and zoom and converts these inputs into the parameters of a Möbius transformation matrix. Leveraging these parameters, a learning-based algorithm refines the visual fidelity of the ODI. The resulting ODI is presented within the VR environment, effectively reducing blur and increasing user engagement. To verify the effectiveness of our system, we first compare our algorithm with state-of-the-art methods on public datasets, where it achieves the best performance. Furthermore, we conduct a comprehensive user study to evaluate viewer experiences across diverse scenarios using several metrics and to gather qualitative feedback from multiple perspectives. The outcomes of this study reveal that our system significantly enhances user engagement by 1) improving viewers' recognition and comprehension of scenarios, 2) reducing discomfort, such as mental and physical costs, and 3) improving the overall immersive experience, thereby making navigation and zoom more user-friendly.
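To give a flavor of the Möbius-based zoom described above, the sketch below shows how a zoom command can be expressed as a Möbius transformation acting on spherical coordinates via stereographic projection. This is only an illustrative assumption of the general idea, not the paper's actual implementation: the function name, the zoom-only parameter choice (a = zoom, b = c = 0, d = 1), and the choice of projection pole are all hypothetical.

```python
import numpy as np

def mobius_zoom_coords(lon, lat, zoom):
    """Warp spherical coordinates (lon, lat) under a zoom-only Mobius transform.

    Stereographic projection maps the sphere to the complex plane, where a
    Mobius transformation f(z) = (a*z + b) / (c*z + d) is applied; a pure
    zoom uses a = zoom, b = c = 0, d = 1 (an illustrative choice).
    """
    # Stereographic projection from the pole: radius depends only on latitude.
    r = np.tan((np.pi / 2 - lat) / 2.0)
    z = r * np.exp(1j * lon)

    # Apply the Mobius transformation (here a pure complex scaling).
    a, b, c, d = zoom, 0.0, 0.0, 1.0
    w = (a * z + b) / (c * z + d)

    # Inverse stereographic projection back to the sphere.
    lat_out = np.pi / 2 - 2.0 * np.arctan(np.abs(w))
    lon_out = np.angle(w)
    return lon_out, lat_out
```

Applied to a dense grid of output pixel coordinates, such a warp yields the sampling locations for rendering the zoomed ODI; with zoom = 1 the transform is the identity, and a pure zoom leaves longitudes unchanged while moving content radially with respect to the projection pole.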


BibTeX

@article{cao2023omnivr,
  title={Learning High-Quality Navigation and Zooming on Omnidirectional Images in Virtual Reality},
  author={Cao, Zidong and Wang, Zhan and Liu, Yexin and Cao, Yan-Pei and Shan, Ying and Zeng, Wei and Wang, Lin},
  journal={arXiv preprint},
  year={2023}
}