Virtual Reality and Light Field Immersive Video Technologies for Real-World Applications.

Bibliographic Details
Main Author: Lafruit, Gauthier
Corporate Author: ProQuest (Firm)
Other Authors: Teratani, Mehrdad
Format: Electronic eBook
Language: English
Published: London, United Kingdom : Institution of Engineering & Technology, 2022.
Series: Computing and Networks
Subjects:
Online Access: Connect to this title online (unlimited simultaneous users allowed; 365 uses per year)
Table of Contents:
  • 1. Immersive video introduction
  • References
  • 2. Virtual reality
  • 2.1. Introduction/history
  • 2.2. The challenge of three to six degrees of freedom
  • 2.3. The challenge of stereoscopic to holographic vision
  • References
  • 3. 3D gaming and VR
  • 3.1. OpenGL in VR
  • 3.2. 3D data representations
  • 3.2.1. Triangular meshes
  • 3.2.2. Subdivision surfaces and Bézier curves
  • 3.2.3. Textures and cubemaps
  • 3.3. OpenGL pipeline
  • References
  • 4. Camera and projection models
  • 4.1. Mathematical preliminaries
  • 4.2. The pinhole camera model
  • 4.3. Intrinsics of the pinhole camera
  • 4.4. Projection matrices
  • 4.4.1. Mathematical derivation of projection matrices
  • 4.4.2. Characteristics of the projection matrices
  • References
  • 5. Light equations
  • 5.1. Light contributions
  • 5.1.1. Emissive light source
  • 5.1.2. Ambient light
  • 5.1.3. Diffuse light
  • 5.1.4. Specular light
  • 5.2. Physically correct light models
  • 5.3. Light models for transparent materials
  • 5.4. Shadow rendering
  • 5.5. Mesh-based 3D rendering with light equations
  • 5.5.1. Gouraud shading
  • 5.5.2. Phong shading
  • 5.5.3. Bump mapping
  • 5.5.4. 3D file formats
  • References
  • 6. Kinematics
  • 6.1. Rigid body animations
  • 6.1.1. Rotations with Euler angles
  • 6.1.2. Rotations around an arbitrary axis
  • 6.1.3. ModelView transformation
  • 6.2. Quaternions
  • 6.2.1. Spherical linear interpolation
  • 6.3. Deformable body animations
  • 6.3.1. Keyframes and inverse kinematics
  • 6.3.2. Clothes animation
  • 6.3.3. Particle systems
  • 6.4. Collisions in the physics engine
  • 6.4.1. Collision of a triangle with a plane
  • 6.4.2. Collision between two spheres, only one moving
  • 6.4.3. Collision of two moving spheres
  • 6.4.4. Collision of a sphere with a plane
  • 6.4.5. Collision of a sphere with a cube
  • 6.4.6. Separating axes theorem and bounding boxes
  • References
  • 7. Raytracing
  • 7.1. Raytracing complexity
  • 7.2. Raytracing with analytical objects
  • 7.3. VR challenges
  • References
  • 8. 2D transforms for VR with natural content
  • 8.1. The affine transform
  • 8.2. The homography
  • 8.3. Homography estimation
  • 8.4. Feature points and RANSAC outliers for panoramic stitching
  • 8.5. Homography and affine transform revisited
  • 8.6. Pose estimation for AR
  • References
  • 9. 3DoF VR with natural content
  • 9.1. Stereoscopic viewing
  • 9.2. 360° panoramas
  • 9.2.1. 360° panoramas with planar reprojections
  • 9.2.2. Cylindrical and spherical 360° panoramas
  • 9.2.3. 360° panoramas with equirectangular projection images
  • References
  • 10. VR goggles
  • 10.1. Wide angle lens distortion
  • 10.1.1. Wide angle lens model
  • 10.1.2. Radial distortion model
  • 10.1.3. VR goggles pre-distortion
  • 10.2. Asynchronous high frame rate rendering
  • 10.3. Stereoscopic time warping
  • 10.4. Advanced HMD rendering
  • 10.4.1. Optical systems
  • 10.4.2. Eye accommodation
  • References
  • 11. 6DoF navigation
  • 11.1. 6DoF with point clouds
  • 11.2. Active depth sensing
  • 11.3. Time of flight
  • 11.3.1. Phase from a modulated light source
  • 11.3.2. Structured light
  • 11.3.3. Phase from interferometry
  • 11.4. Point cloud registration and densification
  • 11.4.1. Photogrammetry
  • 11.4.2. SLAM navigational applications
  • 11.5. 3D rendering of point clouds
  • 11.5.1. Poisson reconstruction
  • 11.5.2. Splatting
  • References
  • 12. Towards 6DoF with image-based rendering
  • 12.1. Introduction
  • 12.2. Finding relative camera positions
  • 12.2.1. Epipolar geometry
  • 12.2.2. Rotation and translation from the essential and fundamental matrices
  • 12.2.3. Epipolar line equation
  • 12.2.4. Extrinsics with checkerboard calibration
  • 12.2.5. Extrinsics with sparse bundle adjustment
  • 12.2.6. Depth estimation
  • 12.2.7. Stereo matching
  • 12.2.8. Depth quantization
  • 12.2.9. Stereo matching and cost volumes
  • 12.2.10. Occlusions
  • 12.2.11. Stereo matching with adaptive windows around depth discontinuities
  • 12.2.12. Stereo matching with priors
  • 12.2.13. Uniform texture regions
  • 12.2.14. Epipolar plane image with multiple images
  • 12.2.15. Plane sweeping
  • 12.3. Graph cut
  • 12.3.1. The binary graph cut
  • 12.4. MPEG reference depth estimation
  • 12.5. Depth estimation challenges
  • 12.6. 6DoF view synthesis with depth image-based rendering
  • 12.6.1. Morphing without depth
  • 12.6.2. Nyquist-Whittaker-Shannon and Petersen-Middleton in DIBR view synthesis
  • 12.6.3. Depth-based 2D pixel to 3D point reprojections
  • 12.6.4. Splatting and hole filling
  • 12.6.5. Super-pixels and hole filling
  • 12.6.6. Depth reliability in view synthesis
  • 12.6.7. MPEG-I view synthesis with estimated depth maps
  • 12.6.8. MPEG-I view synthesis with sensed depth maps
  • 12.6.9. Depth layered images (Google)
  • 12.7. Use case I: view synthesis in holographic stereograms
  • 12.8. Use case II: view synthesis in integral photography
  • 12.9. Difference between PCC and DIBR
  • References
  • 13. Multi-camera acquisition systems
  • 13.1. Stereo vision
  • 13.2. Multiview vision
  • 13.2.1. Geometry correction for camera array
  • 13.2.2. Colour correction for camera array
  • 13.3. Plenoptic imaging
  • 13.3.1. Processing tools for plenoptic camera
  • 13.3.2. Conversion from lenslet to multiview images for plenoptic camera 1.0
  • References
  • 14. 3D light field displays
  • 14.1. 3D TV
  • 14.2. Eye vision
  • 14.3. Surface light field system
  • 14.4. 1D-II 3D display system
  • 14.5. Integral photography
  • 14.6. Real-time free viewpoint television
  • 14.7. SMV256
  • 14.8. Light field video camera system
  • 14.9. Multipoint camera and microphone system
  • 14.10. Walk-through system
  • 14.11. Ray emergent imaging (REI)
  • 14.12. Holografika
  • 14.13. Light field 3D display
  • 14.14. Aktina Vision
  • 14.15. IP by 3D VIVANT
  • 14.16. Projection type IP
  • 14.17. Tensor display
  • 14.18. Multi-, plenoptic-, coded-aperture-, multi-focus-camera to tensor display system
  • 14.19. 360° light field display
  • 14.20. 360° mirror scan
  • 14.21. Seelinder
  • 14.22. Holo Table
  • 14.23. fVisiOn
  • 14.24. Use cases of virtual reality systems
  • 14.24.1. Public use cases
  • 14.24.2. Professional use cases
  • 14.24.3. Scientific use cases
  • References
  • 15. Visual media compression
  • 15.1. 3D video compression
  • 15.1.1. Image and video compression
  • 15.2. MPEG standardization and compression with 2D video codecs
  • 15.2.1. Cubemap video
  • 15.2.2. Multiview video and depth compression (3D-HEVC)
  • 15.2.3. Dense light field compression
  • 15.3. Future challenges in 2D video compression
  • 15.4. MPEG codecs for 3D immersion
  • 15.4.1. Point cloud coding with 2D video codecs
  • 15.4.2. MPEG immersive video compression
  • 15.4.3. Visual volumetric video coding
  • 15.4.4. Compression for light field displays
  • References
  • 16. Conclusion and future perspectives.