Natanael,

I think the slightly more viable version of the rendering side is to send a combination of: a low-res point cloud; low-res, large-FOV frames for each eye with a detailed (!) depth map; a more detailed image streamed where the eye is focused (with gaze-movement prediction); and annotations marking which features are static, which have changed, and where the light sources are. That would let the headset render the scene with low latency and continuously update the received frames based on movement with minimal noticeable loss in detail, tracking stuff like shadows and handling parallax flawlessly even if the angle and position of the frame were a few degrees off.
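To make that concrete, here's a minimal sketch in Rust of what the per-eye payload might look like. All type and field names (EyePacket, WideFovLayer, etc.) are hypothetical, just one way to organize the streams described above, not any existing protocol:

```rust
/// Low-res, wide-FOV color layer with a detailed depth map, so the
/// client can reproject it when the head pose drifts a few degrees.
struct WideFovLayer {
    color: Vec<u8>,   // encoded low-res color frame
    depth: Vec<u16>,  // per-pixel depth, higher precision than color
    fov_deg: f32,     // angular coverage of this layer
}

/// High-detail patch streamed at the predicted gaze point.
struct FovealPatch {
    color: Vec<u8>,
    gaze_dir: (f32, f32), // predicted gaze direction (yaw, pitch) in eye space
    radius_deg: f32,
}

/// Annotations that let the client keep the scene alive between frames:
/// which regions actually changed, and where the lights sit so shadows
/// can be re-derived after reprojection.
struct SceneAnnotations {
    changed_regions: Vec<[f32; 4]>, // screen-space boxes of dynamic content
    lights: Vec<[f32; 3]>,          // world-space light positions
}

struct EyePacket {
    point_cloud: Vec<[f32; 3]>, // coarse geometry for parallax fill-in
    wide: WideFovLayer,
    fovea: FovealPatch,
    annotations: SceneAnnotations,
    render_pose: ([f32; 3], [f32; 4]), // position + quaternion the server rendered from
}

fn main() {
    // On the headset: decode an EyePacket per eye, warp `wide` with its
    // depth map from `render_pose` toward the latest tracked pose,
    // composite `fovea` at the gaze point, and fill disocclusions from
    // `point_cloud`.
    let _example = EyePacket {
        point_cloud: Vec::new(),
        wide: WideFovLayer { color: Vec::new(), depth: Vec::new(), fov_deg: 110.0 },
        fovea: FovealPatch { color: Vec::new(), gaze_dir: (0.0, 0.0), radius_deg: 20.0 },
        annotations: SceneAnnotations { changed_regions: Vec::new(), lights: Vec::new() },
        render_pose: ([0.0; 3], [0.0, 0.0, 0.0, 1.0]),
    };
}
```

The depth map is doing the heavy lifting here: with depth plus the pose the server rendered from, the client can rewarp both color layers every display refresh, and the point cloud only has to cover the disoccluded gaps.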
