Natanael,

I think the slightly more viable version of the rendering side is to send a combination of: a low-res point cloud + low-res large-FOV frames for each eye with a detailed (!) depth map + a more detailed image streamed where the eye is focused (with movement prediction) + annotations on which features are static, which have changed, and where the light sources are. That would let the headset render the scene with low latency and continuously update the received frames based on head movement with minimal noticeable loss in detail, tracking things like shadows and handling parallax flawlessly even if the angle and position of the frame were a few degrees off.
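
Roughly what I mean, as a data-structure sketch: one per-eye packet carrying all of those layers. Every interface and field name here is my own illustration (there's no real protocol behind it), just to show what the headset would have to work with:

```typescript
// Hypothetical per-eye frame packet for the scheme above. Every name and
// field here is illustrative, not taken from any real streaming protocol.

/** Sparse geometry hint: low-res point cloud in world space. */
interface PointCloud {
  positions: Float32Array; // xyz triples
  colors: Uint8Array;      // coarse rgb per point
}

/** Low-res, large-FOV color frame with a detailed (!) depth map. */
interface WideFovLayer {
  width: number;
  height: number;
  fovDegrees: number;   // wider than the display needs, e.g. ~140
  color: Uint8Array;    // compressed low-res color (codec left abstract)
  depth: Float32Array;  // per-pixel depth, higher precision than the color
}

/** High-detail inset streamed where gaze is predicted to land. */
interface FoveatedInset {
  gazeUV: [number, number]; // predicted gaze point in frame coordinates
  radiusUV: number;         // angular radius the inset covers
  color: Uint8Array;        // full-detail pixels for that region only
}

/** Annotations telling the headset what it may reproject freely. */
interface SceneAnnotations {
  staticMask: Uint8Array;       // per-pixel: static vs. changed this frame
  lightPositions: Float32Array; // xyz per light, for re-deriving shadows
}

/** Everything one eye receives per server frame. */
interface EyeFramePacket {
  renderPose: Float32Array; // position + quaternion the frame was rendered from
  pointCloud: PointCloud;
  wideFov: WideFovLayer;
  inset: FoveatedInset;
  annotations: SceneAnnotations;
}
```

The point being: depth + static/dynamic masks + light positions should be enough for the headset to do a depth-aware reprojection locally from `renderPose` to the current head pose, so only the foveated inset needs full-bandwidth updates.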
