I think a bigger factor is the memory and resources reserved for the account system / store / online services, etc. But also, yeah, the emulator might handle a few calls more efficiently.
But in essence, because they carry energy they must have momentum. That's why they can impart momentum to whatever they hit: momentum has to be conserved.
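(For a quick sketch of why, assuming we're talking about photons: it falls straight out of the relativistic energy–momentum relation once the mass term is zero.)

```latex
% Relativistic energy-momentum relation:
%   E^2 = (pc)^2 + (mc^2)^2
% For a massless photon (m = 0) this reduces to E = pc, so anything
% carrying energy E at the speed of light carries momentum p = E/c,
% which is what gets transferred when it is absorbed or reflected.
\[
  E^2 = (pc)^2 + (mc^2)^2, \qquad m = 0 \;\Rightarrow\; p = \frac{E}{c}.
\]
```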
They always start working on the next console as soon as the current one nears completion and release. It's just a question of how far along the plans are. If they already have an approximate release quarter, then the hardware is probably mostly finalized.
Not really, just let the game devs choose when to request that the console enforce stricter verification of accessories, and otherwise just allow whatever. Something like the sketch below.
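Roughly what I have in mind, as a hypothetical SDK call (every name here is invented for illustration, not any real console API):

```cpp
// Hypothetical console SDK surface (all names invented for illustration).
// The idea: accessory verification policy is opt-in per title, not enforced globally.

#include <cstdio>

enum class AccessoryPolicy {
    AllowAny,       // default: any accessory that speaks the protocol works
    RequireSigned,  // only accessories presenting a valid signature respond
};

// Stub standing in for whatever the platform would actually expose.
void SetAccessoryPolicy(AccessoryPolicy policy) {
    std::printf("accessory policy set to %s\n",
                policy == AccessoryPolicy::RequireSigned ? "RequireSigned"
                                                         : "AllowAny");
}

int main() {
    // A competitive title might opt in to strict verification...
    SetAccessoryPolicy(AccessoryPolicy::RequireSigned);
    // ...while a single-player title just leaves the default alone.
    SetAccessoryPolicy(AccessoryPolicy::AllowAny);
    return 0;
}
```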
My suggestions are mostly about maintaining quality while limiting the bandwidth required to the headset. Wouldn't a light field require a fair bit of bandwidth to keep updated?
(Another idea is to annotate moving objects with predicted trajectories.)
I think the slightly more viable version of the rendering side is to send a combination of: a low-res point cloud, low-res large-FOV frames for each eye with a detailed (!) depth map, a more detailed image streamed where the eye is focused (with movement prediction), plus annotations for which features are static, which have changed, and where the light sources are. That would let the headset render the scene with low latency and continuously update the received frames based on movement with minimal noticeable loss in detail, tracking things like shadows and handling parallax flawlessly even if the angle and position of the frame were a few degrees off.
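To make that concrete, here's a rough sketch of what one streamed packet might carry under that scheme (all field names and sizes are made up). The headset would reproject the wide-FOV layers using the depth map and point cloud, patch in the foveal detail, and use the static/moving annotations to keep extrapolating locally when a packet is late or the view has drifted slightly.

```cpp
// Sketch of a per-frame packet for the scheme above (all names/sizes invented).
#include <cstdint>
#include <vector>

// Where the eye tracker predicts the gaze will be when the frame arrives.
struct GazePrediction {
    float yaw, pitch;         // predicted gaze direction, radians
    float confidence;         // 0..1, lets the headset widen the fovea region if low
};

// Annotation for a moving object so the headset can extrapolate it locally.
struct TrajectoryHint {
    std::uint32_t object_id;
    float position[3];
    float velocity[3];        // predicted linear motion over the next few frames
};

// Per-eye image data.
struct EyeLayer {
    std::vector<std::uint8_t>  wide_fov_color;  // low-res, large-FOV color frame
    std::vector<std::uint16_t> wide_fov_depth;  // detailed depth map for reprojection
    std::vector<std::uint8_t>  fovea_color;     // high-res patch around the gaze point
};

struct FramePacket {
    std::uint64_t frame_id;
    GazePrediction gaze;
    std::vector<float> point_cloud;             // sparse low-res points (x, y, z triples)
    EyeLayer left, right;
    std::vector<std::uint32_t> static_tiles;    // tiles flagged as unchanged since last packet
    std::vector<float> light_positions;         // light sources, for local shadow updates
    std::vector<TrajectoryHint> moving_objects; // lets the headset keep animating between packets
};
```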