No. It isn’t about whether I personally liked these services or not. This is censorship. Even my mail service, Proton, is banned in Turkey. It’s impossible to use the internet without a VPN here; it’s a censorship state.
Banning a service without any guideline on what laws companies should abide by, or without offering a national alternative, is totally fucked. That said, I hope more countries start to limit what big companies can do within their territories.
Headsets in the thousand-dollar range are plenty good and still not selling. Take the hint. Push costs down. Cut out everything that is not strictly necessary. Less Switch, more Game Boy.
6DOF inside-out tracking is required, but you can get that from one camera and an orientation sensor. Is it easy? Nope. Is it tractable for any of the companies already making headsets? Yes, obviously. People want pick-up-and-go immersion. Lighthouses were infrastructure and Cardboard was not immersive. Proper tracking in 3D space has to Just Work.
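(For scale, the orientation-sensor half of that is commodity math. Here’s a minimal sketch of a complementary filter fusing a gyro with an accelerometer; the blend factor and the pitch/roll-only state are illustrative, and a real headset would use quaternion filters plus visual-inertial odometry for the positional part.)

```python
import numpy as np

# Minimal complementary filter: fuse fast-but-drifting gyro integration
# with slow-but-absolute accelerometer gravity readings.

ALPHA = 0.98  # trust the gyro 98%, let gravity slowly correct drift

def update_orientation(pitch, roll, gyro, accel, dt):
    """pitch/roll in radians, gyro in rad/s (x, y), accel in m/s^2 (x, y, z)."""
    # Integrate angular velocity (fast, but drifts over time).
    pitch_gyro = pitch + gyro[0] * dt
    roll_gyro = roll + gyro[1] * dt

    # Estimate absolute tilt from the gravity vector (noisy but driftless).
    pitch_accel = np.arctan2(accel[1], np.hypot(accel[0], accel[2]))
    roll_accel = np.arctan2(-accel[0], accel[2])

    # Blend: high-pass the gyro, low-pass the accelerometer.
    pitch = ALPHA * pitch_gyro + (1 - ALPHA) * pitch_accel
    roll = ALPHA * roll_gyro + (1 - ALPHA) * roll_accel
    return pitch, roll
```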
Latency is intolerable. Visual quality, scene detail, shader complexity - these are nice back-of-the-box boasts. Instant response time is do-or-die. Some monocular 640x480 toy with rock-solid 1ms latency would feel more real than any ultrawide 4K pancake monstrosity that’s struggling to maintain 10ms.
Two innovations could make this painless.
One, complex lenses are a hack around flat lighting. Get rid of the LCD backlight and use one LED. This simplifies the ray diagram to be nearly trivial. Only the point light source needs to be far from the eye. The panel and its single lens can be right in your face. Or - each lens can be segmented. The pyramid shape of a distant point source gets smaller, and everything gets thinner. At some point the collection of tiny projectors looks like a lightfield, which is what we should pursue anyway.
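Back-of-envelope similar triangles on that thinning, with a made-up 90° LED cone and a 100 mm panel:

```python
import math

# How deep must a point source sit behind its panel segment to
# illuminate it? Similar triangles: the source's light cone (apex
# angle THETA) must span the segment width at the panel plane.
# All numbers are assumed for illustration.

THETA = math.radians(90)   # assumed cone angle of one LED
PANEL_WIDTH_MM = 100.0     # assumed panel width per eye

def source_depth_mm(num_sources: int) -> float:
    segment = PANEL_WIDTH_MM / num_sources
    return (segment / 2) / math.tan(THETA / 2)

for n in (1, 4, 16):
    print(f"{n:2d} source(s): {source_depth_mm(n):5.1f} mm deep")
#  1 source(s):  50.0 mm deep
#  4 source(s):  12.5 mm deep
# 16 source(s):   3.1 mm deep  -> the stack thins linearly with segmentation
```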
Two, intermediate representation can guarantee high performance, even if the computer chokes. It is obviously trivial to throw a million colored dots at a screen. Dice up a finished frame into floating paint squares, and an absolute potato can still rotate, scale, and reproject that point-cloud, hundreds of times per second. But flat frames are meant for flat screens. Any movement at all reveals gaps behind everything. So: send point-cloud data, directly. Do “depth peeling.” Don’t do backface culling. Toss the headset a version of the scene that looks okay from anywhere inside a one-meter cube. If that takes longer for the computer to render and transmit… so what? The headset’s dinky chipset can show it more often than your godlike PC, because it’s just doing PS2-era rendering with microsecond-old head-tracking. The game could crash and you’d still be wandering through a frozen moment at 100, 200, 500 Hz.
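A hedged sketch of that headset-side loop: lift one rendered color+depth frame to points once, then re-splat it under whatever pose the tracker reports. Resolution, focal length, and the painter’s-order splat are all illustrative, not anyone’s shipping pipeline:

```python
import numpy as np

W, H, F = 640, 480, 500.0  # image size and focal length in pixels

def lift_to_points(depth):
    """depth: (H, W) metres -> (H*W, 3) camera-space points."""
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    x = (u - W / 2) * depth / F
    y = (v - H / 2) * depth / F
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

def reproject(points, colors, rotation, translation):
    """Splat points under a new head pose; painter's order via depth sort."""
    p = points @ rotation.T + translation
    z = p[:, 2]
    keep = z > 0.05                       # drop points behind the eye
    p, c, z = p[keep], colors[keep], z[keep]
    u = (p[:, 0] * F / z + W / 2).astype(int)
    v = (p[:, 1] * F / z + H / 2).astype(int)
    ok = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    u, v, c, z = u[ok], v[ok], c[ok], z[ok]
    order = np.argsort(-z)                # far-to-near: near points win
    frame = np.zeros((H, W, 3), dtype=np.uint8)
    frame[v[order], u[order]] = c[order]  # gaps appear where nothing lands
    return frame
```

Any pixel nothing lands on stays black, which is exactly the gaps-behind-everything problem above; hence sending more than one flat frame’s worth of points.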
I think the slightly more viable version of the rendering side is to send a combination of: a low-res point cloud; low-res, large-FOV frames for each eye with a detailed (!) depth map; a more detailed image streamed where the eye is focused (with movement prediction); and annotations on which features are static, which have changed, and where the light sources are. That would let the headset render the scene with low latency and continuously update the received frames based on movement, with minimal noticeable loss in detail, tracking things like shadows and handling parallax flawlessly even if the angle and position of the frame were a few degrees off.
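Sketching that as a hypothetical packet (every field name here is an assumption, not any real protocol):

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class FramePacket:
    sparse_points: np.ndarray   # (N, 6) xyz + rgb, low-res point cloud
    wide_fov_rgb: np.ndarray    # (2, H, W, 3) low-res wide frame per eye
    wide_fov_depth: np.ndarray  # (2, H, W) detailed depth map per eye
    fovea_rgb: np.ndarray       # (h, w, 3) hi-res crop at predicted gaze
    fovea_center: tuple         # (u, v) predicted gaze point
    static_mask: np.ndarray     # (H, W) bool: safe to reuse next frame
    lights: list = field(default_factory=list)  # positions + intensities
```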
Undoubtedly point-clouds can be beaten, and adding a single wide-FOV render is an efficient way to fill space “offscreen.” I’m just cautious about explaining this because it invites the most baffling rejections. At one point I tried explaining the separation of figuring out where stuff is, versus showing that location to you, using beads floating in a fluid simulation. Tracking the liquid and how things move within it is obviously full of computer-melting complexity. Rendering a dot, isn’t. And this brain case acted like I’d described simulating the entire ocean for free. As if the goal was plucking all future positions out of thin air, and not, y’know, remembering where it is, now.
The lowest-bullshit way is probably frustum slicing. Picture the camera surrounded by transparent spheres. Anything between two layers gets rendered onto the further one. This is more-or-less how “deep view video” works. (Worked?) Depth information can be used per-layer to create lumpen meshes or do parallax mapping. Whichever is cheaper at obscene framerates. Rendering with alpha is dirt cheap because it’s all sorted.
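A minimal sketch of the slicing itself; the shell radii are made up:

```python
import numpy as np

# Bucket camera-space points by distance into concentric shells and
# project each bucket onto its outer sphere. Compositing the result
# back-to-front with alpha is trivial because the layers are sorted.

RADII = np.geomspace(0.5, 50.0, num=8)  # shell boundaries in metres

def slice_into_layers(points):
    """points: (N, 3) camera-space -> list of per-shell point arrays."""
    dist = np.linalg.norm(points, axis=1)
    layer_idx = np.minimum(np.searchsorted(RADII, dist), len(RADII) - 1)
    layers = []
    for i in range(len(RADII)):
        sel = points[layer_idx == i]
        # Anything between shell i-1 and i gets pushed out to shell i.
        d = np.linalg.norm(sel, axis=1, keepdims=True)
        layers.append(sel / np.maximum(d, 1e-9) * RADII[i])
    return layers
```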
Point clouds (or even straight-up original geometry) might be better at nose-length distances. Separating moving parts is almost mandatory for anything attached to your hands. Using a wide-angle point render instead of doing a cube map is one of several hacks available since Fisheye Quake, and a great approach if you expect to replace things before the user can turn around.
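For reference, the wide-angle render is mostly a change of projection. A sketch of the equidistant fisheye mapping (constants illustrative):

```python
import numpy as np

# Equidistant ("fisheye") projection: direction -> image plane, covering
# e.g. 180 degrees in a single render instead of six cube-map faces.

FOV = np.radians(180.0)

def fisheye_project(dirs, image_size):
    """dirs: (N, 3) unit view directions -> (N, 2) pixel coordinates."""
    theta = np.arccos(np.clip(dirs[:, 2], -1, 1))   # angle from forward
    phi = np.arctan2(dirs[:, 1], dirs[:, 0])        # angle around axis
    r = theta / (FOV / 2)                           # radius: 1.0 at edge
    u = (r * np.cos(phi) * 0.5 + 0.5) * image_size
    v = (r * np.sin(phi) * 0.5 + 0.5) * image_size
    return np.stack([u, v], axis=-1)
```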
But I do have to push back on active fake focus. Lightfields are better. Especially if we’re distilling the scene to be renderable in a hot millisecond, there’s no reason to motorize the optics and try guessing where your pupils are headed. Passive systems can provide genuine focal depth.
My suggestions are mostly about maintaining quality while limiting bandwidth requirements to the headset. Wouldn’t a lightfield require a fair bit of bandwidth to keep updated?
(Another idea is to annotate moving objects with predicted trajectories.)
Less than you might think, considering the small range of perspectives involved. Rendering to a stack of layers or a grid of offsets technically counts. It is more information than simply transmitting a flat frame… but update rate isn’t do-or-die if the headset itself handles perspective.
Optimizing for bandwidth would probably look more like depth-peeled layers with very approximate depth values. Maybe rendering objects independently to lumpy reliefs. The illusion only has to work for a fraction of a second, from about where you’re standing.
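Back-of-envelope arithmetic on why that can still win, with every number assumed for illustration:

```python
# Flat stereo stream: two 2048x2048 eyes, 24-bit color, 90 Hz.
flat = 2 * 2048 * 2048 * 3 * 90           # raw bytes/second

# Depth-peeled alternative: 8 layers of 1024x1024 RGBA plus 8-bit coarse
# depth, refreshed at only 20 Hz because the headset reprojects between
# updates.
layered = 8 * 1024 * 1024 * (4 + 1) * 20  # raw bytes/second

print(f"flat:    {flat / 1e9:.2f} GB/s")     # flat:    2.26 GB/s
print(f"layered: {layered / 1e9:.2f} GB/s")  # layered: 0.84 GB/s
# The win isn't per-frame size (layers cost more per update); it's that
# the update rate drops from "every displayed frame" to "whenever the
# scene meaningfully changes".
```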
Alpha-blending is easy because, again, it is a set of sorted layers. The only real geometry is some crinkly concentric spheres. I wouldn’t necessarily hand-wave Silent Hill 2 levels of subtlety, with one static moment, but even uniform fog would get sliced up along with everything else.
Reflections are handled as cutouts with stuff behind them. That part is a natural consequence of their focus on lightfield photography, but it could be faked somewhat directly by rendering. Or you could transmit environment maps and blend between those. Just remember the idea is to be orders of magnitude more efficient than rendering everything normally.
I thought the Windows MR lineup filled that gap pretty well. It was much cheaper than most of the alternatives back then, but it never really took off, and MS has quietly dropped it.
Still $300 or $400 for a wonky platform. That’s priced better than I remembered, but the minimum viable product is far below that, and we might need a minimal product to improve adoption rates. The strictly necessary components could total tens of dollars… off the shelf.
One great benefit of the Stop Killing Games initiative is the spotlight being put on EULAs. We’ve all known about the wild shit EULAs have tried to dictate for years now, but now we have media actually reporting on it. Even if SKG goes nowhere, at least we’ve had a revival of this massive consumer issue.
I feel we’re gonna need to reach at least that 1.4M, with all the companies being against it and actively lobbying. I bet they’re gonna be extremely nitpicky with the signatures to invalidate as many as possible.
In this context, Tekken and other fighting games have competitive and content seasons, where over a year the winners of large international events earn places in a final, and new characters/stages are released.
After the final there is normally a very large update to the game, which comes with new game mechanics and large balance changes, plus the start of a content season pass, with enough time before the first tournament kicks off (Street Fighter is being weird this year, though).
For Tekken, that season patch dropped recently but was a massive letdown (fuck-up), and the community wasn’t happy with it at all.
Pretty much for as long as online games have gotten updates. DOTA kinda codified it with the Battle Pass system, but WoW battlegrounds/arena had seasons way before that. They’ll wait and do content/balance updates in chunks, and that affects the meta in waves defined as “seasons”.
It’s everywhere now. It can be weaponized FOMO or a clean way to provide regular novelty without being tied down to legacy content.
He is a leftist political Twitch streamer who supports Ukraine (just so we can get the fake tankie allegations out of the way), recently did an interview with Bernie and AOC, and is overall a solid character. Admittedly he has had some bad takes over the years, but he has apologized for or clarified them later.
Why does every game have to have neon, cute, annoying bullshit cosmetics now? It’s a fucking war game. Even PUBG caved in and became an obnoxious dress-up game. Yeah, really helps set the tone of an abandoned, grim wasteland.
I don’t have it, but I’ve seen admin/dev comments on this: with the 1.0 release they plan to add offline singleplayer. Currently online is required at character select/launch, but losing your internet connection during play won’t kick you out or interrupt it.
I’m keeping my eye on it and will probably pick it up when offline singleplayer is added, but that being included is the deciding factor in whether I’ll get it, since for now it’s just a promise, and promises can be broken.
Edit: apparently it’s already added, according to other replies.
Grim Dawn has one major issue IMO: it overuses passives and semi-passive abilities. When I play an action RPG, I want to actively push different buttons to do different things instead of optimizing my primary attack to do a bit of everything.
Funny thing: in your screenshot, it looks like the Play Offline button is greyed out. But it’s just a visual thing and you can totally click on it, right?
Yes, I’m pretty sure it’s just because if you hit enter on that screen, it defaults to the Play Online button.
Play Online is actually a relatively new feature, as far as I understand. Multiplayer only came around earlier this year; before then, your save was offline only.
Important to remind everyone that a LOT of your negative memories and feelings surrounding OW1 and 6v6 were due to the migraine magnets called 2CP. Literally stand in the choke for 9 years and see who uses every Q under the sun correctly first.