I can’t see how Unity increasing prices is anti-competitive. If anything, them going brainfuck with fees can only serve to open the market to new players. Also, I doubt they’ll be able to make the fees retroactive, at least for games that won’t get updated.
Headsets in the thousand-dollar range are plenty good and still not selling. Take the hint. Push costs down. Cut out everything that is not strictly necessary. Less Switch, more Game Boy.
6DOF inside-out tracking is required, but you can get that from one camera and an orientation sensor. Is it easy? Nope. Is it tractable for any of the companies already making headsets? Yes, obviously. People want pick-up-and-go immersion. Lighthouses were infrastructure and Cardboard was not immersive. Proper tracking in 3D space has to Just Work.
Latency is intolerable. Visual quality, scene detail, shader complexity - these are nice back-of-the-box boasts. Instant response time is do-or-die. Some monocular 640x480 toy with rock-solid 1ms latency would feel more real than any ultrawide 4K pancake monstrosity that’s struggling to maintain 10ms.
Two innovations could make this painless.
One, complex lenses are a hack around flat lighting. Get rid of the LCD backlight and use one LED. This simplifies the ray diagram to be nearly trivial. Only the point light source needs to be far from the eye. The panel and its single lens can be right in your face. Or - each lens can be segmented. The pyramid shape of a distant point source gets smaller, and everything gets thinner. At some point the collection of tiny projectors looks like a lightfield, which is what we should pursue anyway.
Two, intermediate representation can guarantee high performance, even if the computer chokes. It is obviously trivial to throw a million colored dots at a screen. Dice up a finished frame into floating paint squares, and an absolute potato can still rotate, scale, and reproject that point-cloud, hundreds of times per second. But flat frames are meant for flat screens. Any movement at all reveals gaps behind everything. So: send point-cloud data, directly. Do “depth peeling.” Don’t do backface culling. Toss the headset a version of the scene that looks okay from anywhere inside a one-meter cube. If that takes longer for the computer to render and transmit… so what? The headset’s dinky chipset can show it more often than your godlike PC, because it’s just doing PS2-era rendering with microsecond-old head-tracking. The game could crash and you’d still be wandering through a frozen moment at 100, 200, 500 Hz.
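To make the "PS2-era rendering with microsecond-old head-tracking" part concrete, here's a rough sketch of the reprojection step in Python/numpy. The function names and the pinhole-camera setup are mine for illustration, not anybody's SDK:

```python
# Minimal sketch: reproject a received RGB-D frame as a point cloud under a
# new head pose. Everything here is illustrative -- no particular SDK implied.
import numpy as np

def unproject(depth, rgb, fx, fy, cx, cy):
    """Turn a depth map + color frame into a cloud of colored 3D points
    in the camera space the frame was rendered from."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z, np.ones_like(z)], axis=-1).reshape(-1, 4)
    colors = rgb.reshape(-1, 3)
    return points, colors

def reproject(points, colors, relative_pose, fx, fy, cx, cy, out_shape):
    """Splat the cloud into a new view. relative_pose is the 4x4 transform
    from the frame's original camera space to the current head pose --
    the only input that has to be microsecond-fresh."""
    h, w = out_shape
    cam = points @ relative_pose.T
    z = cam[:, 2]
    keep = z > 1e-3                      # drop points behind the eye
    u = (cam[keep, 0] * fx / z[keep] + cx).astype(int)
    v = (cam[keep, 1] * fy / z[keep] + cy).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    out = np.zeros((h, w, 3), dtype=np.uint8)
    # Naive splat, no z-buffer: enough to show the idea -- and the gaps.
    out[v[inside], u[inside]] = colors[keep][inside]
    return out
```

The transform multiply and the splat are the whole per-display-frame workload, which is why even a weak mobile chipset can run it far above the PC's render rate.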
I think the slightly more viable version of the rendering side is to send a combination of: a low-res point cloud; low-res, large-FOV frames for each eye with a detailed (!) depth map; a more detailed image streamed where the eye is focused (with movement prediction); and annotations on which features are static, which have changed, and where the light sources are. That lets the headset render the scene with low latency and continuously update the received frames based on movement with minimal noticeable loss in detail, tracking things like shadows and handling parallax flawlessly even if the angle and position of the frame were a few degrees off.
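A rough sketch of what one per-frame payload along those lines might contain. All the field names, shapes, and groupings here are made up just to make the idea concrete:

```python
# Hypothetical per-frame payload for the scheme above -- field names, types,
# and resolutions are invented for illustration.
from dataclasses import dataclass
import numpy as np

@dataclass
class EyeLayer:
    rgb: np.ndarray        # low-res, wide-FOV color frame for one eye
    depth: np.ndarray      # detailed per-pixel depth for that frame
    pose: np.ndarray       # 4x4 camera pose the frame was rendered from

@dataclass
class FramePayload:
    sparse_points: np.ndarray   # Nx7 low-res point cloud: xyz + rgb + radius
    eyes: tuple                 # (left EyeLayer, right EyeLayer)
    fovea_rgb: np.ndarray       # high-detail patch where the gaze is predicted
    fovea_center: tuple         # predicted gaze direction for that patch
    static_mask: np.ndarray     # which regions are static vs. changed
    lights: list                # light source positions/colors, so the headset
                                # can keep shadows roughly honest between frames
```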
Undoubtedly point-clouds can be beaten, and adding a single wide-FOV render is an efficient way to fill space “offscreen.” I’m just cautious about explaining this because it invites the most baffling rejections. At one point I tried explaining the separation of figuring out where stuff is, versus showing that location to you, using beads floating in a fluid simulation. Tracking the liquid and how things move within it is obviously full of computer-melting complexity. Rendering a dot, isn’t. And this brain case acted like I’d described simulating the entire ocean for free. As if the goal was plucking all future positions out of thin air, and not, y’know, remembering where it is, now.
The lowest-bullshit way is probably frustum slicing. Picture the camera surrounded by transparent spheres. Anything between two layers gets rendered onto the further one. This is more-or-less how “deep view video” works. (Worked?) Depth information can be used per-layer to create lumpen meshes or do parallax mapping. Whichever is cheaper at obscene framerates. Rendering with alpha is dirt cheap because it’s all sorted.
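Roughly what the slicing looks like on the render side, assuming you already have per-pixel depth. The layer count and boundaries are numbers I made up:

```python
# Sketch of the frustum-slicing idea: bin a rendered RGB-D frame into
# concentric depth shells, each stored with alpha so the headset can
# composite them back-to-front. Boundaries here are arbitrary.
import numpy as np

def slice_into_shells(rgb, depth, boundaries=(0.5, 1.5, 4.0, 12.0, np.inf)):
    """Return a list of (far_boundary, RGBA layer) pairs, ordered near to far.
    Each layer owns the pixels whose depth falls between its boundaries."""
    shells = []
    near = 0.0
    for far in boundaries:
        mask = (depth > near) & (depth <= far)
        layer = np.zeros((*rgb.shape[:2], 4), dtype=np.uint8)
        layer[..., :3][mask] = rgb[mask]
        layer[..., 3][mask] = 255   # opaque where this shell owns pixels
        shells.append((far, layer))
        near = far
    # Depth-sorted by construction, so the headset can walk the list in
    # reverse and alpha-blend back-to-front with no per-pixel sorting.
    return shells
```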
Point clouds (or even straight-up original geometry) might be better at nose-length distances. Separating moving parts is almost mandatory for anything attached to your hands. Using a wide-angle point render instead of doing a cube map is one of several hacks available since Fisheye Quake, and a great approach if you expect to replace things before the user can turn around.
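For reference, the wide-angle point render is really just a different projection function. Something like this equidistant fisheye mapping, with the axis convention and FOV picked arbitrarily:

```python
# Fisheye-Quake-style hack: project a 3D point through an equidistant
# fisheye mapping instead of rendering six cube-map faces. Plain math,
# nothing engine-specific; -z is forward, FOV is arbitrary.
import numpy as np

def fisheye_project(p, fov_deg=210.0, resolution=1024):
    """Map a camera-space point to pixel coords in one wide-angle image."""
    d = p / np.linalg.norm(p)
    theta = np.arccos(-d[2])                # angle away from the view axis
    phi = np.arctan2(d[1], d[0])            # direction around the axis
    r = theta / np.radians(fov_deg / 2.0)   # equidistant: radius ~ angle
    u = 0.5 + 0.5 * r * np.cos(phi)
    v = 0.5 + 0.5 * r * np.sin(phi)
    return u * resolution, v * resolution   # outside [0, res) means out of FOV
```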
But I do have to push back on active fake focus. Lightfields are better. Especially if we’re distilling the scene to be renderable in a hot millisecond, there’s no reason to motorize the optics and try guessing where your pupils are headed. Passive systems can provide genuine focal depth.
My suggestions are mostly about maintaining quality while limiting bandwidth requirements to the headset. Wouldn’t a lightfield require a fair bit of bandwidth to keep updated?
(Another idea is to annotate moving objects with predicted trajectories.)
Less than you might think, considering the small range of perspectives involved. Rendering to a stack of layers or a grid of offsets technically counts. It is more information than simply transmitting a flat frame… but update rate isn’t do-or-die, if the headset itself handles perspective.
Optimizing for bandwidth would probably look more like depth-peeled layers with very approximate depth values. Maybe rendering objects independently to lumpy reliefs. The illusion only has to work for a fraction of a second, from about where you’re standing.
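Back-of-envelope, with numbers I'm pulling out of the air purely to show the shape of the tradeoff:

```python
# Made-up numbers for "less than you might think": a small stack of
# depth-peeled layers refreshed at a modest rate, versus a flat frame
# pushed at display rate because the headset can't reproject anything.
layers          = 4            # depth-peeled shells per eye
w, h            = 1280, 1280   # per-layer resolution
bytes_per_px    = 4 + 1        # RGBA + coarse 8-bit depth
scene_rate_hz   = 30           # how often the PC refreshes the layer stack
display_rate_hz = 500          # how often the headset reprojects locally

layer_stack = 2 * layers * w * h * bytes_per_px * scene_rate_hz
flat_frames = 2 * w * h * 4 * display_rate_hz

print(f"layer stack : {layer_stack / 1e9:.1f} GB/s before compression")
print(f"flat frames : {flat_frames / 1e9:.1f} GB/s before compression")
```

Compression and only sending changed regions would shrink the first number much further; the point is that the expensive refresh runs at scene rate, not display rate.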
Alpha-blending is easy because, again, it is a set of sorted layers. The only real geometry is some crinkly concentric spheres. I wouldn’t necessarily hand-wave Silent Hill 2 levels of subtlety, with one static moment, but even uniform fog would be sliced up along with everything else.
Reflections are handled as cutouts with stuff behind them. That part is a natural consequence of their focus on lightfield photography, but it could be faked somewhat directly by rendering. Or you could transmit environment maps and blend between those. Just remember the idea is to be orders of magnitude more efficient than rendering everything normally.
I thought the Windows MR lineup filled that gap pretty well. Much cheaper than most of the other alternatives back then, but it never really took off and MS has quietly dropped it.
Still $300 or $400 for a wonky platform. That’s priced better than I thought they were, but the minimum viable product is far below that, and we might need a minimal product to improve adoption rates. The strictly necessary components could total tens of dollars… off the shelf.
It was bound to happen, given how the timeline advancement worked in MW5:Mercs. The story covered the Third and Fourth Succession Wars (2866-3025 and 3028-3030), the typical starting point for Battletech before the lore, politics, and tech get too complicated. The Clan Invasion (3049-3052) is the most iconic part of the timeline, I think.
I live to talk about Battletech, so hmu if you’ve got questions!
Sarna.net is the very good wiki for the BT universe.
They are indeed, to a degree, though it basically never comes up. There is an illustration of a Pleasure Circus in the A Time of War RPG companion book with a catgirl. I screenshotted my copy of the PDF here.
The mods are described on page 53, “functional tail and mobile ears”.
3132: Devlin Stone retires, then still-unknown forces break the HPG network across the Sphere, and things fall apart. This is where the old Dark Age books and clicky-tech minis come in, when WizKids took over the license from FASA and did a time jump. Since then, Catalyst has taken over and is still filling in the gaps.
War starts again. The Draconis Combine invades FedSuns. Wolves and Jade Falcons attack Tharkad. Alaric Ward becomes Khan of Wolf. And somehow, Devlin Stone Returned.
The Republic of the Sphere has a still-unexplained bullshit technology called The Wall that blocks jump ships from entering their space.
3151: The Wolves under Alaric Ward and the Falcons under Malvina Hazen figure out how to get past The Wall and race towards Terra to defeat the remnants of the Republic of the Sphere. An ilClan is declared, fulfilling the goal of the original Clan Invasion.
That’s where we are.
Your main sourcebooks are Era Report: 3145, then Shattered Fortress, and lastly IlClan.
This article has a pretty negative slant, but is anyone actually sad they’re going? I’ve been playing on and off since 2014, and I never have enough shards. My friends that have played almost nonstop since launch constantly have more than they could ever spend, even if they masterworked every piece of gear in their collection. Seems like a good change, honestly.
It’s definitely a good change. I have so many legendary shards that the currency might as well not exist, but my newer friends are constantly running out. It’s a good change for everyone.
I have over 50k shards that I will never use, so it’ll be sad to see the high number go away, but other than that, I couldn’t imagine a soul being bothered by the legendary shard change. It’s honestly a great thing to help simplify parts of the game’s economy for new and returning players.
I have way more than I could possibly need, and really don’t mind their idea to remove them. What I am a bit “salty” over is the lack of currency exchange for their removal.
I would love to trade in a bunch to get prisms, ascendant alloys or the one to get the enhanced perks (I forget the name). But Bungie still has their arbitrary limits in place. Again, I support the removal, but I did still grind to accumulate them, so not being able to turn them into something useful feels like a waste.
I played Starfield on the Game Pass and was bored to tears after four hours. It made me want to explore space, but everything is so half-baked in Starfield that it drove me to reinstall and start playing more No Man’s Sky.
It will charge a small fee for every install, on top of the royalties. The issue seems to be that for small studios this fee is not feasible, and it also seems that pirated games and demos would count.
It’s only once they’ve taken in like $200k in revenue btw. Demos don’t count, and neither do Game Pass subscriptions or games bought via Humble Bundle etc.
It was actually true that multiple installs per user would count multiple times, but Unity rolled back that decision not long after announcing it. However, install bombs will still be possible; I seriously doubt Unity has a foolproof way to accurately identify the same user over multiple installs if the user is reinstalling maliciously to cost the developer money.
And? It would take a trivial amount of effort to spin up VMs and install the game on each. If I immediately tear the VM down after, I’m sure my cost would be covered by free AWS credits.
But also, what entitles them to even a portion of the game’s proceeds? Adobe doesn’t get a cut for every digital piece you create. Dunder Mifflin doesn’t get a cut every time you write a new contract. That’s absolute bullshit and they should get a fine for even thinking they’re allowed to be this big and change the rules like this. That’s a monopoly mindset.
I guess it really depends on how it’s done. I don’t think an actual cut of the proceeds is fair either, but stuff like having a low entry point and scaling your tool’s cost a bit according to the project’s success can be a good idea.
That said, after they tried to pull a stunt like this, I definitely wouldn’t trust them anymore.
The Sims 4 base game is actually already free on Steam. I hear from a good friend, not me personally, that it has some great porn mods on the LoversLab site. Some bad mods, too.
X-Men on the Sega Genesis. At one point you have to literally reset the console. I was 10 and didn’t understand that’s what it was telling me to do. No game had ever done that, and Professor X was breaking the 4th wall telling the player to do it. The game never broke the 4th wall otherwise. I didn’t understand until a decade later when I read it on some listicle.
I’m pretty sure I soft locked my New Vegas save a good few years ago, or at least locked myself out of the ending I wanted. I was going for the Yes-Man ending, but I wanted to let House upgrade the robots first. I let him do it and then killed him to get the platinum chip back, but turns out he didn’t have it on him. Without any way to give the chip to Yes-Man, I was SoL. I think you can still complete the game with a couple other factions, but I know for sure that I already pissed The Legion off so I don’t know how many options are left. Maybe I’ll dig up that save somehow and try again.
Also, in the original Thief games (Thief: The Dark Project, Thief: Gold, and Thief 2), there was a brief fadeout period between dying and getting kicked to the game over screen. This death state didn’t lock the controls, so you could still move around, interact with objects, and, critically, quicksave. If you happened to quicksave at the moment of your death, there was nothing you could do to get out of dying. There was only one quicksave slot and no autosaves, so if you weren’t manually saving every now and then, you had to start the entire game over. Learned to make occasional checkpoint saves the hard way.
The death mechanic did lead to at least one hilarious fan mission where you had to get through a door and complete the mission after falling to your death.
Yes Man is the failsafe ending, so you should always be able to do it I’m pretty sure. Killing Yes Man should work like killing Victor and he just jumps to a new body if I remember correctly.
The idea that you can control capitalists with ‘your wallet’ is flawed. It’s never worked that way. Capitalism is controlled by regulations, or it’s not and you get crony capitalism.