Basically it just means it's using a newer chip-making process to make the chip smaller and faster. It's sort of a no-brainer that a new chip would use an updated process and likely run faster than one made 7 or 8 years ago.
Chips built on a smaller process are both faster and more power efficient than chips on a larger process. Smaller transistors switch less capacitance and can run at lower voltage, so less power is dissipated as heat. Less heat means less energy wasted and more headroom to run the processor at higher clocks.
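As a rough rule of thumb (a simplified first-order model, not a spec-sheet number), dynamic switching power scales as

$$ P_{\text{dyn}} \approx \alpha \, C \, V^{2} \, f $$

where $C$ is the switched capacitance, $V$ the supply voltage, $f$ the clock frequency, and $\alpha$ the activity factor. Shrinking the process lowers $C$ and allows a lower $V$, so $f$ can go up without blowing the power (and heat) budget.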
It's meant for people in the tech space who can cross-compare the numbers with devices already on the market; some basic specs give you a ballpark estimate of what to expect. That said, this is from WCCFTech, which will post just about any rumor, so take it with a huge grain of salt.
For laymen, I'll do some of that cross comparison now.
5nm is the fabrication process used in AMD's current top-end CPUs and current-generation GPUs. In Apple terms, it's the same process used for its A14/A15 (iPhone 12–14) and M1/M2 (all current MacBook) chips (the only difference between the two generations is bleeding-edge vs. matured process, but they are effectively the same size).
For comparison's sake, the 5nm process is also used by Nvidia's current-generation RTX 4000 series GPUs, though on a special variant (basically customized for Nvidia). The clocks likely refer to CPU clocks, so I'll drop the GPU discussion here and move on to Nvidia's CPU offerings.
Nvidia essentially only puts CPUs on its enterprise and developer parts (the Tegra line, which is how the Switch ended up using one). Nvidia's "Thor" would be the only chip using 5nm, but little is known about Thor, so I'd refer to last-gen Orin, which already has development boards on the market (in the same way the Tegra X1 in the Switch also has development boards on the market).
Going by the raw numbers in Orin's Wikipedia section, the two middle SKUs, the two NX models, are the ones that would likely go into a Switch due to their TDP (10–25W): 10W is the typical handheld TDP, and 15–25W tends to be the TDP of these devices when docked. Since last-gen Orin was capable of holding 2.2 GHz on the CPU while docked, the rumored Switch SoC, at least on paper, sits closer to its full clocks than the older Tegra X1 in the Switch did (which was clocked to essentially 1000 MHz, almost half of the ~1800 MHz the chip was designed for, as seen in the commercially available Nvidia Shield TV). The CPU is an Arm Cortex-A78, so I'd compare it to phones using that core, such as those with the Snapdragon 888, but downclocked. I also forgot to mention: Orin's GPU is roughly similar to the Nvidia RTX 2050 mobile, if you need some remote idea of how it would perform graphically.
Opinion post starts here:
I'm in the boat that believes Nvidia is going to use Orin (or a variant of Orin shrunk down to 5nm, as Orin is an 8nm product), because Nvidia does not like doing custom designs for any customer. It's the reason Apple, for instance, dropped Nvidia; the last Nvidia GPU used in an Apple product was, I believe, the GTX 670. The choice sounds like a very Nintendo thing to do, because 1. Nintendo has a history of choosing the lower-end part nowadays, and 2. Nintendo prefers to sell its consoles at a profit rather than at a loss, so they're more inclined to pick the cheaper of any options. Given that Orin is an early-COVID design, the timeline makes sense: it would be similar to the original Switch (launched in 2017 using the Tegra X1, which was in devices in 2015). Orin shipped in early 2022, and the next Switch would likely launch in 2024.
Nothing until they actually announce something. Rumors aren't to be trusted at all; Nintendo has a history of disappointing on specs and making up for it with interesting gameplay.
Big time. Their chassis design will dictate performance too. They will get the best chip they can within a cost budget, and then thermal and battery limits will dictate where that chip actually ends up.
The Steam Deck is cool and a great device, but the Switch 2 will be sleeker; Nintendo won't settle for a 90-minute battery and a whiny fan, and that has trade-offs.
Switch 1 had a 720p screen with a 1080p max TV output. That's roughly a 2.25x increase in pixel throughput.
With the Switch 2, it's expected to be a 1080p screen and a 4K output; that's a 4x increase in pixel throughput. So a 2x output increase might not be adequate.
However, it is widely expected to have DLSS, which would greatly reduce that requirement.
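To put rough numbers on that (a back-of-the-envelope sketch; the DLSS internal render resolution below is an assumed example, not a confirmed spec):

```python
# Back-of-the-envelope pixel throughput comparison (not official specs).

def pixels(width, height):
    return width * height

switch1_screen = pixels(1280, 720)    # Switch 1 handheld screen
switch1_docked = pixels(1920, 1080)   # Switch 1 max TV output
switch2_screen = pixels(1920, 1080)   # expected Switch 2 screen
switch2_docked = pixels(3840, 2160)   # expected 4K TV output

print(switch1_docked / switch1_screen)  # ~2.25x: handheld -> docked, Switch 1
print(switch2_docked / switch2_screen)  # 4x:     handheld -> docked, Switch 2

# With DLSS, the GPU could render internally at, say, 720p and upscale to 4K,
# so the native rendering load is far lower than the output resolution implies
# (the 720p figure here is purely an assumed example).
dlss_internal = pixels(1280, 720)
print(switch2_docked / dlss_internal)   # 9x fewer native pixels than true 4K
```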
Cost is going to be a big factor. Nintendo doesn't want the best possible console. They want a good console that they can get into as many hands as possible. Even a simple active dock is going to add £10 to the price.
Seeing most of the fixes go towards that instead of immersion kinda… you'd think they'd highlight those immersive aspects more. The mods are sure as hell all about immersion, including working stock markets and such.
I was one of the Linux players who was banned. It took about 8 days for them to unban me in-game, and then another week for my EA ban history to change from “active” to “overturned,” with an email from EA. No apology, no in-game compensation, just basically a “be thankful you can play again.”
~25% of the Linux players are still banned. Some definitely cheated; some I would be shocked if they cheated, considering the number of hours and the money they had put into their accounts, along with how enthusiastic they were about getting themselves (and others) unbanned. So shocked that I’m almost certain they’re false positives. Which makes me hesitant to keep playing - what if that happens to me? If they treat their Steam Deck and Linux users this way, is that a game I really want to support?
Well Respawn explicitly enabling Easy Anti-Cheat’s Proton support (plus the game being Steam Deck Verified) is about as official as we’re gonna get for the vast majority of multiplayer games. I think it’s more of an issue around EA firing 99% of their QA testers…
Well, the thing with those "enabled EAC on Linux to see where it gets us" moves is that they're non-binding and non-committal. And they're made explicitly that way so that Linux users can't demand support, unlike Windows users, who are explicitly listed among the systems the game supports.
Legally, we don't have any grounds to demand the same support as Windows users.
Doesn't really matter. They don't need the Switch to have bleeding-edge performance; that isn't why it sells. It has to be affordable, and using older processes helps achieve that.
No, but it does need enough performance to run games at low quality settings. The Switch is so anemic that many big-budget games simply aren't even trying anymore, since acceptable performance can't be achieved without complete rewrites of engine code. A better Switch that is at least a low-spec gaming computer would let more big games make the effort of supporting it.
A big issue with modern game development is bad, inefficient code. Compare Nintendo titles' file sizes and performance to every other big game. I don't think any AAA PC/PS6/Xbox title is going to run on even the most powerful Switch in 3 years' time.
Headsets in the thousand-dollar range are plenty good and still not selling. Take the hint. Push costs down. Cut out everything that is not strictly necessary. Less Switch, more Game Boy.
6DOF inside-out tracking is required, but you can get that from one camera and an orientation sensor. Is it easy? Nope. Is it tractable for any of the companies already making headsets? Yes, obviously. People want pick-up-and-go immersion. Lighthouses were infrastructure and Cardboard was not immersive. Proper tracking in 3D space has to Just Work.
Latency is intolerable. Visual quality, scene detail, shader complexity - these are nice back-of-the-box boasts. Instant response time is do-or-die. Some monocular 640x480 toy with rock-solid 1ms latency would feel more real than any ultrawide 4K pancake monstrosity that’s struggling to maintain 10ms.
Two innovations could make this painless.
One, complex lenses are a hack around flat lighting. Get rid of the LCD backlight and use one LED. This simplifies the ray diagram to be nearly trivial. Only the point light source needs to be far from the eye. The panel and its single lens can be right in your face. Or - each lens can be segmented. The pyramid shape of a distant point source gets smaller, and everything gets thinner. At some point the collection of tiny projectors looks like a lightfield, which is what we should pursue anyway.
Two, intermediate representation can guarantee high performance, even if the computer chokes. It is obviously trivial to throw a million colored dots at a screen. Dice up a finished frame into floating paint squares, and an absolute potato can still rotate, scale, and reproject that point-cloud, hundreds of times per second. But flat frames are meant for flat screens. Any movement at all reveals gaps behind everything. So: send point-cloud data, directly. Do “depth peeling.” Don’t do backface culling. Toss the headset a version of the scene that looks okay from anywhere inside a one-meter cube. If that takes longer for the computer to render and transmit… so what? The headset’s dinky chipset can show it more often than your godlike PC, because it’s just doing PS2-era rendering with microsecond-old head-tracking. The game could crash and you’d still be wandering through a frozen moment at 100, 200, 500 Hz.
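A minimal sketch of that reprojection step (hypothetical function names, assuming the host streams a color image plus a depth map with known camera intrinsics, and the headset just re-splats the resulting points with the latest head pose):

```python
import numpy as np

def unproject(depth, color, fx, fy, cx, cy):
    """Turn a depth map + color image into a cloud of colored 3D points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return points, color.reshape(-1, 3)

def reproject(points, colors, pose, fx, fy, cx, cy, out_shape):
    """Splat the point cloud into a new view given the latest 4x4 head pose."""
    h, w = out_shape
    hom = np.concatenate([points, np.ones((len(points), 1))], axis=1)
    cam = (pose @ hom.T).T[:, :3]            # points in the new camera frame
    z = cam[:, 2]
    ok = z > 1e-3                            # keep points in front of the camera
    u = (cam[ok, 0] * fx / z[ok] + cx).astype(int)
    v = (cam[ok, 1] * fy / z[ok] + cy).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    u, v, c, zz = u[inside], v[inside], colors[ok][inside], z[ok][inside]
    order = np.argsort(-zz)                  # far-to-near, so near points win
    frame = np.zeros((h, w, 3), dtype=colors.dtype)
    frame[v[order], u[order]] = c[order]     # painter-style splat; gaps stay empty
    return frame
```

The gaps that stay empty in the splat are exactly the disocclusion problem described above, which is why sending more than one flat frame's worth of points matters.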
I think the slightly more viable version of the rendering side is to send a combination of: a low-res point cloud + low-res, large-FOV frames for each eye with a detailed (!) depth map + a more detailed image streamed where the eye is focused (with movement prediction) + annotations on which features are static, which changed, and where the light sources are. That would let the headset render the scene with low latency and continuously update the received frames based on movement with minimal noticeable loss in detail, tracking things like shadows and handling parallax flawlessly even if the angle and position of the frame were a few degrees off.
Undoubtedly point-clouds can be beaten, and adding a single wide-FOV render is an efficient way to fill space “offscreen.” I’m just cautious about explaining this because it invites the most baffling rejections. At one point I tried explaining the separation of figuring out where stuff is, versus showing that location to you, using beads floating in a fluid simulation. Tracking the liquid and how things move within it is obviously full of computer-melting complexity. Rendering a dot, isn’t. And this brain case acted like I’d described simulating the entire ocean for free. As if the goal was plucking all future positions out of thin air, and not, y’know, remembering where it is, now.
The lowest-bullshit way is probably frustum slicing. Picture the camera surrounded by transparent spheres. Anything between two layers gets rendered onto the further one. This is more-or-less how “deep view video” works. (Worked?) Depth information can be used per-layer to create lumpen meshes or do parallax mapping. Whichever is cheaper at obscene framerates. Rendering with alpha is dirt cheap because it’s all sorted.
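For what it's worth, the compositing side really is dirt cheap once the layers arrive depth-sorted; a toy sketch (shell count and resolutions are made up):

```python
import numpy as np

def composite_back_to_front(layers):
    """Blend pre-sorted RGBA layers (farthest first) with the standard 'over' operator.
    Each layer is an (H, W, 4) float array with values in [0, 1]."""
    h, w, _ = layers[0].shape
    out = np.zeros((h, w, 3))
    for layer in layers:                 # farthest shell first, nearest last
        rgb, a = layer[..., :3], layer[..., 3:4]
        out = rgb * a + out * (1.0 - a)  # no sorting needed at composite time
    return out

# Toy usage: three concentric "shells" rendered by the host, recomposited on
# the headset every time the view needs refreshing.
far, mid, near = (np.random.rand(8, 8, 4) for _ in range(3))
frame = composite_back_to_front([far, mid, near])
```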
Point clouds (or even straight-up original geometry) might be better at nose-length distances. Separating moving parts is almost mandatory for anything attached to your hands. Using a wide-angle point render instead of doing a cube map is one of several hacks available since Fisheye Quake, and a great approach if you expect to replace things before the user can turn around.
But I do have to push back on active fake focus. Lightfields are better. Especially if we’re distilling the scene to be renderable in a hot millisecond, there’s no reason to motorize the optics and try guessing where your pupils are headed. Passive systems can provide genuine focal depth.
My suggestions are mostly about maintaining quality while limiting bandwidth requirements to the headset; wouldn't a lightfield require a fair bit of bandwidth to keep updated?
(Another idea is to annotate moving objects with predicted trajectories.)
Less than you might think, considering the small range of perspectives involved. Rendering to a stack of layers or a grid of offsets technically counts. It is more information than simply transmitting a flat frame… but update rate isn't do-or-die if the headset itself handles perspective.
Optimizing for bandwidth would probably look more like depth-peeled layers with very approximate depth values. Maybe rendering objects independently to lumpy reliefs. The illusion only has to work for a fraction of a second, from about where you’re standing.
Alpha-blending is easy because, again, it is a set of sorted layers. The only real geometry is some crinkly concentric spheres. I wouldn’t necessarily hand-wave Silent Hill 2 levels of subtlety, with one static moment, but even uniform fog would be sliced-up along with everything else.
Reflections are handled as cutouts with stuff behind them. That part is a natural consequence of their focus on lightfield photography, but it could be faked somewhat directly by rendering. Or you could transmit environment maps and blend between those. Just remember the idea is to be orders of magnitude more efficient than rendering everything normally.
I thought the Windows MR lineup filled that gap pretty well. It was much cheaper than most of the other alternatives back then, but it never really took off, and MS has quietly dropped it.
Still $300 or $400 for a wonky platform. That's priced better than I thought they were, but the minimum viable product is far below that, and we might need a minimal product to improve adoption rates. The strictly necessary components could total tens of dollars… off the shelf.