I haven’t played on my Switch or really touched it in a few months now, since around the Yuzu news. I’m too disgusted by them to want to play on it, and shortly before that, I had decided to mod it and not buy their games again. Now I’m 100% certain I’m going to do that. If I really want to give money to some indie game I like, I’ll buy it on Steam.
Also, don’t buy this, because like the previous flight simulator, they will restrict you to slow Microsoft servers, so your first 150 hours in-game will actually just be spent downloading additional content at whatever speed Microsoft has capped their servers at. 5 MBps? Maybe, sometimes!
That’s certainly something they’re gonna want to fix. I hope DF and GN pick up on this; seems like free views, and I’d love to hear what they’ve got to say on the matter.
Edit: Also wondering if it’s the app itself, or if the performance hit disappears when you disable the overlay. I only skimmed the article to see which games are affected and how badly, so my bad if that’s mentioned.
Edit 2:
HUB’s Tim tested it and found that it’s the overlay, or rather the game filter portion of the overlay, causing the performance hit. You can disable this part of the overlay in the app’s settings, or disable the overlay altogether.
He also found that this feature wasn’t impacting performance on GeForce Experience, so it’s very likely a bug that’s gonna be fixed.
To clarify: actively using game filters can have an impact in either app, but right now they cause a performance hit just by being enabled, even when you’re not actively using them; that’s the bug.
The only outlier where just having the app installed hit performance was the Harry Potter game.
I don’t think you’re understanding. The testing they did was presumably fine and the performance hit is probably unacceptable. But mentioning but not testing the scenarios of
Here’s the quote, for people allergic to reading the update in the article.
Update: Nvidia sent us a statement: “We are aware of a reported performance issue related to Game Filters and are actively looking into it. You can turn off Game Filters from the NVIDIA App Settings > Features > Overlay > Game Filters and Photo Mode, and then relaunch your game.”
We have tested this and confirmed that disabling the Game Filters and Photo Mode does indeed work. The problem appears to stem from the filters causing a performance loss, even when they’re not being actively used. (With GeForce Experience, if you didn’t have any game filters enabled, it didn’t affect performance.) So, if you’re only after the video capture features or game optimizations offered by the Nvidia App, you can get ‘normal’ performance by disabling the filters and photo modes.
So, TomsHW did indeed test this (or at least claims to have), and found that it’s the filters and photo mode causing the performance hit.
Still a pretty stupid problem to have, considering the old filters did not cause this problem, but at least there’s a workaround.
… I’m curious whether this new settings app even exists for Linux, or has been tested on it.
They basically come overclocked right out of the factory these days, given how hard Intel pushes them just to make their numbers look bigger.
Next time I build a PC, I plan to spend extra on hardware that can run games decently while producing as little heat as possible. My current PC is like a space heater when it’s running and it’s unbearable to play games on it for any extended periods during the summer months.
The only reason I went for an 80+ Platinum PSU instead of an 80+ Bronze one was that it generates less heat (and the fact that the Platinum had a really nice price at the time). Doing it for the power savings isn’t worth it, but a cooler case is nice. tomshardware.com/…/what-80-plus-levels-mean,36721…
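A quick back-of-the-envelope sketch of the heat difference (the 300 W load and the efficiency figures are assumptions, roughly the nominal 80 Plus Bronze/Platinum targets at 50% load):

```c
#include <stdio.h>

/* Rough waste-heat comparison between an 80+ Bronze and an 80+ Platinum PSU
 * at an assumed 300 W gaming load. Efficiencies are approximate nominal
 * 80 Plus targets at 50% load, not measured values. */
int main(void) {
    double dc_load_w = 300.0;     /* what the components actually draw */
    double eff_bronze = 0.85;     /* ~85% efficient */
    double eff_platinum = 0.92;   /* ~92% efficient */

    double waste_bronze = dc_load_w / eff_bronze - dc_load_w;      /* ~52.9 W */
    double waste_platinum = dc_load_w / eff_platinum - dc_load_w;  /* ~26.1 W */

    printf("Bronze waste heat:   %.1f W\n", waste_bronze);
    printf("Platinum waste heat: %.1f W\n", waste_platinum);
    printf("Saved as heat:       %.1f W\n", waste_bronze - waste_platinum);
    return 0;
}
```

So at that load you shave roughly 25 to 30 W of heat off the PSU itself, which barely moves the electricity bill but is noticeable inside a small case.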
The PC I made before my current one was my first SFF build, and I had a lot of fun figuring out heat loads and what it means to build SFF. The Ryzen 7 3700X (65 W) was just at the top of what my small Noctua CPU cooler could manage in the Fractal Node 202.
Then I got a Steam Deck and, holy heckers, that’s some efficient gaming. If you play docked, it would be the ultimate low-power/low-heat gaming device (handheld, you’re so close to the source of heat that it’s still hot in the summer). I think it draws 35 W at peak load over time, with a 40 W short-burst max. For the whole package, incl. screen! Captain Dozerman (Bill Hader) from B99 would have been proud. Efficiency, efficiency, efficiency!
Interestingly, Gorelkin emphasized that the console should serve not merely as a platform for porting old games but also as one for popularizing domestic video games.
Apparently state-subsidized efforts have not yet popularized appropriate domestic games on their own.
Because GTA has 99.99% of the data on disk. MFS2024 is trying to keep the install size from ballooning to 500 GB, so rather than putting the whole world on your PC, they stream it in. GTA doesn’t do that.
Because everything has to fit on the average gaming PC or console storage, they have some pressure to optimize data size. A simulator that streams everything has fewer constraints on data size, and less motivation to keep it reasonable.
GTA 5’s entire game world is just the San Andreas area. The point of MFS2024 is that you can literally see your real-world house from the air. It’s so, so, so much larger than GTA 5’s < 100 km² that it’s a totally unfair comparison.
I’m not suggesting putting the whole world on a 120 GB disk.
That being said, most of the textures and building geometries used for San Andreas could be reused for other cities on the West Coast. Areas between cities that have a lower density could take much less space.
So doubling the physical area covered doesn’t necessarily require doubling the amount of data. But the bandwidth usage of MSFT’s simulator suggests they are not reusing data when they could be.
Yeah, you’re not getting the goal. They are using actual data from the areas you’re flying over. You’re suggesting they look at it like a game, where they reuse textures and models. Their goal is the opposite: to have the game look like the real world.
Even in MFS2020, my house roughly looks like my house, and the taller structures look like they do in my city. They aren’t just skyscraper#93781 and bridge#12381; they are all unique structures that use the Bing Maps data to look just like they do in real life. The landmarks in my city are my city’s landmarks. They aren’t just generic buildings.
I happen to know a bit about games and simulators. From a plane’s point of view, houses don’t look unique. A small number of models is enough to fairly represent most houses. There may be a minority of structures that are really unique (stadiums, bridges, landmarks, …), but the vast majority of buildings aren’t unique. Even if two buildings have different heights, it’s possible to reuse textures if they’re built from the same material.
MSFT appears to have designed the simulator by treating every building as unique, but if they compared buildings and textures, ideally using automation, they would see there’s a massive amount of duplication.
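For what it’s worth, the crudest version of that automation is just content-hashing asset files and grouping identical hashes. A toy sketch below; it only catches byte-identical files (real dedup tooling would use perceptual hashing and geometry comparison), and the file names are made up:

```c
#include <stdio.h>
#include <stdint.h>

/* Toy sketch of "compare textures using automation": hash each file's bytes
 * with FNV-1a so byte-identical assets can be grouped by sorting the output.
 * This only finds exact duplicates; real asset dedup would use perceptual
 * hashes and mesh comparison.
 * Usage: ./texhash assets/*.dds | sort */
static uint64_t fnv1a_file(FILE *f) {
    uint64_t h = 1469598103934665603ULL;   /* FNV-1a offset basis */
    int c;
    while ((c = fgetc(f)) != EOF) {
        h ^= (uint64_t)(unsigned char)c;
        h *= 1099511628211ULL;             /* FNV-1a prime */
    }
    return h;
}

int main(int argc, char **argv) {
    for (int i = 1; i < argc; i++) {
        FILE *f = fopen(argv[i], "rb");
        if (!f) { perror(argv[i]); continue; }
        printf("%016llx  %s\n", (unsigned long long)fnv1a_file(f), argv[i]);
        fclose(f);
    }
    return 0;
}
```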
The fact that you started by comparing it to GTA 5 makes it obvious you don’t know, but okie dokie, at this point I have to assume you’re just trolling.
Apples and oranges. GTA V has a small, entirely hand-built world. It’s just 80 square kilometers and was meant to fit onto two DVDs / one Blu-ray Disc. Real-world Los Angeles, which this is based on, is 1,210 square kilometers.
This Flight Simulator, on the other hand, covers the entire planet. Going by surface area, that’s 510.1 million square kilometers. It’s using a combination of satellite and aerial photography, radar maps, photogrammetry (reconstructing 3D objects - buildings and terrain in this case - from photos), OpenStreetMap and Bing Maps data, as well as hand-built and procedurally generated detail. There’s also information on the climate, live weather data, animal habitats (to spawn the right creatures in each part of the world), etc. We are talking about two petabytes of data, which is an unfathomable amount outside of a data center.
You cannot optimize your way out of this. The developers have the ambition to create the most detailed 1:1 virtual facsimile of this planet, and there is no other way of achieving that goal. You cannot store two petabytes of data on a consumer PC at the moment, you cannot compress two petabytes of data down to a couple hundred gigabytes, and if your goal is accuracy, you cannot just reuse textures and objects from one city for another. That’s what every prior version of this flight simulator did, and if you remember those, the results were extremely disappointing, even for the time.
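Just to put the two-petabyte figure into perspective, a quick sketch (the 20 TB and 1 TB drive sizes are assumed round numbers):

```c
#include <stdio.h>

/* Rough sense of scale for a ~2 PB dataset versus consumer storage,
 * ignoring redundancy and filesystem overhead. */
int main(void) {
    double dataset_tb = 2000.0;   /* 2 PB expressed in TB */
    double big_hdd_tb = 20.0;     /* an assumed large consumer HDD */
    double game_ssd_tb = 1.0;     /* a typical 1 TB game drive */

    printf("20 TB drives needed:   %.0f\n", dataset_tb / big_hdd_tb);   /* 100 */
    printf("1 TB game SSDs needed: %.0f\n", dataset_tb / game_ssd_tb);  /* 2000 */
    return 0;
}
```

That’s a hundred large hard drives just for the raw data, before any redundancy.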
By the way, if you don’t have an active Internet connection, Flight Simulator 2020 (and 2024, if I’m not mistaken) will still work. They’ll just do what you’re suggesting and spawn generic, procedurally generated buildings and other detail instead (in between a handful of high-detail “hero” buildings in major cities), based on low-res satellite photography and OSM data, which is relatively small in size even for the whole planet and tells the program where a building is and what its rough outline and height might be - but not what it actually looks like. Here’s a video from an earlier version of FS 2020 that shows the drastic difference: youtu.be/Z0T-7ggr8Tw
It is worth stressing that you will see this kind of relatively low-detail geometry even with an Internet connection any time you’re flying in places where the kind of high-quality aerial photography required for photogrammetry isn’t available yet. FS 2020 has, however, seen continuous content updates, with entire regions being updated with higher-quality photogrammetry and manually created detail every couple of months - and FS 2024 will receive the same treatment. I am generally not a fan of live-service games, but this is an exception; it makes the most sense here.
The one major downside is that eventually, the servers will be shut down. However, since you can - in theory - choose to cache all of the map data locally if you have the amount of storage required, it is actually possible to preserve this data. That’s far out of reach for most people (we are talking low six figures in terms of cost), but in a few decades, ordinary consumer hardware is likely going to be able to store this amount of data locally. The moment Microsoft announces the shutdown of this service, people with the means will rush to preserve the data. Imagine what kind of amazing treasure this could be for future generations: a snapshot of our planet, of our civilization, with hundreds of cities captured in enough detail to identify individual buildings.
Thanks for the interesting details. Glad to see there’s an offline version that disables photogrammetry.
The church in England is a good example of where a generic rectangular building model doesn’t work. They could improve the offline version by adding a church model to the set of offline models and using it for 90% of the churches in Western Europe.
A fully realistic model of every single building may be cool for architects, future historians, city planners, and gamers who are sightseeing… but it doesn’t help much when you’re learning to fly. Having a virtual world that looks similar to the real one, with buildings of the right size and position, landmarks, and hero buildings, is good enough, and doesn’t require nearly as many resources. There are other parts of flight simulators that are more important to work on.
Small clarification: Satellite imagery is only used where higher quality aerial photography isn’t available. For cities with full photogrammetry, a plane needs to fly over the whole area twice (the second time at 90 degrees relative to the first pass) in order to capture buildings from all sides.
So long as games don’t force it to be on, then whatever. Although I expect it to become a requirement for a usable framerate in next-gen games. Big developers don’t want to optimize anymore, and upscaling/framegen technologies are a great crutch.
Of course nobody wants to optimize. It’s boring. It messes up the code. It often requires cheating the player with illusions. And it’s difficult. Not something just any junior developer can be put to work on.
You’d expect that when Raytracing/Pathtracing and the products that drive it have matured enough to be mainstream, devs will have more time for that.
Just place your light source and the tech does the rest. It’s crazy how much less work that is.
But we all know the publishers and shareholders will force them to use that time differently.
Eh, in my experience that’s not how development works. With every new tool to improve efficiency, the result is just more features rather than using your newfound time to improve your code base.
It’s not just from the publishers and shareholders either. Fixing technical debt is hard, and the solutions often need a lot of time for retrospection. It’s far easier to add a crappy new feature on top and call it a day. It’s the lower-effort thing to do for everyone, management and the rank-and-file programmers alike.
New features are what sells a product, so that’s not far from my original point, I’d say.
Definitely a bit of both, and improving code is never the highest priority, yeah.
Who are you directing the comments at? The dev company or individuals? I disagree on the latter. On the former, I still think it’s a mischaracterization of the situation. If the choice is to spend budget on scope and graphics at the expense of optimization, that doesn’t seem a hard choice to make.
I might have generalized a bit too much. Of course some individual devs love the challenge of getting better performance out of anything.
But not enough of them that every dev company has an army of good developers with expertise in exactly the areas where they need performance. There are a lot of ways one dev can specialize: GPU APIs (DirectX/OpenGL/Vulkan/etc.), the OS, the game engine, disk access, database queries. Someone who knows a graphics API well might not know how to optimize database queries. Throwing money at the problem doesn’t help either; those who know this stuff usually already have good jobs. So you might have no choice but to use the devs you have, and the money you have budgeted, to release the game within the contracted time.
DLSS 3.5, for example, comes with that new AI-enhanced ray reconstruction that makes RT features look better, respond to changes in lighting conditions faster, and still run at pre-enhanced levels of performance or better.
And Reflex fixes a lot of the latency issue.
A lot of games don’t use the latest version of DLSS though, so I don’t blame you if you have a bad experience with it.
From what I’ve understood of this, it’s transpiling the x86 code to ARM on the fly. I honestly would have thought that wasn’t possible, but hearing that they’re doing it: it will be a monumental effort, but very feasible. The best part is that once they’ve gotten the CRT and the cdecl calling convention working, actual application support won’t be far behind. The biggest challenge will likely be inserting memory barriers correctly; a spinlock implemented in x86 assembly is highly unlikely to work correctly without a lot of effort to recognize and transpile that specific structure as a whole.
I thought fat binaries don’t work like that - they include multiple instruction sets with a header pointing to the sections (68k, PPC, and x86).
Rosetta, to the best of my understanding, did something similar - but relied on some custom microcode support that isn’t rooted in ARM instructions. Do you have a link that explains in a bit more depth how they did that?
Fat binaries contain both ARM and x86 code, but I was referring to Rosetta, which is used for x86-only binaries.
Rosetta does translation of x86 to ARM, both AOT and JIT. It does translate to normal ARM code; the only dependency on an Apple-specific custom ARM extension is that the M-series processors have a special mode that implements x86-like strong memory ordering. This means Rosetta does not have to figure out where to place memory barriers, which allows for much better performance.
So when running translated code Apple Silicon is basically an ARM CPU with an x86 memory model.
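A minimal sketch of why that matters, using C11 atomics (a generic spinlock for illustration, not Apple’s or Microsoft’s actual translator code): on x86, ordinary loads and stores already behave roughly like the acquire/release operations below, so naively translated code keeps working, while a weakly ordered ARM core needs the explicit orderings - either inserted by the translator or provided by a TSO-style hardware mode.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Minimal spinlock with C11 atomics. The explicit acquire/release orderings
 * are what a weakly ordered ARM target needs; x86's stronger memory model
 * gives you this behaviour essentially for free, which is why x86 code
 * translated 1:1 to plain ARM can break without added barriers. */
typedef struct {
    atomic_bool locked;
} spinlock_t;

static void spin_lock(spinlock_t *l) {
    /* acquire: reads/writes after the lock can't be reordered before it */
    while (atomic_exchange_explicit(&l->locked, true, memory_order_acquire)) {
        /* spin until the previous holder releases the lock */
    }
}

static void spin_unlock(spinlock_t *l) {
    /* release: reads/writes before the unlock can't be reordered after it */
    atomic_store_explicit(&l->locked, false, memory_order_release);
}
```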
Not necessarily a bad thing if they can make the prices lower; if most people end up buying cheaper but adequate hardware, developers will have an incentive to make their games work with that hardware. We have seen what games with Nvidia partnerships ended up with in terms of bugs on ATI GPUs, but aren’t those problems less severe now?
Definitely not a bad thing. I’d love more competition in the mid-range, because recent AMD GPUs have basically been slightly worse than Nvidia at slightly lower prices. I still think GPUs like the 4060 are way too expensive, so if AMD actually undercuts them, it could be nice for everyone.
God, that makes me feel old. Yeah, Garry’s Mod is literally named that because it’s his mod. He started making it as a teen, released it for free for a long time, and then leveraged it into a job and a company (Facepunch, which also made Rust a lot later).
I think Garry’s Mod 10 was the first paid release, and he and his team have just been updating the paid version since.
The game’s official site used to just be his blog, and I believe he still posts semi-regularly about game development, running a company, and life stuff. Always seemed like a real stand-up guy.
There will be a single-digit number of games for it, all of them will require subscriptions to play, and half of them will be canceled within +/- 2 months of launch and then be impossible to play because the servers are shut down.
How many PS5 Pros will be sold at retail, taken out of the package, hooked up to a TV, and never used to play a game that you couldn’t already play on a normal PS5 or even a PS4?