I can see it being loved by certain communities, like the Deep Rock crowd and such, but come on, you have the biggest and most popular fantasy world ever made and this is what you get out of it?
The company that owns the LotR licensing is currently in a "license the IP out to anyone" phase. So while you might think there's someone out there making careful decisions about what to do with the property, it's more that a bunch of low-budget companies are asking for the license and getting it for cheap.
You’re getting downvoted, but this will likely prove correct. DLSS frame generation looks dubious enough on dedicated hardware; doing this on shader cores means it will be competing with the 3D rendering, so it will need to be extremely lightweight to actually offer any advantage.
I wouldn’t say compete, since the whole concept of frame generation is that it generates more frames when GPU resources are idle because another part of the chain is holding the GPU back from rendering more frames. It’s sort of like how I view hyperthreads on a CPU: they aren’t a full core, but a thread that gets utilized when there are points in a CPU calculation that leave a resource unused (e.g., if a core is using the AVX2 unit to do some math, a hyperthread can use the ALU that might not be in use, because it’s free).
It would only compete if the time it takes to generate one additional frame is longer than the time the GPU sits idle due to some bottleneck in the chain.
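That condition can be sketched as a toy timing model (the function name and all the millisecond figures below are made-up illustrations, not measurements from any real GPU):

```python
# Toy model: does frame generation "compete" with rendering for GPU time?
# All numbers are illustrative assumptions.

def framegen_fits(render_ms: float, frame_budget_ms: float, interp_ms: float) -> bool:
    """Frame generation is 'free' only if the interpolated frame can be
    produced inside the idle gap the bottleneck leaves on the GPU."""
    idle_ms = frame_budget_ms - render_ms  # time the GPU sits idle per frame
    return interp_ms <= idle_ms

# CPU-bound game: GPU renders in 8 ms but only gets fed every 16.6 ms.
print(framegen_fits(render_ms=8.0, frame_budget_ms=16.6, interp_ms=3.0))   # True: fits in the idle gap
# GPU-bound game: GPU already uses nearly the whole frame budget.
print(framegen_fits(render_ms=16.0, frame_budget_ms=16.6, interp_ms=3.0))  # False: must steal render time
```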
You guys are talking about this as if it’s some new, super-expensive tech. It’s not. The chips they throw inside massively cost-reduced TVs do a pretty damn good job these days (albeit still laggy), and there is software you can run on your computer that does compute-based motion interpolation, and it works just fine even on super old GPUs with terrible compute.
Yeah, it does, which is something TV tech has to derive for itself; a TV chip has to figure that stuff out from the frames alone. It’s actually less complicated in a fun kind of way. But please do continue to explain how it’s more compute heavy.
Also, just to be very clear, TV tech also incorporates motion vectors into the interpolation; that’s the whole point. It just has to compute them from frame comparisons. Games have that information encoded in various gbuffers, so it’s already available.
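For illustration, here is a minimal sketch of the frame-comparison work a TV chip has to do when no gbuffer motion vectors exist: brute-force block matching between two frames. The function name, block/search sizes, and the tiny test frames are all made up for the example; real interpolation silicon uses far more sophisticated search:

```python
import numpy as np

# Minimal block-matching motion estimation: for each block in the current
# frame, find the offset into the previous frame with the lowest
# sum-of-absolute-differences (SAD). Purely illustrative.

def block_motion(prev: np.ndarray, curr: np.ndarray, block: int = 4, search: int = 2):
    """Return a dict mapping each block's top-left corner to its best
    (dy, dx) offset into the previous frame."""
    h, w = curr.shape
    vectors = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            target = curr[by:by + block, bx:bx + block].astype(int)
            best, best_off = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue  # candidate window falls outside the frame
                    sad = np.abs(prev[y:y + block, x:x + block].astype(int) - target).sum()
                    if best is None or sad < best:
                        best, best_off = sad, (dy, dx)
            vectors[(by, bx)] = best_off
    return vectors

# A bright square moves 2 px to the right between frames; the matcher
# reports (0, -2) for the affected block, i.e. "this block came from
# 2 px to the left in the previous frame".
prev = np.zeros((8, 8), dtype=np.uint8)
prev[2:6, 0:4] = 255
curr = np.zeros((8, 8), dtype=np.uint8)
curr[2:6, 2:6] = 255
print(block_motion(prev, curr)[(4, 4)])  # (0, -2)
```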
People made the same claim about DLSS 3. But artifacts in those generated frames are barely perceptible, and certainly less noticeable than frame stutter. As long as FSR 3 works half-decently, it should be fine.
And the fact that it works on older GPUs, including those from Nvidia, really shows that Nvidia was just blocking the feature in order to sell more 4000-series GPUs.
You aren't going to use these features on extremely old GPUs anyways. Most newer GPUs will have spare shader compute capacity that can be used for this purpose.
Also, all performance is based on compromise. It is often better to render at a lower resolution with all of the rendering features turned on, then use upscaling & frame generation to get back to the same resolution and FPS, than it is to render natively at the intended resolution and FPS. This is often a better use of existing resources even if you don't have extra power to spare.
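As a back-of-the-envelope illustration of that trade-off (the resolutions and the ~67% per-axis "quality" scale below are just example numbers, not tied to any particular upscaler):

```python
# Rough pixel-budget comparison: render internally at 1440p and upscale
# to 4K, versus shading every pixel natively at 4K.

def pixels(width: int, height: int) -> int:
    return width * height

native_4k = pixels(3840, 2160)   # 8,294,400 shaded pixels per frame
internal  = pixels(2560, 1440)   # 3,686,400 at a ~67% per-axis scale

savings = 1 - internal / native_4k
print(f"Shaded pixels per frame drop by {savings:.0%}")  # prints "... drop by 56%"
```

That freed-up shading budget is what pays for the heavier rendering features, the upscale pass, and the generated frames.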
Because I think the post assumes that the GPU is always using all of its resources during computation, when it isn’t. There’s a reason benchmarks can make a GPU hotter than a game can, and not all games pin GPU utilization at 100%. If a GPU is not pinned at 100%, there is a bottleneck in the presentation chain somewhere (which means unused resources on the GPU).
I still think it’s a matter of waiting for the results to show up later. RDNA3 does have an AI engine on it, and the gains it might bring to FSR3 could differ, in the same way XeSS behaves differently with its branching logic. Too early to tell, given that all the test-suite titles shown so far are on RDNA3 and it doesn’t officially launch until two weeks from now.
Frame generation is limited to 40-series GPUs because Nvidia’s solution is dependent on their latest hardware. The improvements to DLSS itself and the new raytracing stuff do work on 20/30-series GPUs. That said, FSR 3 is fantastic news; competition benefits us all, and I’d love to see it compete with DLSS itself on Nvidia GPUs.
Used to love this game back in the days of Xbox 360 Arcade. I remember asking my parents to help me buy Microsoft Points to buy DLC or Arcade games. Remember those? Ah, nostalgia.
They’ve also stated FSR3 will continue to be open source, and previous versions have been compatible with Vulkan, on the developer end at least. I can’t find out, though, whether this new Hyper-RX application, which runs agnostic of any developer integration, supports Vulkan. Guess we’ll find out when it’s released shortly.
It can probably be integrated into anything, like FSR 1 and 2. Valve can just update their Gamescope compositor to use it instead of FSR 1. I wonder, though, what the image quality is going to be like when upscaling/generating frames from such small input resolutions. Previous versions of FSR really only made sense from around 1080p upwards.
While I wish CDPR had ripped off the band-aid and canceled the PS4 and Xbox One versions (with a refund or a free upgrade to next gen), my controversial opinion is that this game has been GoTY material on PC since day one. Plenty of my favorite games had rough launches (Vampire: The Masquerade - Bloodlines, No Man’s Sky, Witcher 3, Skyrim; hell, I even lost an hour to the cross-save bug in Baldur’s Gate 3), but it became a meme to hate on CP2077, and I understand why the devs claim to this day that the game deserves more credit.
I understand that players are tired of broken launches, and I agree that devs should be more cautious about which features they show in alpha/beta stages to manage hype, but I think the oversized backlash this game received stopped or delayed a large swathe of gamers from experiencing a truly great game, and gave the devs way more stress than they deserved.
Honestly, you’re not wrong, but the launch of the game was genuinely horrible. The game was good in theory but only halfway executed, and it was shoved in our faces as something great when it obviously wasn’t when they shipped it.
Did you somehow get a different game? Or maybe you somehow avoided all the bugs everyone else experienced. Still, even if it worked perfectly, game of the year seems a bit much.
Even now, years later, there are still unfixed bugs. In my save, there’s a mission marker showing up in a building that is impossible to enter. I even restarted the game clean from the beginning a year ago and hit the same damn bug again.
Others have reported it too, so I’m not the only one.
I didn’t have any gamebreaking bugs, but had soooo so many “how the fuck did this pass quality control?” bugs. Most of them were pretty funny, like the time I didn’t understand how the cyberpsycho quests worked, and tried to take the unconscious body with me.
The game system did not like having that body in the trunk of my car, with hilarious Dali-esque consequences.
Aside from that: the deep systems that were promised turned out extremely shallow; the on-screen map was fucked, too small to see turns coming (pathing too CPU-intensive when zoomed out?); the HUD, last time I played, was still too small to read on a 4K screen; and the car handling/driving was still atrocious (at least, last time I played). It is a fun game, especially for those picking it up now. Mods make it much more fun.
Yes, it was far far more than just a buggy release. Even if the game was originally released in the state it’s in now people would still have been pissed. The bugs were just a distraction.
Yeah, the only other game that was so brazenly lied about before launch was No Man’s Sky, and to their credit, Hello Games has since implemented everything that was promised back then, and then some, for free.
I think a lot of the negativity also comes from misunderstanding what the game is.
Just like you, I played the game on release (on PC) and it is for me one of the best games of all time for one specific reason: immersion and story. That’s exactly what I expected from CDPR after Witcher 3 (another story and immersion focused game) and that’s exactly what I got. I didn’t expect a company known for their story focus and relatively weaker gameplay to deliver a game focused on gameplay or sandbox elements.
I think a lot of people wanted something that CDPR was never going to deliver, but it seems like Phantom Liberty is leaning more into the sandbox that people wanted and (unsurprisingly) didn’t get at release.
You can’t say that when for literally YEARS CDPR advertised the game as being exactly that. A futuristic, play-who-you-want RPG sandbox. Instead we practically got a Far Cry clone with light RPG elements. They just quietly stopped advertising it as such.
But people remember. Just because you didn’t expect it yourself doesn’t mean it wasn’t advertised as such.
I see this argument a lot when people criticize this game, and it seems like you’re all suffering from some Mandela-effect fever dream. Or maybe you just didn’t watch any of the trailers and dev commentary?
CDPR literally marketed the game on, and constantly raved about, the city being the most immersive sandbox possible: fully simulated AI living full lives and reacting realistically to you, and a full police system that would enforce harsh punishments. They wanted players to believe it would have the same level of interactivity with the world as games like GTA or Watch Dogs.
Idk what made you think they were “never going to deliver” that when it was constantly being talked about by the PEOPLE WHO MADE THE GAME.
“Well, CDPR has never made a game like that, and the Witcher series wasn’t like that, so it doesn’t matter that they spent years telling everyone it was going to be that way! It’s your fault for not knowing!”
If by rough launch you mean they pretty much omitted the majority of things they said would be in the game, then yes. Rough launch. And here I am worrying when indie devs don’t have enough time to fix a minor bug in their games.
Broken launches aren’t something to merely be “tired of.” Imagine if every time you bought a car it had to be recalled, every sandwich you ate gave you food poisoning, or all the tools you bought snapped the first time they underwent a few foot-pounds of torque.
This market runs on money, and the only way to compel publishers to put out functional games is to refuse to pay for subpar products. Anything less and they’ll go, “Oh shit, we’re still making money. I guess we don’t need to provide anything beyond a tech demo to rake in 70 bucks plus cosmetics, etc.”
They absolutely lied about their systems. Fully functional crowds and AI? Bullshit. A cop system that actually works? Bullshit. Cops pop out of a hole in the ground and insta-gank you. If they can’t even replicate decade-old police tech that games like GTA IV got right, they should have just given you a “DON’T KILL CIVILIANS” rule with a game-over screen, instead of a half-baked system where police spawn behind you and instantly kill you. A total waste of a mechanic that they couldn’t fully implement or commit to.
While the game was mostly broken on last-gen consoles, I have a fairly powerful desktop and still got game-breaking bugs on occasion, with mildly infuriating ones fairly frequently. To say the game was GOTY on day one is absolutely mind-boggling. Your standards for games are clearly through the floor on this one if you really consider it GOTY on release, lol. I wouldn’t even consider it that good NOW, and I’ve just recently 100%'ed the damn game.
I’ll agree that it got more hate than it deserves, but let’s not swing the pendulum the other way and pretend this is some nugget of gold that people just didn’t see. It was broken and got treated as such.
I’ve been a huge fan of CDPR since The Witcher 2. I love the world of Cyberpunk. The combination seemed like a dream come true. So I deliberately held out on absolutely any and all spoilers. It was not easy.
I bought a new computer for the game. I booked a two week vacation to play the game.
And, I mostly enjoyed it. It was a little bit underwhelming, and some systems seemed a bit contrived. But, it was still fun, with some amazing city design. Definitely not something that I would call GoTY.
Then, I looked at all the outrage, and I looked at the promotional material. And, oh boy, did that seem fraudulent. Like, “how come no one went to jail”-fraud. Pretty straight up lying about every part of the game. And why? I don’t know, but it seriously stained my view of CDPR.
I played through it at launch on my lower end gaming laptop (1050 GPU) that I had at the time. With some fiddling, and basically turning everything to lowest I got it to just about playable framerates.
Massively enjoyed the game and its universe. I hit a few bugs but nothing that was hugely game breaking, at least nowhere as bad as people were saying. I also managed my expectations knowing my hardware at the time was low-end/dated.
Then I saw footage of the game being played on base tier PS4 and Xbox One hardware and holy shit, if I’d bought it on either of those (especially Xbox), I’d have been furious. The game was not ready and should never have been released for those consoles. It clearly needed at least PS4 Pro or One X to even be remotely playable at launch.
Never, ever, ever buy a game until after reviews come out. It’s not worth it. It doesn’t matter if it’s from a legendary game studio; they can fire all of the developers, they can fire all of the managers, and still be called the same thing. The name is no guarantee of anything.
Pre-ordering games had a point back when they were mostly physical, because if you didn’t, you ran the risk of them running out. Although I didn’t pre-order GTA V, and just walked into a game store on the day of release and bought two copies, so since then I’ve rather been of the opinion that even with physical products, it’s probably not very likely they’re going to run out.
But now that everything’s digital, there’s 100% no reason to pre-order. Make them make the actual product they claimed to have made.
DLSS3 and FSR2 do completely different things. DLSS2 is miles ahead of FSR2 in the upscaling space.
AMD currently doesn’t have anything that can even be compared to DLSS3. Not until FSR3 releases (next quarter, apparently?) and we can compare AMD’s framegen solution to Nvidia’s.
AMD has had features in years past before Nvidia did; it’s just that fewer people paid attention to them until they became hot topics after Nvidia implemented its own versions.
An example was Anti-Lag, which AMD and Intel implemented before Nvidia.
But people didn’t care about it until ULL mode turned into Reflex.
AMD still holds onto Radeon Chill, which basically keeps the GPU running slower when you’re idling in game and not a lot is happening on screen. The end result is lower power consumption when AFK, as well as relatively lower fan speeds and better acoustics, because the GPU doesn’t constantly work as hard.
I’m not saying Reflex is bad or that esports pros don’t use it. It’s just that “theoretical” isn’t the best choice of word for the situation: it does make a difference, it’s just much harder to detect, similar to the latency difference between close-but-not-identical framerates, or the experience of refresh rates that are near each other. Especially at the high end, you stop being limited by framerate input properties and become bottlenecked by screen characteristics (which is why OLEDs beat traditional IPS, but can be beaten by high-refresh-rate IPS/TN with BFI).
Regardless, the point is less about the tech and more about the idea that AMD doesn’t innovate. It does, but it takes longer for people to see it, because they either choose not to use a specific feature or are completely unaware of it, either because they don’t use AMD or because they have a fixed channel for their news.
Let’s not forget that about a decade ago, AMD’s Mantle was what brought Vulkan/DX12-style low-level performance to PC.
Because AMD’s GPU division is a much smaller division inside an overall larger company. They physically can’t push out as many features because of that. And when they do make a drastic change to their hardware, it’s rarely appreciated until it’s considered old news. Take, for example, Maxwell and Pascal. You didn’t see a performance loss at the start, because games were designed for the hardware of the time, in particular whatever was most popular.
Maxwell and Pascal had a notable trait that allowed their lower power consumption: the lack of a hardware scheduler, as Nvidia moved scheduling into the driver. This gave Nvidia more manual control of the GPU pipeline, letting their GPUs handle smaller pipelines better, whereas AMD kept a hardware scheduler with multiple pipelines that applications needed to use properly to maximize performance. It let Maxwell/Pascal cards have better performance... until it didn’t, as devs started to thread games better, and what used to be a good change for power consumption evolved into a CPU overhead problem (something Nvidia still has to this day relative to AMD). AMD’s innovations tend to be more on the hardware side of things, which is pretty hard to market.
It was like AMD’s marketing for Smart Access Memory (again, a feature AMD got to first, and one that to this day works slightly better on AMD systems than on others). It was hard to market because there isn’t much of a wow factor to it, but it is an innovation.
Which then raises the question of price/performance. It’s not that saying DLSS is better than FSR is wrong, but when you factor in price, some price tiers start to get funny, especially at the low end.
For the LONGEST time, the RX 6600, which was about 15% faster than the 3050 out of the box and significantly cheaper, was still outsold by the 3050. Using DLSS to reach the performance another GPU delivers natively (which is objectively better: no artifacts, no added latency) is where the “never buy a GPU without DLSS” argument becomes weak, because what you could get at the same or a similar price might be significantly better.
In terms of modern GPUs, the 4060 Ti is the one card pretty much everyone should avoid (unless you’re a business in China that needs GPUs for AI because of the U.S. government limiting chip sales).
Sort of the same idea with RT performance. Some people act like AMD can’t do RT at all. Usually their performance is a generation behind, so in matchups like the 7900 XTX vs. the 4080, value could swing toward the 4080; but in cases like the 7900 XT, which was at one point selling for $700, its value, RT included, was significantly better than the 4070 Ti as an overall package.
Which is what I’m saying, with the condition, of course, that the GPUs are priced close enough (e.g., 4060 vs. 7600). But when there’s a deficiency in a card’s spec (e.g., 8GB GPUs) or a large discrepancy in price, it usually favors AMD.
It’s why the 3050 was a terribly priced GPU for the longest time, and currently the 4060 Ti is the butt of the joke; nobody should pick those over the AMD cards in that price range, due to both performance and hardware deficiency (VRAM, in the case of the cheaper 4060 Ti).
In the case of the 4060 Ti 8GB, turning on RT pushes it past the 8GB VRAM threshold, killing performance, so hardware deficiency does matter in some cases.
yeah if you’re severely gpu bottlenecked the difference is IMMEDIATELY OBVIOUS, especially in menus with custom cursors. (mouse smoothness while navigating menus is night and day difference), in-game it’s barely noticeable until you start dropping to ~30fps, then again: a huge difference.
I’m not sure; I’ve been trying to find the answer. But AMD has stated FSR3 will continue to be open source, and prior versions have supported Vulkan on the developer end. It sounds like this is a solution for using it in games that didn’t necessarily integrate it, though? So it might be separate. Unclear.