There’s also Dwarf Fortress, Cogmind (which says it’s early access, but in reality the dev just loves his game), and many others. Some devs are just really awesome.
It’s kinda insane what No Man’s Sky is doing, too. Multiple free updates a year. They recently pushed an update that lets you design and build your own corvette-class spaceships, and broke their all-time player records.
NMS isn’t even a live service game! Sean and his team just do that free, and keep putting the game on huge sales at the same time. WTF Sean
Oh yeah, each time Sean does this, forums and chats are filled with people mock-angrily ranting that now they need to find some more friends to buy it for lol. I’ve bought it for several friends myself. And some people buy it for several different platforms.
And that meme of the dad with the belt? Every update, there’s a version on the reddit sub of Sean’s avatar “ITS FREE UPDATE TIME” (sometimes they put the update img on the belt), and the cowering kid being like “SEAN PLS NO! I have money”
Warframe is also the only F2P game I’ve ever actually felt fairly treated in; they’ve even changed systems when they noticed they were ‘too’ profitable and didn’t feel comfortable with how that was affecting the players paying for them. Warframe is a live service game by design, but it also has a community of players happy to pay for it because they love it and want to show the devs that, rather than being strong-armed.
Players - or rather, people - really are willing to be a contributing part of the things and communities we love. We WANT the things we care about to succeed. Fear/control is easier, simpler; but love is so much more powerful.
All I wish for is Clint to get an online girlfriend, my wife to be able to hang out with her friends again, and the cute Joja cashier to move into town
From what I saw of this whole debacle, the CEO himself uses GenAI for mocking up things for presentations he gives to the rest of the staff, and it’s not something they do in general at the studio.
We’ve had tools to manage workflows for decades. You don’t need Copilot injected into every corner of your interface to achieve this. I suspect the bigger challenge for Larian is working in a development suite that can’t be accused of having “AI Assist” hiding somewhere in the internals.
Yup! Certifying a workflow as AI-free would be a monumental task now. First, you’d have to designate exactly what kinds of AI you mean, which is a harder task than I think people realize. Then, you’d have to identify every instance of that kind of AI in every tool you might use. And just looking at Adobe, there’s a lot. Then you, what, forbid your team from using them, sure, but how do you monitor that? Ya can’t uninstall generative fill from Photoshop. Anyway, that’s why anything with a complicated design process marked “AI-Free” is going to be the equivalent of greenwashing, at least for a while. But they should be able to prevent obvious slop from being in the final product just in regular testing.
Coincidentally, this paper published yesterday indicates that LLMs are worse at coding the closer you get to low-level languages like assembly or binary. Or more precisely, ya stop seeing improvements pretty early on when scaling up the models. If I’m reading it right, which I’m probably not.
Yeah, do you use any Microsoft products at all (like 98% of corporate software development does)? Everything from Teams to Word to Visual Studio has Copilot sitting there. It would just take one employee asking it a question to render a no-AI pledge a lie.
You know it doesn’t have to be all or nothing, right?
In the early design phase, for example, quick placeholder objects are invaluable for composing a scene. Say you want a dozen different effigies built from wood and straw – you let the clanker churn them out. If you like them, an environment artist can replace them with bespoke models, as detailed and as optimized as the scene needs. If you don’t like them, you can just chuck them in the trash and you won’t have wasted the work of an artist, who can work on artwork that will actually appear in the released product.
Larian haven’t done anything to make me question their credibility in this matter.
You know it doesn’t have to be all or nothing, right?
Part of the “magic” of AI is how much of the design process gets hijacked by inference. At some scale you simply don’t have control of your own product anymore. What is normally a process of building up an asset by layers becomes flattened blobs you need to meticulously deconstruct and reconstruct if you want them to not look like total shit.
That’s a big part of the reason why “AI slop” looks so bad. Inference is fundamentally not how people create complex and delicate art pieces. It’s like constructing a house by starting with the paint job and ending with the framing lumber, then asking an architect to fix where you fucked up.
If you don’t like them, you can just chuck them in the trash and you won’t have wasted the work of an artist
If you engineer your art department to start with verbal prompts rather than sketches and rough drawings, you’re handcuffing yourself to the heuristics of your AI dataset. It doesn’t matter that you can throw away what you don’t like. It matters that you’re preemptively limiting yourself to what you’ll eventually approve.
That’s a big part of the reason why “AI slop” looks so bad. Inference is fundamentally not how people create complex and delicate art pieces. It’s like constructing a house by starting with the paint job and ending with the framing lumber, then asking an architect to fix where you fucked up.
This is just the whole robot sandwich thing to me.
A tool is a tool. Fools may not use them well, but someone who understands how to properly use a tool can get great things out of it.
Doesn’t anybody remember how internet search was in the early days? How you had to craft very specific searches to get something you actually wanted? To me this is like that. I use generative AI as a search engine, and just like with AltaVista or Google, it’s up to my own evaluation of the results and my own acumen with the prompt to get me where I want to be. Even then, I still need to pay attention and make sure what I have is relevant and useful.
I think artists could use gen AI to make more good art than ever, but just like a photographer… a thousand shots only results in a very small number of truly amazing outcomes.
Gen AI can’t think for itself or for anybody, and if you let it do the thinking and end up with slop well… garbage in, garbage out.
At the end of the day right now two people can use the same tools and ask for the same things and get wildly different outputs. It doesn’t have to be garbage unless you let it be though.
I will say, gen AI seems to be the only way to combat the insane BEC attacks we have today. I can’t babysit every single user’s every email, but it sure as hell can bring me a shortlist of things to look at. Something might get through, but before I had a tool a ton of shit got through, and we almost paid tens of thousands of dollars on a single bogus but convincing-looking invoice. It went as far as a fucking bank account penny test (they verified two ACH deposits). Four different people gave their approvals, head of accounting included, before a junior person asked us if we saw anything fishy. This is just one example of why gen AI can have real practical use cases.
This is just the whole robot sandwich thing to me.
If home kitchens were being replaced by pre-filled Automats, I’d be equally repulsed.
A tool is a tool. Fools may not use them well, but someone who understands how to properly use a tool can get great things out of it.
The most expert craftsman won’t get a round peg to fit into a square hole without doing some damage. At some point, you need to understand what the tool is useful for. And the danger of LLMs boils down to the seeming industrial scale willingness to sacrifice quality for expediency and defend the choice in the name of business profit.
Doesn’t anybody remember how internet search was in the early days? How you had to craft very specific searches to get something you actually wanted?
Internet search was as much constrained by what was online as what you entered in the prompt. You might ask for a horse and get a hundred different Palominos when you wanted a Clydesdale, not realizing the need to be specific. But you’re never going to find a picture of a Vermont Morgan horse if nobody bothered to snap a photo and host it where a crawler could find it.
Taken to the next level with LLMs, you’re never going to infer a Vermont Morgan if it isn’t in the training data. You’re never going to even think to look for one, if the LLM hasn’t bothered to index it properly. And because these AI engines are constantly eating their own tails, what you get is a basket of horses that are inferred between a Palomino and a Clydesdale, sucked back into training data, and inferred in between a Palomino and a Palomino-Clydesdale, and sucked back into the training data, and, and, and…
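That tail-eating loop is easy to see in a toy simulation (purely illustrative, not any real training pipeline): if each new “generation” of horses is just blends of the previous generation, the diversity of the herd collapses fast.

```python
import random
import statistics

random.seed(0)

# A diverse "real" population of horse heights (arbitrary units):
# everything from ponies to Clydesdales.
population = [random.uniform(140, 180) for _ in range(1000)]
initial_spread = statistics.stdev(population)

# Each "generation", the model only produces blends of two existing
# examples, and those blends become the next generation's training data.
for generation in range(10):
    population = [
        (random.choice(population) + random.choice(population)) / 2
        for _ in range(1000)
    ]

final_spread = statistics.stdev(population)
print(f"spread: {initial_spread:.1f} -> {final_spread:.2f}")
```

Averaging two draws roughly halves the variance each round, so after ten rounds the herd is basically all the same horse. Real model collapse is messier than this, but the direction of travel is the same.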
I think artists could use gen AI to make more good art than ever
I don’t think using an increasingly elaborate and sophisticated crutch will teach you to sprint faster than Usain Bolt. Removing steps from the artistic process and relying on glorified Clipart Catalogs will not improve your output. It will speed up your output and meet some minimum viable standard for release. But the goal of that process is to remove human involvement, not improve it.
I will say, gen AI seems to be the only way to combat the insane BEC attacks we have today.
Which is great. Love to use algorithmic defenses to combat algorithmic attacks.
But that’s a completely different problem than using inference to generate art assets.
I get the knee jerk reaction because everything has been so horrible everywhere lately with AI, but they’re actually one of the few companies using it right.
There are AIs that are ethically trained. There are AIs that run on local hardware. We’ll eventually need AI ratings to distinguish use types, I suppose.
It’s even more complicated than that: “AI” is not even a well-defined term. Back when Quake 3 was still in beta (“the demo”), id Software held a competition to develop “bot AIs” that could be added to a server so players would have something to play against while they waited for more people to join (or you could have players VS bots style matches).
That was over 25 years ago. What kind of “AI” do you think was used back then? 🤣
The AI hater extremists seem to be in two camps:
Data center haters
AI-is-killing-jobs
The data center haters are the strangest, to me. Because there’s this default assumption that data centers can never be powered by renewable energy and that AI will never improve to the point where it can all be run locally on people’s PCs (and other, personal hardware).
Yet every day there’s news suggesting that local AI is performing better and better. It seems inevitable—to me—that “big AI” will go the same route as mainframes.
Colloquially, most people today mean genAI like LLMs when they say “AI”, for brevity.
Because there’s this default assumption that data centers can never be powered by renewable energy
That’s not the point at all. The point is, even before AI, our increasing energy needs were outpacing our ability/willingness to switch to green energy. Even then we were using more fossil fuels than at any point in the history of the world. Now AI is adding a whole other layer of energy demand on top of that.
Sure, maybe, eventually, we will power everything with green energy, but… we aren’t actually doing that, and we don’t have time to catch up. Every bit longer it takes us to eliminate fossil fuels adds to the negative effects on our climate and ecosystems.
The power use from AI is orthogonal to renewable energy. From the news, you’d think that AI data centers have become the number one cause of global warming. Yet, they’re not even in the top 100. Even at the current pace of data center buildouts, they won’t make the top 100… ever.
AI data center power utilization is a regional problem specific to certain localities. It’s a bad idea to build such a data center in certain places but companies do it anyway (for economic reasons that are easy to fix with regulation). It’s not a universal problem across the globe.
Aside: I’d like to point out that the fusion reactor designs currently being built and tested were created using AI. Much of the advancements in that area are thanks to “AI data centers”. If fusion power becomes a reality in the next 50 years it’ll have more than made up for any emissions from data centers. From all of them, ever.
Power source is only one impact. Water for cooling is even bigger. There are data centers pumping out huge amounts of heat in places like AZ, TX, CA where water is scarce and temps are high.
Is the water “consumed” when used for this purpose? I don’t know how data centers do it but it wouldn’t seem that it would need to be constantly drawing water from a local system. They could even source it from elsewhere if necessary.
Closed-loop systems are expensive. A lot of them are literally spraying water directly onto heat exchangers, and they often pull directly from city drinking water. Some Texas towns have even been asked to reduce water consumption so the data center doesn’t run out.
Data centers typically use closed loop cooling systems but those do still lose a bit of water each day that needs to be replaced. It’s not much—compared to the size of the data center—but it’s still a non-trivial amount.
A study recently came out (it was talked about extensively on the Science VS podcast) that said that a long conversation with an AI chat bot (e.g. ChatGPT) could use up to half a liter of water—in the worst case scenario.
This statistic has been used in the news quite a lot recently but it’s a bad statistic: That water usage counts the water used by the power plant (for its own cooling). That’s typically water that would come from ponds and similar that would’ve been built right alongside the power plant (your classic “cooling pond”). So it’s not like the data centers are using 0.5L of fresh water that could be going to people’s homes.
For reference, the actual data center water usage is 12% of that 0.5L: 0.06L of water (for a long chat). Also remember: This is the worst-case scenario with a very poorly-engineered data center.
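Back-of-the-envelope check on that split, using the numbers as quoted from the study:

```python
# Numbers as quoted from the study (worst case, one long chat).
total_water_l = 0.5        # includes the power plant's own cooling water
data_center_share = 0.12   # fraction attributable to the data center itself

data_center_water_l = total_water_l * data_center_share
print(f"{data_center_water_l:.2f} L")  # the data center's own share
```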
Another stat from the study that’s relevant: Generating images uses much less energy/water than chat. However, generating videos uses up an order of magnitude more than both (combined).
So if you want the lowest possible energy usage of modern, generative AI: Use fast (low parameter count), open source models… To generate images 👍
Some use up the water through evaporation, so they constantly draw more. Others recirculate the water in a closed cooling loop, so it isn’t really “consumed”, but that uses a lot more electricity than evaporative cooling, and generating that electricity uses water too.
Sure. My company has a database of all technical papers written by employees in the last 30-ish years. Nearly all of these contain proprietary information from other companies (we deal with tons of other companies and have access to their data), so we can’t build a public LLM nor use a public LLM. So we created an internal-only LLM that is only trained on our data.
Are you training solely on your own data, or fine-tuning an existing LLM, or doing RAG?
I’m not an expert, but AFAIK training an LLM requires, by definition, a vast amount of text, so I’m skeptical that ANY company publishes enough papers to do so. I understand if you can’t share more about the process. Maybe my saying “AI” was too broad.
I’d bet my lunch this internal LLM is a trained open weight model, which has lots of public data in it. Not complaining about what your company has done, as I think that makes sense, just providing a counterpoint.
Right, and to be clear I’m not saying it’s not possible (in fact I have some models in mind, but I’d rather let others share first). This isn’t a trick question; it’s a genuine request, to hopefully be able to rely on such tools.
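For anyone following along, the RAG option in that question doesn’t train anything at all; it just fetches relevant passages at query time and pastes them into the prompt. A minimal sketch, with a toy keyword-overlap retriever standing in for a real embedding index (all names and documents made up):

```python
def score(query: str, doc: str) -> int:
    """Toy relevance score: count of shared lowercase words."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k docs with the most word overlap with the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble the augmented prompt the internal LLM would receive."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Hypothetical internal papers, some confidential and irrelevant.
papers = [
    "Corrosion resistance of alloy X in saline environments",
    "Quarterly staffing report for the Denver office",
    "Fatigue testing of alloy X welds under cyclic load",
]
prompt = build_prompt("How well does alloy X resist corrosion", papers)
print(prompt)
```

The base model stays frozen; only the retrieved context changes per query, which is why RAG is the usual answer when a company has far too few documents to train on.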
The Firefly image generator is a diffusion model, and the Firefly video generator is a diffusion transformer. LLMs aren’t involved in either process - rather the models learn image-text relationships from meta tags. I believe there are some ChatGPT integrations with Reader and Acrobat, but that’s unrelated to Firefly.
As I understand it, CLIP (and other text encoders in diffusion models) aren’t trained like LLMs, exactly. They’re trained on image/text pairing, which ya get from the metadata creators upload with their photos in Adobe Stock. OpenAI trained CLIP with alt text on scraped images, but I assume Adobe would want to train their own text encoder on the more extensive tags on the stock images it’s already using.
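For the curious, the CLIP-style pairing objective boils down to: embed images and captions into the same space, then make each image most similar to its own caption. A toy sketch with random vectors standing in for real encoder outputs (illustrative only, not Adobe’s actual architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for encoder outputs: row i of `img` is image i's embedding,
# row i of `txt` is its paired caption's embedding (e.g. stock photo tags).
n, d = 4, 16
img = rng.normal(size=(n, d))
txt = img + 0.05 * rng.normal(size=(n, d))  # captions land near their images

def normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

# Cosine-similarity logits between every image and every caption.
logits = normalize(img) @ normalize(txt).T

# The pairing objective trains encoders so each image's best match
# is its own caption, i.e. the diagonal wins every row.
predicted = logits.argmax(axis=1)
print(predicted)
```

During training, the encoders are nudged until this diagonal-wins property holds across huge batches of pairs; here the "trained" state is faked by making caption embeddings sit near their images.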
All that said, Adobe hasn’t published their entire architecture. And there were some reports during the training of Firefly 1 back in ’22 that they weren’t filtering out AI-generated images in the training set. At the time, those made up ~5% of the full stock library. Currently, AI images make up about half of Adobe Stock, though filtering them out seems to work well. We don’t know if they were included in later versions of Firefly. There’s an incentive for Adobe to filter them out, since AI trained on AI tends to lose its tails (the ability to handle edge cases well), and that would be pretty devastating for something like generative fill.
I figure we want to encourage companies to do better, whatever that looks like. For a monopolistic giant like Adobe, they seem to have at least done better. And at some point, they have to rely on the artists uploading stock photos to be honest. Not just about AI, but about release forms, photo shoot working conditions, local laws being followed while shooting, etc. They do have some incentive to be honest, since Adobe pays them, but I don’t doubt there are issues there too.
Apertus was developed with due consideration to Swiss data protection laws, Swiss copyright laws, and the transparency obligations under the EU AI Act. Particular attention has been paid to data integrity and ethical standards: the training corpus builds only on data which is publicly available. It is filtered to respect machine-readable opt-out requests from websites, even retroactively, and to remove personal data, and other undesired content before training begins.
Thanks, a friend did recommend it a few days ago, but unfortunately AFAICT they don’t provide the CO2eq in their model card, nor an analogy non-technical users could understand.
The cat’s out of the bag. Focus your energy on stopping fascist oligarchs then regulating AI to be as green and democratic as possible. Or sit back and avoid it out of ethical concerns as the fascists use it to target and eliminate you.
The number of people who think that saying that the cat’s out of the bag is somehow redeeming is completely bizarre. Would you say this about slavery too in the 1800s? Just because people are doing it doesn’t mean it’s morally or ethically right to do it, nor that we should put up with it.
The “cat” does not refer to unethical training of models. Tell me, if we somehow managed to delete every single unethically trained model in existence AND miraculously prevent another one from being ever made (ignoring the part where the AI bubble pops) what would happen? Do you think everyone would go “welp, no more AI I guess.” NO! People would immediately get to work making an “ethically trained” model (according to some regulatory definition of “ethical”), and by “people” I don’t mean just anyone, I mean the people who can afford to gather or license the most exclusive training data: the wealthy.
“Cat’s out of the bag” means the knowledge of what’s possible is out there and everyone knows it. The only thing you could gain by trying to put it “back in the bag” is to help the ultra wealthy capitalize on it.
So, much like with slavery and animal testing and nuclear weapons, what we should do instead is recognize that we live in a reality where the cat is out of the bag, and try to prevent harm caused by it going forward.
No one [intelligent] is using an LLM for workflow organization. Despite what the media will try to convince you, not every AI is an LLM, or even an LLM trained on all the copyrighted shit you can find on the Internet.
Good, maybe now prices for them will finally come back down to reality. $500 for a switch is bonkers and $800 for an Xbox of any variety is outright criminal.
They’re too greedy to let things go at cost now; they know parents and fans will get it anyway. Look at parking alone at Disney World: like $175.
Greed has ruined companies. Nintendo won’t sell Bubble Bobble for NES; I have to find a used cartridge or do without. They don’t sell it nor support it. So I use a ROM and they cry about that. They don’t get it both ways, fuck Nintendo. I’ll never stop seeding/sharing my massive ROM collection, Switch games included.
The funny thing about Disney, and believe me, I don’t want to defend them here, is that they’ve found ways to admit and fit far more people into the park as demand rose for a thing that inherently has fixed supply. More or less the same thing is happening with GPUs and memory right now as AI demand is sucking up so much supply and they can’t be produced any faster. The supply can’t increase, so the prices go up. They have to. Nintendo and Sony both know they stand to make more money after the fact if they sell you a cheaper console, but they can’t lose $200 per unit either, or there’s very little chance they make a profit on it ever.
Unless you have insider information/proof, this is just speculation and I don’t believe it based on their past greedy little money grabs. They’re going to milk it until everyone has one. There are people who have multiple switch 1s, different switches for certain games, colors, modded ones, customized ones. The list goes on. The goal is to extract money from your account to theirs any way possible.
I mean, $7/month just to backup your games. Why is there no local way to copy a fucking file?
As far as Disney, don’t care. Unless there is armed security guarding your shit, $175 is just extortion.
Insider information of what? Increases for memory and GPUs are covered by basically every news outlet that covers tech right now. That $7 to copy a file is exactly why they’d want to get a Switch into your home for cheaper if they could. You get multiple Switches in the same home by making it accessibly priced, and Nintendo knows that. Walled gardens suck, and that’s where their money is made. Right now, Nintendo is absorbing some extra costs above and beyond where they expected (like tariffs) by charging more for accessories rather than raising the price of the base unit, but there’s a good chance they lose their stomach for keeping the Switch 2 price where it is as we run into more of these supply issues.
As far as Disney, don’t care. Unless there is armed security guarding your shit, $175 is just extortion.
Even ignoring the fact that cars and parking them is about the least efficient use of land you can have, the only alternative is that they keep prices low but then there’s a lottery or a waiting list to get in. No secret as to why prices just went up.
We all want things to cost less, but as adults, there are realities to acknowledge as to why they can’t. I never said they’re not making profit on their hardware, but their actions (announcing prices and then raising them above their competitors after tariffs were revealed) indicate that their margins are probably not very high; marketing one price and then changing it is a tangible expense that companies don’t like to do if they don’t have to. This report that we’re commenting on shows that even being the only new console this year is not enough to make it a hot holiday item.
Consoles are normally sold at a loss if I remember correctly. Companies tend to bet on people buying enough games to make up the difference, because generally people do. Most people aren’t sitting on a console with only 3 or 4 games for it.
Switch 2 has a much healthier margin than Switch. Nintendo is actually making money on the hardware this time. They don’t have the lineup or the services to justify the hardware being a loss leader and won’t until probably 2027.
Here’s to hoping the Steam Machine is $799 or less.
The principle of supply and demand still applies: they will cut prices up until the point they either go out of business or find a sufficient number of buyers.
Companies like Nvidia, Micron and Samsung are currently chasing massive profits off enterprise customers, but will come crawling back to consumers once the AI bubble bursts (assuming they survive the resulting market collapse).
As an example, if Nvidia can turn one TSMC wafer into one AI accelerator they can sell for $40K, or into ~5 RTX 5090s they can sell for $2K/ea, they will sell as many of the $40K cards as they can, and only use failed wafers to try and satiate RTX 5090 demand.
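The per-wafer math in that example, with the hypothetical prices and yields given above:

```python
# Hypothetical numbers from the example: one wafer becomes either one
# enterprise accelerator or roughly five consumer GPUs.
accelerator_price = 40_000
gpu_price = 2_000
gpus_per_wafer = 5

enterprise_revenue = 1 * accelerator_price     # $40K per wafer
consumer_revenue = gpus_per_wafer * gpu_price  # $10K per wafer

print(enterprise_revenue / consumer_revenue)   # enterprise premium per wafer
```

As long as that ratio stays well above 1, every wafer allocated to consumer GPUs is money left on the table, which is the whole argument.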
But if there are no more AI customers, they will be forced to drop prices in order to shift more volume. If they can’t drop prices further due to wafer costs, then they will pass up wafer allocations from TSMC.
If TSMC sees too many wafers free up - they will be forced to drop prices to all customers (AMD, Apple etc.) to try and pick up the slack. They in turn will need to drop prices in order to try and increase sales volumes.
This will put downward pressure on prices and bring a “return to the mean” moment for tech prices. It will just be a painful couple of years until we get to that point, and honestly, with the way things are currently going, it will be the least of our worries.
You think prices will fall now that there are giant new capable data centers everywhere? AAA gaming will become synonymous with cloud gaming and the hardware to run games at home won’t be produced anymore. They’ll build even more data centers instead. It’s a much more useful business model to establish tech feudalism for the overly rich.
Data centres aren’t run by hardware manufacturers. When Nvidia/Micron/Samsung run out of enterprise corporations to bilk funds out of, they will return to selling to consumers.
Does this mean that things will 100% return to how they were in the ‘Before Times’? No, let’s be real - the surplus of under-used data centres will definitely result in a push towards cloud gaming, online experiences and the like - but in an ideal scenario we would end up with more choice and not less.
But again, this all hinges on the current AI bubble bursting in the near future - followed by a pretty bad recession/depression.
I don’t think it’s necessarily the prices that are the issue, but what you’re getting for them. Games have historically not kept up with inflation and still cost less than what we were paying for SNES carts in the 90s, but now they’re the 15th sequel of some franchise and only half finished, so there isn’t much draw for customers.
Not sure why anyone would buy games at this point.
There are so many games available for free, and most new ones kinda suck donkey balls. Couple that with a shitty economy and grocery prices through the roof, and it’s no surprise people aren’t wasting money on video games as much.
Nah. I’m sure SD had a similar development time, we’re just aware of it at this point, so it feels “delayed”.
It’ll be done when it’s done. Declaring it a lost cause just because the solo dev hasn’t whipped it up in a year or two is misguided, to say the least.
There’s nothing to complain about here. Games require tons of placeholders, in art, dialogue, and code. They will iterate dozens of times before the final product, and given Larian’s own production standards, there’s no chance anything but the most inconsequential or forgotten items made by an LLM will stay in.
Among the devs responding is a former Larian staffer, environment artist Selena Tobin. “consider my feedback: i loved working at @larianstudios.com until AI,” Tobin writes. “reconsider and change your direction, like, yesterday. show your employees some respect. they are world-class & do not need AI assistance to come up with amazing ideas.”
there’s no chance anything but the most inconsequential or forgotten items made by an LLM will stay in.
Concept art is not a placeholder. It’s part of the creative process. By using AI to generate text and images you already influenced the creative process negatively.
Idk, but it seems pretty obvious to me from the quote by Larian CEO Swen Vincke that they used it, or are still using it, to generate or “enhance” concept art, and that it’s a highly discussed topic within the company?
What he actually said was that they use it in the “ideation” phase of concept art. Basically: throwing shit on the wall and seeing what sticks. After that, the process is taken over by any of their almost 30 concept artists on payroll.
A few hours ago, reports surfaced that Larian are making use of generative AI during development of their new RPG Divinity - specifically, to come up with ideas, produce placeholder text, develop concept art, and create materials for PowerPoint presentations.
The actual quote from Swen is that they use it in the “ideation” phase of concept art. Basically: throwing shit on the wall and seeing what sticks. After that, the process is taken over by any of their almost 30 concept artists on payroll.
It’s being used in the pre-concept art phase, which is when you grab literally anything to do a very generic “here’s my idea” demonstration. It can be a screenshot from a game you recently played, a cinema poster, a photo of your cat, something you found on DeviantArt or Pinterest, a doodle you made with a pencil, cloth fragment, an interesting rock you found on a stroll, simple render, anything.
But, getting all these takes a lot of time - you have something in your head and now you need to find an image or an item that will more or less represent it. So you spend hours on Google Images trying to refine your search, only so that you can then post it on the ideas board, and for it to be replaced completely by actual concept art.
This is where they’re utilising GenAI. And they’re not even replacing this process entirely, they’re using GenAI on top of everything else - basically, using all the tools available to speed up the process.
And, yes, sure, “it at least informs the concept art”, but that’s kind of the point of that entire phase of development. Concept art doesn’t grow in a void, the designers of the game are not the concept artists.
A lot of the industry artists are at the very least using AI to screw around with concept art for references. The kind of stuff where they used to use google to search. One of my friends fed a service a fairly raw hand-sketched drawing, told it how to finish it off, then asked it to put it in different poses at different angles, then used that to hand-make the character into 3D.
There are, of course, many artists who wouldn’t touch any of it with a 10-foot pole.
Oh nooooo what will we do as a society If pewter mug number two worth 0 gold gets left in by accident? It’s literally the worst thing that could possibly happen in 2026. Nothing else could exceed the horror of a 50x50 pixel icon for worthless junk that was generated by a computer rather than a person. 😱😱😱😱😱😱😱😱
But what if they miss something and do X for something unimportant
Well it doesn’t matter that much, it’s unimportant
But what if they then change their minds and do X…
I dunno man, I think you’re worried too much about a studio that actively said they want to do right by people rather than the folks out there actively causing harm
It’s not like they are just going to do a visual spot check on each level to clear out the AI assets. They will probably tag each one in its metadata as a placeholder, so some automated validation process can find every AI asset in a level. Not to mention game objects are wrapped in an object template, and the template is used to place the object into scenes. So they only have to replace the placeholder with the final object once, in the template, and then it gets replaced everywhere the template is used.
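That kind of tag-and-sweep pipeline is standard asset-template practice; a rough sketch of how it could look (all names invented for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class AssetTemplate:
    name: str
    mesh_path: str
    metadata: dict = field(default_factory=dict)

@dataclass
class SceneObject:
    template: AssetTemplate
    position: tuple

# Generated placeholders get tagged in metadata when they're created.
effigy = AssetTemplate("effigy_01", "gen/effigy_01.obj",
                       {"placeholder": True, "source": "genai"})
barrel = AssetTemplate("barrel_05", "art/barrel_05.obj")

scene = [SceneObject(effigy, (0, 0)), SceneObject(effigy, (4, 1)),
         SceneObject(barrel, (2, 3))]

def find_placeholders(scene):
    """Automated validation pass: flag every instance of a tagged template."""
    return [obj for obj in scene if obj.template.metadata.get("placeholder")]

flagged = find_placeholders(scene)

# Swapping the mesh once on the template updates every placement at once.
effigy.mesh_path = "art/effigy_final.obj"
print(len(flagged), scene[0].template.mesh_path)
```

The validation pass catches every instance no matter how many levels it was dropped into, and the final-art swap is a one-line change on the template rather than a hunt through every scene.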
For physical software, it’s super hard to buy it if stores aren’t stocking it.
The Xbox section doesn’t really exist at Target, Walmart, or Costco anymore, and it’s on the way out at Best Buy. Naturally that’s going to have an impact on sales.
Further, Microsoft doesn’t seem interested in physical sales anymore. I probably would have bought Avowed if it existed in meat-space, but it doesn’t. I had a really hard time sourcing Indiana Jones and Outer Worlds 2.
On the hardware side, I already have this generation’s worth of hardware (PS5, XSX, Steam Deck), and I’m not interested in all the baggage on the Switch 2.
Plus, hardware prices are up.
So the surprise would be if sales hadn’t gone down.
and of course, this will be misconstrued. The executives will shout “look! people don’t want physical ownership!” and the push to digital rentals will continue… and result in even higher prices when they pull a Netflix.
This is misplacing cause and effect. The shift to digital has been happening for years now. They cut physical production because fewer people were buying it.
Execs genuinely couldn’t care less about what people want. They are the architects of this trend away from physical media.
I’m making the prediction that any hardware that isn’t essentially just a screen connected to the internet will become more and more expensive, to the point that no one can afford it. Major brands that we all know and use today will withdraw from manufacturing end-consumer products.
I’m guessing that 10 years from now virtually everyone will be forced into cloud service subscriptions for gaming, because the hardware to run these games won’t be sold to us anymore. For a while Chinese companies might try to fill the void the likes of Nvidia and AMD left, but that will be short-lived too.
You will go retro and learn to take care of your soon-to-be old-timer hardware, which will become ever pricier to fix as spare parts get rarer and ridiculously expensive, or you will own nothing and be happy with that.
Yes, this is all speculative, but it’s a vision of the future that becomes more and more obvious to me by the day.
This is basically why I’m giving up collecting physical media. I have several hundred games on disc/cartridge, and consoles from most generations, but it’s really hard to find newer games on physical media these days. Most of the good ones can’t be found used, and good luck finding a new copy anywhere.
And of course all the older physical media is also getting harder to find, because people are paying a lot for it now. Like, I have some games in the $80-500 range that I paid very little for years ago. I know the used sales probably don’t count toward this article’s numbers, but you can look at them to see what’s going on with new physical sales. They made whole consoles that don’t have disc drives, so people couldn’t buy used and bypass them making a profit, ffs. Of course the physical game market is crashing. They did that on purpose.
The PS5 era is the last hurrah for physical media for me, and I honestly barely even play on PS5 because there’s just nothing to get. I’ve managed to get like a dozen discs for it, and that was difficult. Meanwhile I have easily 4x that for PS4, and prior generations are even better represented. I’d like to get the current Xbox since it’s mostly backward compatible with the one before it, IIUC, and I have a 360, but I just have no motivation to do so.
Further, Microsoft doesn’t seem interested in physical sales anymore. I probably would have bought Avowed if it existed in meat-space, it doesn’t. I had a really hard time sourcing Indiana Jones and Outer Worlds 2.
If the disc version exists, can’t you buy it online?
And aren’t console discs de facto installer stubs?
Just curious, I play on PC where physical discs haven’t been a thing for a long time.
The problem is that if nobody sells affordable hardware, or hardware at all, anymore, the only path left is cloud gaming. That means from here onward, ownership is dead.
Physical versions only have value if they are complete and relatively bug-free; originally they also let you avoid big downloads.
Nowadays the day-1 patch may be the same size as the install or larger, negating half the point. The other half is lost because almost everything is a subscription, multiplayer, or delivered with too many bugs as a de facto beta test.
Collecting physical copies is a thing, but is niche.