I’d only play it if the game weren’t P2W on top of having to pay for it. If the P2W is removed, I’d consider it. Otherwise, don’t bet on me playing it anytime soon.
From the No Pay To Win Coalition’s review on Steam:
“Overall: 2/5*
Not recommended!
Likely to become a P2W, due to it being a GaaS game.
Bad progression system, rigged against newbies.
Other issues:
Cheaters
GaaS
A low-poly Battlefield copy.”
If it isn’t P2W as of right now, it will get there eventually. This also ties into the death of old-school ownership, as you’re paying for a license that can be revoked any time the devs feel like it.
I don’t understand why there is such drama around using AI while developing. Everyone I know does that. Of course, that doesn’t mean I copy-paste the code from ChatGPT. But at the same time it is often useful and faster than Googling on Stack Overflow… Someone even said there that developers using AI will have legal problems in the future… I really don’t think so, because it is the AI provider’s responsibility to train it on data provided freely for that purpose. I’m against stealing data and brainlessly using AI, but at the same time the AI is here and you can’t ignore it.
The drama here isn’t solely over the use of AI. In fact, your last comment about brainlessly using AI is closer to the cause of this drama. The project lead is pushing untested code straight to main, and the fact that it was AI generated was an addendum/insult to injury.
Another point the maintainers raised is the possibility that GenAI code violates the GPL license.
It’s a good concern to have, and one I feel people don’t talk about enough.
A little lesson about technical projects: You will quickly reach 95% completion and have something amazing to show off. Then, 95% of the work is completing that last 5% in order to make the prototype usable.
AI is good at making itself look ready. It is nowhere near ready.
I meant using AI for example for refactoring purposes or giving it some small tasks or questions. Not building whole projects with AI, because it is unmaintainable.
I would like to think some of it is class consciousness. “AI” inherently means fewer jobs. But considering the world we live in, I sincerely doubt the vast majority of people speaking out against it give a shit beyond “I am just demonstrating my value so the leopards won’t eat my face” and so forth.
Most of it is performative bullshit. Influencers who DID get fucked over (or decided this is why they didn’t get the job they wanted) have talked loud and proud about how evil it is, and that is kind of where the conversation ends. “AI” is evil and nobody cares to think what “AI” actually means. I’ll never get tired of giving a depressed smirk when I see the same people who were championing the use of “magic” Adobe tools complaining about “AI” taking graphic design jobs and so forth. Same with the folk who have dozens of blog posts about using tools to generate their docstrings suddenly getting angry that “AI” does that too.
Personally? I very much DO care about the labor side. Not because I don’t think “AI” can do the job of an intern or even most early career staff (spend some time mentoring early career staff… I wish I just had to worry about six toes on a foot or code that would delete our prod tables if I didn’t review it well enough). But because the only way for those dumbass kids to learn is by doing the tasks we would be getting rid of, and that is already leading to a VERY rapid brain drain as it is increasingly hard to find staff who can actually do the job of a Senior role.
Which is the other issue… “AI” can do some stuff VERY well. Other stuff it is horrible at. And even more stuff that it “does well” is dictated by managers and “Prompt Engineers” and not actually the domain experts who can say “Yeah… this is good. THAT is complete dog shit”.
And then there is the IP theft part of things. People… are once again stupid and don’t realize that the folk posting answers on Stack Overflow were just as likely to have read that blog post where you talked about your cool algorithm. Or how much art is literally traced from others with no attribution and becomes part of major marketing campaigns. And while I think a MUCH bigger reckoning needs to happen regarding IP law and attribution… “AI” is just a symptom of the real problem.
I’m so sick of hearing about AI. I’ve been struggling with the concept that “authenticity” seems to be completely irrelevant to a lot of people lately. I’m a developer and have been for 10+ years at this point, and am struggling to understand why I would stay in this field.
When I get down, I start to question if there’s any point in learning anything. I started up a side project with somebody I respected in a niche hobby space and have been using that to learn a new framework, but I’m rapidly feeling like it’s pointless to bother learning when you can badly cobble something together with a chatbot and the managers of the world will ejaculate themselves dry about how good robots are, even when the bloody thing barely functions.
It seems that a lot of people don’t give a shit if something is actually made by humans. Those same people don’t seem to value the hard work it takes to make something. I feel like I’m having a hard time fitting in at the moment.
So far I’ve been impressed with what AI can do with coding. I had it write some scripts for me on one of my previous work tasks and it did the majority of the code writing and even majorly assisted the debug process.
And now I’m using it for another task and it’s already improved significantly since the last one. You can now interrupt it if it gets stuck in some kind of loop, and the required debug phases are fewer. Hell, it’s even reading between the lines of my prompts effectively: it implemented a verbosity feature in a second script just because I had requested one in the first.
With the first task, I was holding its hand as far as data structures and such were concerned. This time, I’m instructing it at a higher level. And while it does help that I can understand the code it generates: last time I said it was good enough to start replacing interns; at this point I think it’s ready to start replacing junior programming positions.
I struggle with similar feelings, although in other disciplines. I’m going to ask you the same questions I ask myself because they help center me when I’m in a similar mental space.
Who do you create for? Do you create for those people that don’t value your work? Do you create for yourself, for your own satisfaction? Do you create for external recognition?
I think we’ll turn this corner as a society, especially as everything becomes further enshittified. Inherent value and authenticity, the process and the work are all things that I do believe we all care about, but we’ve been spoiled with the convenience of everything.
I’m rapidly feeling like it’s pointless to bother learning when you can badly cobble something together with a chatbot and the managers of the world will ejaculate themselves dry about how good robots are, even when the bloody thing barely functions.
Rhetorical question: How much of your decade of development has been in a professional capacity?
That has ALWAYS been true. A barely functioning Proof Of Concept has always been sexy. Someone has an idea, they make a barely functioning example of it working (often depending on stack overflow and asking others for help), show it to Management, and get money. With Management often thinking how they can either rapidly patent something in there or sell it off to a larger company.
Nothing there is new aside from “AI” replacing “ask Stack Overflow”.
And, just to be clear, that was also true in the hobbyist space. Think about how often you saw an article like “someone recreated PT in Unreal Engine!!!” (not to mention PT itself being the kind of project you give a new hire to learn the toolchain but…). Same with all those emulators that “added VR” and so forth. They are cool concepts that tend to not go anywhere or…
Once a POC becomes a Product? That is where knowledge matters. You no longer want the answer someone shat out while waiting for a belle claire video to download. You need to actually define your corner cases, improve performance, and build out a roadmap.
And… that ALSO isn’t about learning new tools and tech. A lot of that comes out of it, but that is where the difference between “computer programmer” and “software engineer” comes into play. Because it becomes an engineering problem where you define and implement testing frameworks and build out the gitlab issues and so forth.
Like, a LOT of dumbfucks try to speedrun their way to management because it is more money. But the reality is that a good Engineer SHOULD become a manager as they “grow up”. Because you need people with technical ability to have a say in building out that roadmap and in allocating resources to different issues. Optimally you still get to code a lot (I am a huge fan of middle management in that regard) but… yeah.
I see what you’re saying, but I’m not talking about proof of concepts. I’m talking about “fully fledged” Frankenstein apps that get cobbled together by cowboys. Documentation written by ChatGPT that is full of hallucinations. Managers love that stuff because the thing they’ve asked for works but nothing outside of that one thing works, which doesn’t matter because they’re not testing it.
I’m not talking about small proof of concepts. I was referring to myself in a professional capacity as a developer; I’ve been a web developer full time since 2015.
Managers love that stuff because the thing they’ve asked for works but nothing outside of that one thing works, which doesn’t matter because they’re not testing it.
Yeah. That is a POC. It is what you use to get funding, have lawyers write up a patent, or shop around the company.
That is not what I’m saying - the situation I’m describing is the situation I’m currently in: I work for a small web agency, we have the agency owner, the project manager, and the development lead as our “management”.
A client asks for something, the agency owner says yes, and then the development lead cobbles something together over the course of a few hours with results from ChatGPT or Claude.
The thing works, but only for that specific request and cannot handle edge cases, and he doesn’t know how it works nor how to extend it, so he cobbles on more ChatGPT or Claude results.
The management team love it, but it’s just mountains of technical debt piling up.
A client asks for something, the agency owner says yes, and then the development lead cobbles something together over the course of a few hours with results from ChatGPT or Claude.
Again… that is a POC.
The thing works, but only for that specific request and cannot handle edge cases, and he doesn’t know how it works nor how to extend it, so he cobbles on more ChatGPT or Claude results.
So… what you are saying is they make something specifically meeting the requirements given to them by the client with no intention of long term support? And that, in the event that you provide long term support, the skillset required drastically changes? Possibly to a more Software Engineering based one?
It’s not a proof of concept, or an MVP - I’m saying it’s what is given to the client as a full, finished solution.
What I’m saying is that the “this was built by AI!” effect is so strong that copy/pasting ChatGPT results together with no forethought or understanding is miserable and brings only technical debt - but that that’s irrelevant to management, because they’re impressed by the robot and don’t want to “fall behind” other agencies.
Jennifer’s Body, American Horror Story, True Blood, Shooter, The Martian, Demolition Man, Blade, First Blood, Commando, Oldboy, WarGames, Terminator 2, Zenon: Girl of the 21st Century, The Devil Wears Prada, The Witch, the new Dracula movie, Gone in 60 Seconds, Underworld, Ghost Dog, Lone Wolf and Cub, Wednesday, The Sandman, maybe some others, if I have time later to think of them.
My answer to you is that I fully expect the ability to do stuff yourself to be seen as extremely valuable in about 2 or 3 years. This whole AI thing is pretty definitely a bubble, and on top of that it also looks to me like the blockchain and metaverse tech-fads, and I strongly believe that when the bubble bursts people that can say “I spent all that time learning to do things myself rather than rely on ai to do my thinking for me” are going to be the only people with careers.
At a quick glance the community cares more about him hiding commits and doing other nasty things instead of the actual AI stuff. I think that’s important to know before discussing the AI angle here.
Looking a bit further into it, a bug report on the GZDoom GitHub page titled “Project management” was opened that gives a little more detail, noting issues with the lead of GZDoom pushing untested code, using an LLM to write code, and hiding “not insignificant changes in commits, which has people worried that you’ll randomly rip out features that they rely on”. In reply, the project lead simply said “Feel free to fork the project under a” (yes, that really is all they said).
A later comment before the bug report was locked points out the specific change in GZDoom that was made with ChatGPT.
Lately it feels like GenAI is gonna push FOSS projects and their maintainers to the breaking point. So much for this great future every AI vendor promised us 🙄
I think it will continue to rise. People are updating their rigs all the time. Whenever they update their rig they’ll have to ask themselves whether they want to continue with Windows on their new rig, or try with something new.
Most will stay on Windows of course, but some don’t. And those who switch to Linux are likely not returning to Windows (for gaming at least).
Yeah, for me personally, I’ve got one or two devices that see irregular use that are Linux now, but my main rig is still Windows and will continue to be so, since I have a number of friends on Xbox that I can get more crossplay with via Game Pass. But since I’m currently boycotting Microsoft, and don’t know how much longer friends will stick with Xbox given their general market decline, and given all the stability issues with Win11 lately due to an increase in AI code usage, and all the everything… it might be a matter of time.
I think it will continue to rise. People are updating their rigs all the time. Whenever they update their rig they’ll have to ask themselves whether they want to continue with Windows on their new rig, or try with something new.
The vast majority of this increase is from people playing on Steam Decks, which run on Linux, not from people switching to Linux on their PCs.
If it continues to rise, this is the reason. The general public is less and less into using a desktop at all as time goes on, much less running, and much less changing to, an extremely niche operating system on one.
EDIT: The previous sentence is actually more of the reason, upon further reflection. The total number of people playing on desktops period is falling, and the vast majority of desktops are Windows, so non-Windows OSes will comparatively gain ‘market share’ as that happens, even if their numbers don’t change at all.
Actually, the raw numbers show that the increase is due to Mint, Ubuntu, and Bazzite. Maybe people are installing Bazzite on their Deck, but likely not the other two.
I want to say that I’ve been helping people get onto Mint and Bazzite. Going to pat myself on the back for contributing what little I can to grow this awesome community
I switched from Windows to Bazzite on my main rig 2 weeks ago. Likely won’t go back to Windows for gaming as I’ve had pretty much no issues with Bazzite.
I did also get a Steam deck recently, so anecdotally, both above answers are right.
The vast majority of this increase is from people playing on Steam Decks
I believe this is incorrect. The Steam survey breaks down GPUs by description, and the Deck’s GPU appears in the results as “AMD Vangogh”, which only accounts for 0.39% of respondents. That implies the vast majority of survey respondents using Linux are actually on PC, not the Deck.
That’s not true. You can see on Steam Hardware Survey what OS people are running, and SteamOS only makes up 27% of Linux users on Steam, so the vast majority are on regular PCs.
Certainly interesting to look at the fastest-growing distros: Ubuntu (the well-known, popular option), Bazzite (the gaming-marketed one), Freedesktop (someone else can answer this for me), and CachyOS (the side-gaming one? Not quite a gaming OS but very good at it)
The vast majority of the increase, is what I said. In other words, I’m saying it wouldn’t be nearly at the 3% mark without those users, and with over a quarter of all Linux users coming from the Steam Deck userbase, that is, in fact, true.
Without the Steam Deck there’d be 27% fewer Linux users. So while that would indeed mean Linux wouldn’t yet be 3% of the total Steam userbase, I think you will find that 27% is not the majority.
GamingOnLinux aggregates this data in a nicer way and as you can see there, the total Linux market share has gone from <1% five years ago to the 3% it is now. If that increase was mainly thanks to the Steam Deck, it would have to make up more like 75% of the Linux userbase rather than only 27%.
Instead, as others have pointed out, SteamOS’s share has actually gone down rather than up, which is a natural consequence of the Steam Deck being relatively old now so fewer are being sold.
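The arithmetic behind these comments can be sanity-checked. A minimal sketch using `awk`, taking the numbers from this thread (3% Linux share now, 27% of that on SteamOS) and approximating the “<1% five years ago” figure as 1%:

```shell
# Linux share of Steam if every SteamOS user vanished:
awk 'BEGIN { printf "%.2f\n", 3.0 * (1 - 0.27) }'          # prints 2.19

# Fraction of today's Linux userbase the Deck would need to be
# if it alone explained all of the growth from ~1% to 3%:
awk 'BEGIN { printf "%.0f\n", 100 * (3.0 - 1.0) / 3.0 }'   # prints 67
```

So removing the Deck would drop Linux from 3% to roughly 2.2% of Steam users, and the Deck would need to be around two-thirds of the Linux userbase (in the same ballpark as the 75% quoted above) to explain the growth on its own.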
Actually, I wish that were true, but the reality is that a lot of online multiplayer games unfortunately still don’t work without issues on Linux.
I wonder if Valve will ever release an official desktop version of SteamOS? I think Linux adoption would really increase fast if there was a gaming focused Linux desktop distribution with the support of an established company. But does Valve want that? A full featured operating system is a lot to maintain and provide support for.
I think what could really drive adoption is if computers with Linux pre-installed were more easily accessible. Just boot the computer, choose which DE you want to install, and then it’s done. It doesn’t need to be SteamOS. Just any good distro will do.
Who else has an incentive to do so other than Valve? Even when you buy a pre-built with Windows today, those things are subsidized by bloatware that’s already installed on the machine.
They don’t need to, just give them 3 screenshots and ask which they want. Show KDE, GNOME, and whatever the distro wants as the third. Maybe include some bullet points below each explaining what they are (pick one from the last two):
KDE - familiar, extensible
GNOME - modern, minimalist
Cinnamon/Budgie/MATE - something in the middle
XFCE/LXQT - super lightweight for older systems
Maybe select one by default that the OEM likes, but showing the option helps nudge them toward the idea that this is a flexible system.
Bazzite offers KDE or GNOME, and in the menu mentions KDE is what is used in SteamOS.
I installed Bazzite on my HTPC recently. It was the worst install process I’ve seen in over ten years of using Linux. I shall enumerate the problems I had:
The image is weirdly large; it’s like 9GB in size. It takes a while to download and a weirdly long time to write to a USB stick.
Once written, you boot the image, and GRUB has the options to Install Bazzite or Test Media And Install Bazzite. By default, Test Media is selected. It always fails this test.
If you use the typical non-live environment image, the scaling is tiny on a 4k monitor, and there’s no way to adjust this.
If you use the live environment image (in beta at time of writing), it might just lock up. I had that happen twice just while clicking through the Anaconda installer.
The Anaconda installer, which I think they inherited from Fedora, was, I think, designed by one of the contrarian idiots who work for GNOME. There’s a DONE button up in the far upper left corner of the screen that sometimes acts as a back button and sometimes acts as a forward button. You have to move the mouse from the top corner of the screen to the center of the screen a lot, for no reason. The top-left corner of the screen is a dumb place to put a DONE button because most languages read top to bottom, left to right; the DONE button is where a START button should go.
There isn’t a simple way to tell it “put / on this drive, put /home on that drive.” There’s an automatic installer which will do god knows what… fail, most likely. There’s a “custom” partition dialog which I couldn’t make heads or tails of, and then there’s a “custom advanced” one that lets you set the size and position of each partition to the byte. Doing it this way apparently REQUIRES you to not only set up a /boot/efi partition, but also a /boot partition separate from /.
If you’re in the habit of putting / (you know, operating system and software) on one drive and /home on another, you have to learn by osmosis that part of Bazzite’s immutability means there is no real /home: there’s a /var/home, with /home symlinked to it.
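For the curious, that layout can be reproduced and poked at in a sandbox. A minimal sketch (the `/tmp` directory tree here is fake; on a real Fedora Atomic/Bazzite system the symlink is `/home -> var/home`):

```shell
# Recreate the immutable-distro home layout inside a throwaway directory.
root=$(mktemp -d)
mkdir -p "$root/var/home"
ln -s var/home "$root/home"      # relative symlink, as on the real system

readlink "$root/home"            # shows the link target: var/home
touch "$root/home/alice"         # writes land in var/home via the symlink
ls "$root/var/home"              # the file shows up here: alice
```

This is why installer partitioning that assumes a plain `/home` directory can trip over these distros: the “real” home lives under `/var`.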
And if it doesn’t randomly lock up, you’ve got Bazzite installed!
Bazzite markets itself as a newbie friendly Linux. They’ve got that configurator on their website that gives you a little Cosmo quiz about what system you have, what desktop you want etc. which is good! That is good user friendly design. But the actual software you get rattles like a Chrysler. How many noobs are going to bounce right off that?
That’s really too bad. I’ve heard great things about Bazzite, and it’s what I recommend when someone wants SteamOS.
That said, that’s a bit different from what I’m talking about. I’m suggesting OEMs ship a pre-installed Linux desktop, and users are presented an option on setup about which DE to use. So all that would change is enabling one and not the others, but they’d always be present. After install, you could switch between them if desired without messing with the package manager.
I personally use openSUSE (leap on server, tumbleweed on desktop, Aeon Desktop on laptop), and their installer is solid, but I haven’t tried it on a 4k monitor (worked fine on 1440p). Unfortunately, I don’t recommend my distro of choice because it’s not popular enough to have a good newb support network, whereas that’s basically Bazzite’s core demographic.
I don’t recommend Arch forks as a rule, unless it has fantastic support from the maintainers (e.g. SteamOS curates updates). It’s going to break eventually, and it’s going to require manual intervention (probably minimal), and users will get mad. Maybe it’ll be fine for 6 months or a year, but it will break eventually.
That’s much less likely with something built on Ubuntu, Debian, Fedora, or OpenSUSE. Those all have solid testing and upgrade rules, unlike Arch, which is basically “works on my machine.” I used Arch for years until I got tired of the random breakage, and now I’m on Tumbleweed which has far less breakage and stays reasonably close to Arch package versions.
My first recommendation is either Linux Mint (I prefer Debian edition) or Fedora, because those have good new user experiences and aren’t super opinionated like Ubuntu.
At least some of the problems I reported about Bazzite are inherited from Fedora. Bazzite didn’t create Anaconda.
Fedora has the problem of being generally fine, but most of the world for the last decade has been targeting Ubuntu as THE Linux distro, so there are a lot of Git repos out there that don’t include instructions for Fedora. Way fewer things are packaged as rpm than as deb. I’ve never seen Linux Mint kernel panic unless I was fucking around with the video drivers; I’ve seen Fedora kernel panic.
The main reason I’m using Fedora right now rather than Mint is that Mint tends to have an older codebase, and we’re at a point in PC technology where things like Wayland offer support for video and graphics stuff that doesn’t work well under X11. Like my 1440p ultrawide 144Hz monitor sitting next to a 1080p 60Hz side monitor: Fedora KDE has it ready to go, Mint Cinnamon does not.
there are a lot of Git repos out there that don’t include instructions for Fedora
For new users, if it doesn’t exist in the repos, you’ve gone too far. Don’t look for RPMs or debs; look for your distro’s package, and failing that, look to add a repo tons of people online recommend for whatever you’re using (e.g. RPMFusion IIRC). The vast majority of what you want will be there.
If it’s something you really can’t live without, ask on the forums for your distro, and wait until you get multiple answers from different people saying the same thing. Give it a few days too.
Installing from source isn’t a bad thing; I do it all the time. But a lot of people will trust some random post on social media and then complain that it doesn’t work or broke their system or something (see LTT’s video where he uninstalled his DE by trying to install Steam). Don’t install from source or random RPMs/debs until you’re comfortable tracking down what dependencies you need and are able to read scripts to make sure nothing funky is going on. Many posts online will be outdated, and with Linux getting more attention, malware is a growing concern.
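On Fedora specifically, the “repos first, then a well-known extra repo” order described above usually looks something like the following. This is a sketch based on the standard RPMFusion setup instructions; the package name is a placeholder, and you should check the current RPMFusion docs before running these:

```shell
# 1. Search the official repos first (package name is hypothetical).
dnf search some-package

# 2. If it's not there, enable RPMFusion, the extra repo most Fedora
#    guides recommend (one release RPM each for the free/nonfree halves).
sudo dnf install \
  "https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm" \
  "https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm"

# 3. Only after that, consider COPR, Flatpak, or building from source.
```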
Mint tends to have an older codebase
Does Mint still not use Wayland?
Having an older codebase is generally good for new users, since the software tends to be more tested and more people will know the workarounds. Newer software will have different issues, so be careful chasing the latest and greatest if you’re not comfortable sifting through logs to figure out what happened.
For new users, if it doesn’t exist in the repos, you’ve gone too far.
I don’t think this holds up under scrutiny. Theoretically sure, installing using your distro’s package manager is the beginner skill, compiling from source is the advanced skill.
The reality is, people transplanting from Windows often own hardware they want to continue to use that requires software that isn’t in a distro’s package manager. For me, this included a DisplayLink docking station, an Epson printer, and a SpaceMouse. For some, it will include gaming keyboards or mice, Stream Decks, who knows what else. A lot of times there are folks making open source software for these things, but they don’t package it. So you end up on GitHub as a beginner looking for the thing to make your thing work.
As you migrate into the ecosystem, you start buying hardware that is well supported by the Linux ecosystem, and that problem starts to fade away.
By rpm vs deb, I wasn’t meaning downloading individual files… though I’ve done that. DisplayLink offered their driver as a .deb. At first, that Epson printer only issued an .rpm, and I had to use Alien to install an .rpm on a Linux Mint computer. With time they offered a .deb, and eventually the printer was just natively supported by CUPS. What I meant is that I find the Debian/Ubuntu repos (the dpkg/APT system that uses .deb files) have more stuff in them than Fedora’s repos (the DNF package manager that uses .rpm files) do.
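For reference, the Alien conversion mentioned above looks roughly like this. A sketch only: the driver filename is made up, and you should prefer a native .deb or repo package whenever one exists, since converted packages can miss dependencies or postinstall scripts:

```shell
# On Debian/Ubuntu/Mint: convert a vendor .rpm into an installable .deb.
sudo apt install alien               # Alien ships in the Ubuntu/Mint repos
sudo alien --to-deb epson-driver.rpm # writes epson-driver_*.deb next to it
sudo dpkg -i epson-driver_*.deb      # install the converted package
```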
Does Mint still not use Wayland?
When I built my current PC, Wayland support in Mint Cinnamon was “We’ve just now added it, it doesn’t work worth a damn but you can try it.” They’re coming along, but they’re behind.
Is an older codebase generally good for new users? The first distro I installed on an x86 PC was Mint Cinnamon 17 “Qiana”, on a then brand-new Dell Inspiron laptop. For about 6 months, the kernel that shipped with the OS didn’t support the laptop’s built-in trackpad; I had to manually update the kernel through Mint Update for the trackpad to work. There’s problems at the bleeding edge, but there’s problems at the trailing edge as well.
I find that the Debian/Ubuntu repos (the dpkg/APT system that uses .deb files) have more stuff in them than Fedora’s repos (the DNF package manager that uses .rpm files) do.
Ah, makes sense. That’s probably because Fedora doesn’t package non-FOSS packages, so you need to use something separate like RPMFusion, and that doesn’t contain everything. There’s usually a repo for what you want, but for something really niche, yeah, Ubuntu will probably have a better chance of having it, followed by Debian.
That said, I really like the way openSUSE does it. Basically, they have OBS, which is kind of like the AUR, but it actually builds packages for you. I think that’s a much better way to handle it than building stuff from source on your local machine, since it allows you to share that package (i.e. dev machine vs other machines you have) and at least track down the dependencies needed since it starts w/ a blank slate. I don’t know if Fedora has something similar, and it’s certainly not a beginner-friendly option (if you’re pulling packages from OBS, you’re probably doing it wrong and will likely run into issues). However, that is the first step to getting something included in the official repos.
But if it’s not in the default repositories, you should definitely talk to someone more familiar w/ the distro to figure out the “right way” to do it. I’ve built .debs and AUR PKGBUILDs, but only after learning from the community the right way to do it to make sure it doesn’t break on an update. New users are unlikely to put in that legwork, hence the recommendation to never use anything outside the default repos w/o asking for help.
There’s problems at the bleeding edge, but there’s problems at the trailing edge as well.
I agree. I guess my point is that if things work w/ an older set of packages, the chance that things will break is incredibly low. Whereas if things work on a bleeding edge distro, there’s a good chance you’ll see some breakage.
For example, openSUSE Tumbleweed is generally a good distro, but there was a week or so where my HDMI port didn’t work, my default sound device changed suddenly and was no longer consistent (sometimes would pick one monitor’s speakers instead of the other, depending on which came online first), and I was stuck on an older kernel for a couple weeks due to some kind of intermittent crashing. This experience was way better than what I had on Arch, and fortunately TW has been uneventful for 2-3 years now (probably because my hardware hasn’t changed).
So for a new user, I recommend finding the oldest distro that supports all the hardware you need. For experienced users, I recommend using a rolling, bleeding-edge distro and reporting bugs upstream as they happen, because the frustration of something breaking randomly is much less than the frustration of multiple things breaking on a release upgrade, and it’s nice to have the latest improvements to performance and whatnot (i.e. I used Wayland on TW way before it landed on any release-based distro, which was awesome since it allowed me to use different refresh rates on each monitor).
For your example, I’d recommend users hop distros until they find one where everything works. If Mint is too old, try Fedora. There’s usually a sweet spot where everything works and you have a reasonably stable experience overall. Even Debian Testing (pinned to the release name, not “testing”) is probably a better fit than Arch or openSUSE Tumbleweed.
Having played with it for a little while now that I’ve got it installed…I think it’s alright for a mostly or entirely gaming machine. I wouldn’t want to use it, or any immutable distro, as my main computer.
I’ve attempted to stay out of the trendy-distro-of-the-month club. Remember Garuda? Remember Peppermint? Remember Endeavour?
I switched to Bazzite as my daily driver and won’t be switching distros or going back to Windows.
I ran into an issue during install with my main drive previously having BitLocker. Had to clear the drive with a live USB installer. Had another issue with a secondary LUKS drive auto-mounting, but was able to address it through the GUI.
Other than that it has been a magical experience. I do full-time work/school on the system.
Yup, I had this exact experience. Installed Bazzite because it was a “gaming OS”. Had trouble just installing any non-gaming apps, or looking up guides to do so. Even gaming wasn’t perfect.
Installed CachyOS, and yes, there are annoyances, but also a nice path to fix them. It’s both a good gaming OS, and a daily driver for casual use.
You forgot the part where the installer fails just right before the end. Every time.
Had this occurring on both my laptop and someone else’s that I was trying to install Bazzite on, which resulted in installing Fedora on their laptop instead (and going back to EndeavourOS on mine), and even Fedora’s new installer errored out too. Thankfully the OS was working though.
I suspect your 6th point was the cause, and even if it wasn’t, I consider it a colossal failure on their part because it is NOT TELEGRAPHED AT ALL. I shouldn’t have to stumble upon random forum posts to learn about it, come on.
I had one fail fairly early, giving me a cryptic message because apparently it couldn’t cope with how I’d set up the partitioning.
I’ve had a Linux Mint install fail because it couldn’t cope with a BIOS setting. The error message gave a plain-English explanation: “it’s probably the XMBT (or whatever acronym) setting in the BIOS, see this page on the Ubuntu wiki for details”, and it gave a hyperlink. Because the installer runs in a live environment, it had a copy of Firefox ready to go, AND it gave a QR code so you could easily open that link on a mobile device. THAT’S how it’s done.
I tried to go with Bazzite on my wife’s old PC. Fuck knows what happened, but I could not get it to recognise that I’d downloaded the image with the Nvidia drivers built in.
Ended up giving up and rolling Kubuntu. I know Kubuntu and like it. And it works beautifully. Back in the world of RDR2 now, and loving it.
Yeah, that is nice. I won’t recommend EndeavourOS or any other Arch installer/derivative for other reasons (IMO, every Arch user should do the official install process once or twice to have a better shot at fixing stuff later), but I do like that UX.
I wish more distros did it. My distro (openSUSE) does something similar, but I also don’t recommend it because the community isn’t all that good for new users IMO.
That tracks since I left Arch about 5 years ago, maybe a little longer, and I used it for at least 5 years.
I used it through the /usr merge, which broke nearly everything, and for a few years of stability afterward. But even when it was super stable, there were still random issues a couple times each year. It wasn’t anything big (I’ve been a Linux user for 15 years or so), but it did require knowing what to do to fix it (usually documented clearly on the Arch homepage). This was especially true for Nvidia updates. After switching to openSUSE Tumbleweed, most of those went away, and even the Nvidia breakage seemed less frequent. If something broke, I could easily snapper rollback and wait for a fix, whereas on Arch I had to fix things because going back wasn’t an option (I guess you could configure rollbacks if you had that foresight).
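For anyone who hasn’t seen it, the rollback flow on Tumbleweed is roughly the following sketch. Snapshot numbers and exact output vary per system, and it assumes snapper is managing the root filesystem (which it does out of the box on TW with btrfs):

```sh
# list the snapshots snapper has taken (pre/post for each zypper transaction)
sudo snapper list

# roll the default subvolume back; "42" is a hypothetical snapshot number
# taken from the list above
sudo snapper rollback 42

# the rollback takes effect on the next boot
sudo reboot
```

You can also boot a read-only snapshot directly from the GRUB menu first to confirm it’s the state you want before committing to the rollback.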
I just took a look, and it looks like manual intervention is still a thing. For example, the June 21 Linux firmware change required manual intervention. There were others over the last year, depending on the packages you use or your configuration.
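For context, an Arch “manual intervention” is usually a couple of pacman commands published in the news post. From memory, the linux-firmware one looked roughly like the sketch below (the package was split, so the old one had to be force-removed first) — always check the actual news post rather than trusting this:

```sh
# force-remove the old monolithic package, ignoring dependency checks
sudo pacman -Rdd linux-firmware

# then update, pulling in the new split packages
sudo pacman -Syu linux-firmware
```

It’s only two commands, but you have to know the news post exists, which is exactly the problem for new users.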
That’s totally fine for Linux vets, but new users will have issues eventually. I don’t even recommend my distro, which solves most of those issues, because new user support isn’t there. The main reason I left was because I wanted to switch to btrfs (for snapshot rollbacks), and Tumbleweed had that OOTB so I gave it a shot.
The main reason I left was because I wanted to switch to btrfs (for snapshot rollbacks), and Tumbleweed had that OOTB so I gave it a shot.
This is precisely why I went with Tumbleweed as well. I wanted a rolling release distro because having initially gotten into Linux via Ubuntu back in 2007, I didn’t really like the “upgrade twice a year to keep up to date with new features” method. It felt really cumbersome back then, as a regular distro upgrade often brought problems with it.
When I looked into other features I wanted, I discovered Snapper and I was all “that’s the one for me!”
I don’t even remember my progression. I do remember what first piqued my interest, though. A guy came from BUIT (Barn-och-ungdoms IT enhet — roughly, the children and youth IT unit), which no longer exists, and he was troubleshooting some IT stuff at my school back in 2003. Being the nosy and tech-interested bratty nerd that I was, I hovered around the guy. He was super nice and had no problem with my prodding questions about his laptop, which was running Red Hat Linux. He explained in simple terms what exactly that meant, and it stuck with me.
Then, years later, when I found out about Ubuntu (at the library, I think) and the fact that they sent out LiveCDs, I was like “Yes please!” and the rest is history. I didn’t use Linux for many years, between having hardware that didn’t play nice with it and just not feeling like it. Then the other year I went back to Linux and have been using it since.
Every so often I boot into Windows to do some texture work in Substance Painter, but I don’t think that’s going to last. I’m very keen on trying Armor Paint, and if I like the workflow there I might as well wipe Windows entirely.
For me, I went to the local community college in high school, and an old guy was in my Java class and gave me a FreeBSD CD. I installed it and played around with it for a year or two, but still used Windows. When I went to uni, I got an Ubuntu CD on campus and installed it on my rental, and later that year the Windows XP install had issues but Ubuntu was fine, so I switched.
Now, if only I could run Linux on my work PC.
I had that at my last job, but my current one uses macOS. At least it’s close enough to Linux on the CLI…
I’m stuck on Windows 11 at work. It’s not a bad laptop, but Windows is insanely slow. Opening the command line isn’t instant. Explorer takes well over a second to open. It’s like treacle.
Three options is too many? If one is already selected, you can just click through without thinking. Windows already does that stupid “setting up your PC” crap, and this would be far faster.
Sure. If you have all three options be properly configured, it shouldn’t matter too much which you pick. The point is to make it apparent that you can change stuff, if you want.
Do you know why Mac is successful? Because they have extremely few options. You basically have 3 laptops to choose from. That’s not 3 software options, that’s basically 3 hardware options.
I don’t think that’s why. I think it’s more the features that work with the iPhone that are selling Apple laptops. If you want to use iMessage or iCloud between your phone and computer, you need both to be from Apple. That, plus the better performance and battery life of the M-series is more the cause of increased market share, not the single desktop offering.
That is exactly why they are successful, wayyyyyyy before iPhones even existed. People don’t have to think about anything. I think I’m going to leave this conversation.
Looking at market share stats, macOS market share is stagnant up until 2010-2015 or so, when it jumps from 6% to 12% or so, and that’s also about when iPhone became dominant. They’re currently around 15-17%, probably because the M1 series is so much better than x86 alternatives, so if you don’t need gaming or anything, it’s a great option! That wasn’t true before the M1.
If it’s all up to the one choice, why didn’t they take off before the 2010s? macOS has been remarkably the same since pretty much forever, unlike Windows, which changes a lot each release.
I think the “friendly” distros like Linux Mint with built-in driver detection/management and pretty broad package repositories (surfaced as an “App Store”) are probably to the point where many normal people could use them, without significantly more technical chops than Windows. Particularly as a gaming rig where you basically just need Firefox, LibreOffice and Steam.
The issue with that is, people have no idea what these “choices” even mean. SteamOS is SteamOS, Windows 11 is Windows 11, macOS is macOS, but Linux is a big list. If pushing adoption is the key purpose, the manufacturer needs to pick one that they believe is reliable and in active development. Just one. All these editions will very likely cause choice paralysis, which leads people to deem it “too complicated”.
It’s become abundantly clear to me over the past few years that Linux is in a place where, to get significant share, it needs a major figurehead. Imagine if all ThinkPads suddenly were only available with Lenovo’s own fork. That kind of thing.
Unfortunately, that’s kinda the opposite of the Linux ethos, and not necessarily likely to make Lenovo much money.
So the best we can really hope for at this point is a company with the brand awareness of Valve pushing SteamOS into the mainstream. People who play games know and generally trust Valve, so people (like my wife) who are on the fence, or who just need their computer to work without needing too much faffing, could likely trust SteamOS in a way they wouldn’t necessarily trust Bazzite or CachyOS.
I’d guess Valve wants whatever makes more games work on Linux so that their Steam Deck works better and is more compatible.
And that means the most important thing is Linux desktop adoption by game developers so they make more native games. So somewhat ironically, I don’t think SteamOS would be as high a priority as other distributions, since it focuses on players instead of developers.
A lot of games received their ports during the Steam Machine era, used outdated technologies like DirectX to OpenGL translation, and never got updated, so it’s not surprising unfortunately.
I can attest that SteamOS does work on my rigs that are AMD GPU/CPU. It actually works great. I haven’t had one single issue. But I don’t do multiplayer games either.
I have 4 computers; only the gaming one is still running Windows. The other 3 were moved to Linux a few years ago when Microsoft started with the forced online-accounts BS, because I couldn’t be bothered dealing with stupid bypasses. Two are running Ubuntu, one is running Fedora. Those are never going back to Windows.