Following the immense success of the Stop Destroying Videogames initiative
Wait, what immense results have come from it?
In any case, getting a proper hearing is an achievement. Hopefully it actually happens. Right now the headline says it is happening, and the article only shows that it should happen.
With any luck, it’s because this issue is such a slam dunk that it’s got broader support than more divisive initiatives. In reality though, it’s most likely just because YouTuber drama got more eyeballs on it; and if that’s the difference here, the EU really ought to re-examine how they do these initiatives. 1M signatures out of a population of 440M is a tough bar to clear.
Disappointment is one thing. If you ignore and downplay some of the “boycott” and “cancel” comments on this post, you’re just hiding your head in the sand about the lunacy.
GOG is still one of the good ones, if not the best. Don’t delude yourselves with your holier-than-thou, misplaced arrogance.
As if you all were immune to mistakes and the occasional stumble 🙄…
Ah yes, because the wording must match fucking exactly? You clearly don’t have the brainpower to actually scour the comments yourself and instead depend on a text search. Idiot.
This is so disappointing, and I’m sorry the people at GOG got saddled with some AI-hype bro who had enough leverage to get that AI banner posted. In my mind I can hear them, against all the negative posts and comments, go “It’s not just a phase, mom moneybags!” and see GOG double down on this course.
Here's the secret: talk a big game about being pro AI in the interview, then just don't use it on the job. What are they gonna do, grab your hands and put your mouse cursor on the Copilot button?
Copilot business subscriptions have fairly granular usage tracking, so they’d probably just replace you right away with someone who isn’t quite so reserved. Looking at the comments here and in other places, there is certainly no shortage of such people.
Wow, that’s pathetic. You can just smell the desperation to turn a profit; they baked their agita right into the dashboard. And it’s quite the dark pattern too: by showing administrators the adoption rates, it singles out the team with the lowest adoption for harassment.
Welp, there’s still value in fucking with them, I suppose. Send Copilot the occasional “write me an email” request you never intend to send, and you can bump your numbers up.
Honestly, if part of their job is at all about trying to get old shit to run on new operating systems, AI is very useful for that task.
Part of my job is keeping a 30-year-old C++ application compiling and building on newer versions of Linux. LLMs have made this a far easier experience.
I don’t want to say you’re totally wrong, but I am skeptical of the benefit. Sure, maybe it works now, which is cool, but is it making changes that are maintainable? The next time someone does this, is it going to work? If we just constantly have LLMs update code, when does it start breaking, and when it does, is it going to be in a state someone can fix?
I’m not generally making source code changes. It’s the dependencies.
Mainly we’re talking about building very old versions of things like libpng. Getting autoconf, configure, and CMake all to cooperate is a pain in the ass as their versions slowly change.
The business would be content to let it run on Ubuntu 12 until it’s a major problem, so I can’t let the perfect be the enemy of the good.
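To give a flavour of the kind of breakage (a hypothetical sketch, not lifted from my actual tree): a lot of the pain is ancient configure-time feature probes that newer compilers reject outright, so configure silently decides half your features don’t exist and the build falls over much later in a confusing way.

```c
/* Hypothetical example of an old autoconf-style feature probe.
 * Pre-C99 probes leaned on implicit int and implicit function
 * declarations; newer compilers (GCC 14, Clang 16) treat both as
 * hard errors, so the probe fails to compile and configure wrongly
 * reports the feature as missing.
 *
 * Old probe, which no longer compiles:
 *     main() { exit(0); }
 */

/* Modernised probe: the same check, with the declarations spelled out. */
#include <stdlib.h>

int main(void)
{
    exit(0);
}
```

Tracking that kind of thing down by hand across a pile of old dependencies is tedious, and that is exactly where the LLM has been saving me time.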
Fair enough. Probably a good use case for it. I’ve found it’s pretty reliable at creating boilerplate. I just wouldn’t trust it for doing anything important.
I do wish they were a bit friendlier towards Proton/Wine, though. Knowing which games are compatible, and to what degree, before purchasing them would make me happier.
There’s also a very generous 30 day refund policy, so if you’re at all unsure, make sure it’s working in that first month. I was pretty close to refunding The Alters, because that game just barely works via Proton, even with the right workarounds. Hell of a game though.
Sure, but you take the good with the bad. Most games work, and you get to actually own a copy via GOG. Hopefully they do proper Proton integration in the future, and this position they’re hiring for may very well lead to that. There’s also the option to buy games through Heroic, which gives Heroic a cut of GOG sales, so I make sure to always do that, to send GOG a signal about what’s important to me and how they can earn my whole dollar.
Well, I’m not going to share a screencast of a local LLM helping with code completion on a private project. You talk like you’re a proficient developer; you can try that on your own. And where is the fallacy?
We’ve got studies showing that AI makes you feel more productive while actually making you less productive. And all you’re offering is a feeling. Get high on your own supply if you want, but don’t drag good companies down with your evangelism.
Don’t act so stupid, dude. You know what post you’re in, or at least I hope you do.
If you want to claim that AI can magically do something that not even AI companies themselves can prove, then prove it. Ed Zitron has been begging AI evangelists like you to prove it for at least a year now. Otherwise, I call bullshit on your evangelism.
AI is not a monolith; there are a lot of tools out there that you don’t hear about because all the focus is on the large, corporate models that are meant to dehumanize. LLMs like Gemini, Grok, and ChatGPT are awful inventions that should be dismantled, but smaller ML projects found on GitHub shouldn’t be lumped in, as the few that survive the bubble will stick around because they prove to be effective.
Hey I’m against corporate AI too, but when anyone can create a very basic ML program that runs locally with public domain data, eventually something both useful and ethical will emerge. It’s good to be skeptical, but you don’t have to be an AI bro to see that some specific tools might meet or exceed your standards.
I don’t like image or video generators, but the core tech is really useful for frame interpolation, a use case that is not inherently controversial and badly needs improvement.
Sorry to pull an “it’s not X, it’s Y”, but it’s not about forcing the big tool into your workflow; it’s about finding the 1001 little tools that work every time and collecting them. Or waiting for these tools to be consolidated.
If I seem naive, it’s because I believe in reclaiming as much from tainted technology as possible.
Considering that GOG announced their AI usage to the world with an AI-generated image, and that the technology currently cannot be remotely useful without being extremely unethical, I do not share your optimism.
There’s plenty of real technology that can be reclaimed right now, though! From textile machines to lithium ion battery technology, the world is your oyster.
So you agree those models have already been made, and running them no longer requires 50 exawatts of power, right? Not sure why you decided to change the context to training the models instead of running them, like the other guy was claiming.
(As if you genuinely believe those are the ones GOG is using.)
I thought the context was changed to general use of LLM as a tool for programmers, not specifically about GOG? Can’t even double check it now because the mod removed the comment for some reason.
None of the things you brought up as positives are things an LLM does. Most of them existed before modern transformer-based LLMs were even a thing.
LLMs are glorified text-prediction engines, and nothing about their nature makes them excel at formal languages. An LLM doesn’t know any rules. It doesn’t have any internal logic. For example, if the training data consistently exhibits the same flawed piece of code, then an LLM will spit out the same flawed piece of code, because that’s the most likely continuation of its current “train of thought”. You would have to fine-tune the model around all those flaws and then hope no combination of prompts leads the model back into that flawed data.
I’ve used LLMs to generate SQL, which according to you is something they should excel at, and I’ve had to fix literal syntax errors that would prevent the statement from executing. A regular SQL linter would instantly pick up that the SQL is wrong, but an LLM can’t catch those errors, because an LLM does not understand the syntax.
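To illustrate the difference (a minimal hypothetical sketch using SQLite’s C API, not the actual statements I was generating): a plain prepare step rejects malformed SQL deterministically, before anything executes, which is exactly the kind of check an LLM doesn’t perform.

```c
/* Minimal sketch: ask SQLite to compile (prepare) a statement without
 * executing it. A malformed statement is rejected deterministically,
 * every time, which is a guarantee a probabilistic text generator
 * cannot give.  Build with: cc check_sql.c -lsqlite3 */
#include <sqlite3.h>
#include <stdio.h>

int main(void)
{
    sqlite3 *db = NULL;
    sqlite3_stmt *stmt = NULL;
    const char *sql = "SELEC name FROM users WHERE id = 1;"; /* deliberate typo */

    sqlite3_open(":memory:", &db);
    sqlite3_exec(db, "CREATE TABLE users (id INTEGER, name TEXT);",
                 NULL, NULL, NULL);

    if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) != SQLITE_OK) {
        /* prints something like: rejected: near "SELEC": syntax error */
        fprintf(stderr, "rejected: %s\n", sqlite3_errmsg(db));
    }

    sqlite3_finalize(stmt); /* harmless no-op when stmt is NULL */
    sqlite3_close(db);
    return 0;
}
```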
I’ve seen humans generate code with syntax errors, try to run it, then fix it. I’ve seen LLMs do the same thing; they just do it faster than a human does.
But that extra time is then wasted, because humans still have to review the code an LLM generates and fix all the other logical errors it makes; at best, an LLM does exactly what you tell it to do. I’ve worked with a developer who did exactly what the ticket said and nothing more, and it was a pain in the ass, because their code always needed double-checking that their narrow focus on a very specific problem didn’t break the domain as a whole. I don’t think you’re gaining any productivity with LLMs; you’re only shifting the work from writing code to reviewing code, and I’ve yet to meet a developer who enjoys reviewing code more than writing it, which means the code will receive less attention and thus become more prone to bugs.
formatters, style guides, linters, boilerplates, translation, configuration etc.
None of that is “AI”, dumbass. Stop watering down the terminology.
LLMs run from cloud data centers are the thing that everyone is against, and that is what the term “AI” means. No one thinks IntelliSense is AI; no one thinks adding jslint to your CI pipeline is AI.
That’s only if HR knew what they were talking about when crafting the listing. I’m not saying GOG will use AI for good, but we don’t know whether the job will require something like ChatGPT or something in-house that isn’t like GPT.
Both of those things put a lot of people out of work, but our economy adapted, and there was nothing to be gained by shaming the people embracing the technology that was clearly going to take over. I’m not convinced AI tools are that, but if they are, then nothing can stop it, and you’re shaming a bunch of people who have literally no choice.
When you say, “I’m just expressing an opinion, why are you doing the same”, then yes, it is denying them the same right. Don’t play games with me, we’re not the idiots you seem to think we are.
And as soon as your hypocrisy is called out, you resort to name-calling. Belligerence isn’t an argument, even if you fail to think of one in its place.
And you’re incapable of formulating a good argument, so you resort to whatever this is an attempt at, in order to have the last word and escape the embarrassment of knowing you can’t articulate your own thoughts.
Don’t feel bad though. You’re doing your best, and that’s all that matters.
I used an example of two technologies that were destructive and inevitable, and are now both definitely parts of your daily life, to show how silly it is to take a stance against a technology like that. I don’t need to work at GOG for that to be the case. And to reiterate, AI might not be inevitable. If it’s not, this problem takes care of itself economically, and you don’t need to shame anyone.
I believe I did answer your question, though I’d disagree with the idea that I’m “defending” anything. There exists nuance between “pro AI” and “anti AI”.
Those things didn’t destroy communities, pollute the earth, wrest personal computing away from the populace, use up all the drinking water in an area, or provide a near-total, real-time panopticon of everyone, everywhere, at all times, while stealing all the collected works of said society in order to be built, without penalty, at a time when ordinary folks are ordered to pay hundreds of thousands of dollars because they posted a social media video of their kid dancing to a song that was playing on broadcast radio.
But sure, keep boiling in that pot because you don’t need to write all the boilerplate for your fucking Node project or whatever. Fucking frog.
It is the role of government to regulate those problems, but you can’t uninvent a technology. As for my own work, the most I can say is that I almost used AI once; a coworker did it for me before I could even get to our company-approved AI page. That, plus other companies mandating its usage (if it were really so great, it wouldn’t be difficult to convince anyone to use it), is why I’m not confident it is one of those inevitable technologies. But if it is, being a dick to people about it is stupid.
You’re talking about the worst of AI, which I agree should be dismantled. There are many smaller projects that do not do the things you mentioned, and it’s possible to support those while shunning corporate AI.
What’s the opposite of a Luddite? Hey, if you enjoy bending over and letting techbros do with you what they will, don’t expect me to provide any lube. Now run along and ask ChatGPT how you should feel about this response.
Anti AI arguments are always good for a laugh. I enjoy using the best tools for my job. Sometimes the best tool for software development is AI. Sometimes AI does a bad job and other tools work best. What does that make me? A software developer, I suppose.
I’m not arguing that using AI makes me a software developer. I’m saying that as a software developer, I seek to use the best tools for the job, just like any other job.
So artists, wanting to use what you describe as the best tool for the job, should just use AI? How is AI even the best tool? This is an inane and pointless argument, like arguing with a drug addict defending heroin.
If it’s the best tool, then why is there so much pushback from developers? Are they wrong, and for some reason people like you are right? Is it not causing layoffs in that sector?
There’s no pushback from the developers at my company or the developers I interact with. We have been embracing it for the use cases AI is good at. There’s a lot of manual effort that we software developers don’t like doing, which AI automates easily. I believe the mass layoffs companies are doing use AI as a convenient excuse in uncertain economic conditions.
AI Bros are always good for a laugh. They can’t point to any industry successes, pretend massive industry failures like Microsoft don’t count, and generally trust their own feelings over facts.
I don’t need to prove anything, but mostly, your issue seems to be that you think a shitty in-painting image model has anything to do with the usefulness of something like GitHub Copilot.
If you don’t understand something, it’s OK not to have an edgy opinion on it by default.
Which means calling some anti-AI people Luddites makes perfect sense, no? Many of them have worries and fears just as valid as the Luddites did.
Of course, once anti-AI sentiment goes mainstream, the number of idiots who are irrationally anti-AI also increases, and those ones aren’t worth listening to, unlike the Luddite-like ones.
I would love to. They deserve a little lovin’ for all the work they put in preserving games. 10/10, would buy games from them over Steam any and every day.
Actively use and promote AI-assisted development tools to increase team efficiency and code quality
Probably the boss of the person who had to write the job opening demanded they include something about AI, and the person who wrote it decided to turn their sarcasm up to 100. The only way to make it more clear would have been sarcastic casing:
Actively use and promote AI-assisted development tools to InCrEaSe TeAm EfFiCiEnCy AnD cOdE qUaLiTy
I love the false-dichotomy argument one poster presents, where being anti-AI is anti-progress in all its forms, and being PRO-AI is the only way to a post-scarcity society. Those are definitely the only two possibilities, for sure.