doesn’t have to be an ethical nightmare. Public domain datasets on local hardware using renewable electricity. Who’s mad now? The artist you already can’t afford to pay because you have no fucking money anyway?
Training an AI is orthogonal to copyright since the process of training doesn’t involve distribution.
You can train an AI with whatever TF you want without anyone’s consent. That’s perfectly legal fair use. It’s no different than if you copy a song from your PC to your phone.
Copyright really only comes into play when someone uses an AI to distribute a derivative of someone’s copyrighted work. Even then, it’s really the end user that is even capable of doing such a thing by uploading the output of the AI somewhere.
The only sane and ethical solution going forward is to force all LLMs to be open-sourced.
Jesus fucking christ. There are SO GODDAMN MANY open source LLMs, even from fucking scumbags like facebook. I get that there’s subtleties to the argument on the ProAI vs AntiAI side, but you guys just screech and scream.
Then you should probably know that image gen existed long before MLLMs and was already a menace to artists back then.
And that an MLLM is generally a layered combo of lots of preexisting tools, with the LLM used as a medium for attaching OCR inputs and passing more accurate instructions to the image-gen part.
Beyond the copyright issues and energy issues, AI does some serious damage to your ability to do actual hard research. And I'm not just talking about "AI brain."
Let's say you're looking to solve a programming problem. If you use a search engine and look up the question or a string of keywords, what do you usually do? You look through each link that comes up and judge books by their covers (to an extent). "Do these look like reputable sites? Have I heard of any of them before?" You scroll through, click a bunch of them, and read through them. Now you evaluate their contents. "Have I already tried this info? Oh, this answer is from 15 years ago, it might be outdated." Then you pare down your links to a smaller number and try the solution each one provides, one at a time.
Now let's say you use an AI to do the same thing. You pray to the Oracle, and the Oracle responds with a single answer. It's a total soup of its training data. You can't tell where specifically it got any of this info. You just have to trust it on faith. You try it, maybe it works, maybe it doesn't. If it doesn't, you have to write a new prayer and try again.
Even running a local model means you can't discern the source material from the output. This isn't Garbage In, Garbage Out, but Stew In, Soup Out. You can feed an AI a corpus of perfectly useful information, but it will churn everything into a single liquidy mass at the end. You can't be critical about the output, because there's nothing to critique but a homogeneous answer. And because the process is destructive, you can't un-soup the output. You've robbed yourself of the ability to learn from the input, and put all your faith into the Oracle.
I’m pretty sure that generating placeholder art isn’t going to ruin my ability to research
AIs need to be used TAKING THEIR FLAWS INTO ACCOUNT and for very specific things.
I’m just going to be upfront: AI haters don’t know how this shit actually works, except that by existing, LLMs drain oceans and create more global warming than the entire petrol industry, and AI bros are filling their codebases with junk code that’s going to explode in their faces anywhere between 6 months and 3 years from now.
There is a sane take: use AIs sparingly, taking their flaws into consideration, for placeholder work, or once you obtain a training base built on content you are allowed to use. Run it locally, and use renewable sources for electricity.
as someone who has studied ml since around 2015, i’m still not convinced. i run local models, i train on CC data, i triple-check everything, and it’s just not that useful. it’s fun, but not productive.
Is that a problem with the existence of LLMs as a technology, or with shitty corporations working with corrupt governments to starve local people of resources to turn a quick buck?
If you are allowing a data center to be built, you need to make sure you have power etc. to build it without negatively impacting the local people. It’s not the fault of an LLM that they fucked this shit up.
Are you really gonna use the “guns don’t kill people, people kill people” argument to defend LLMs?
Let’s not forget that the first ‘L’ stands for “large”. These things do not exist without massive, power- and resource-hungry data centers. You can’t just say “Blame government mismanagement! Blame corporate greed!” without acknowledging that LLMs cease to exist without those things.
And even with all of those resources behind it, the technology is still only marginally useful at best. LLMs still hallucinate, they still confidently distribute misinformation, they still contribute to mental health crises in vulnerable individuals, and no one really has any idea how to stop those things from happening.
What tangible benefit is there to LLMs that justifies their absurd cost? Honestly?
You misunderstood. I wasn't saying you can't Ctrl+Z after using the output, but that the process of training an AI on a corpus yields a black box. This process can't be reverse engineered to see how it came up with its answers.
It can't tell you how much of one source it used over another. It can't tell you what its priorities are in evaluating data... not without the risk of hallucinating on you when you ask it.
Is VTM Bloodlines (not to be confused with VTM Redemption or the VTM dating sim game) particularly known for being RP heavy? Or, at least, should it be?
Because I played Bloodlines, and I even did a replay a few years back. Yeah, the hub city sounds a lot more “alive” than in the article… in the sense that just about every NPC had an interaction. But we also had maybe like ten NPCs (outside of the people dancing to Kidneythieves. Great band). This kind of feels like any game where the engine/tech is at the point that we can handle actual crowds, but… how many people in a crowd do you expect to have a conversation with you if you walk up to them?
And the rest of the moment-to-moment gameplay really did feel like corridors with a few vents you could walk through and a LOT of people to mow down.
Don’t get me wrong. I really would prefer a VTMB that is what we “remember” VTMB to be and not what it actually was. But… that article very much sounds like just about every Troika game: Some REALLY cool dialogue based quests that have little to no bearing on the game outside of what XP bonuses you get. And then pretty janky ARPG combat once they ran out of money. And in that sense? Get the banner because it is Mission Accomplished.
Also my memories of VTM Redemption are that it was some super deep CRPG spanning centuries and was amazing. But I am pretty sure it was also an ARPG with some dialogue. I should replay that…
Not sure if this is what they’re talking about, but they made an ARG as part of the early VTMB2 promotion stuff. You had to do stuff within a fake dating app called Tender that was made in-universe to help vampires find feeding targets.
I don’t think any CRPG is RP heavy. I don’t think it’s possible for a computer game to roleplay. Well, maybe if they integrate LLMs? But generally, any RP happens entirely in the player’s head.
I think CRPGs that are sometimes seen as RP heavy are just very well written. That’s not the same thing, although I suppose it’s possible for good writing to inspire RP in the player. But that depends as much on the player as it depends on the writing.
I enjoyed the original VTMB a lot, mostly because of its excellent atmosphere.
Man, I remember interviewing to work on this game back in 2017. That was back when Hardsuit Labs was working on this game. How many studios has this passed through at this point?
It was just Hardsuit and The Chinese Room, until Hardsuit was eventually removed from the project. The game did, however, get delayed a handful of times. Let’s just hope that whatever concept they have of the game now is strong enough to ship this time.
Not defending Paradox’s scummy business practices in any way, but by and large, Paradox games’ DLC has come after their games have been out for a while. What’s happening with VtM:B2 is a whole other level of shit.
Paradox DLCs also classically add a ton of content every single time. Sure, Stellaris kind of sucks as a new player because there’s $260 of content, but it’s perfectly playable and even good with only the base version, and then you pick whatever new content you like as you want more of it. Rimworld has the exact same strategy and I don’t see people complain about that. They release a complete game without any obviously missing parts and then keep bolting on cool new extra parts for the next 10 years.
All that to say, yeah this is kind of out of character for Paradox. Which does have me concerned about this.
In their defence (and I hope it holds here), in the case of the first game, each clan was an entire new storyline, and it was awesome. I might buy it if the base game is on special for $30 in 2 years and the DLC is $10 on special.