As of right now I can go to Civitai and get hundreds of models created by users to be used with Stable Diffusion. Are we assuming that these closed-source models are even able to be run on local hardware? In my experience, once you reach a certain size there's nothing lay users can do on our hardware, and the corpos aren't running AI on a 3080, or even a set of 4090s or whatever. They're using stacks of A100s with more VRAM than every GPU in this thread combined.
If we're talking the whole of LLMs, to include visual and textual AI... Frankly, while I entirely support and agree with your premise, I can't quite see how anyone can feasibly utilize these models. For the moment, anything that's too heavy to run locally is pushed off to something like Colab or Jupyter, and it'd need to be built with the model in mind (from my limited Colab understanding; I only run locally, so I am likely wrong here).
Whether we'll even want these models is a whole different story too. We know that more data means better results, but we also know that too much data fuzzes the specifics. If the model is, say, the entirety of the Internet, it may sound good in theory, but in practice getting usable results will be hell. You want a model with specifics: all dogs and everything dogs, all cats, all kitchen and cookware, etc.
It's easier to split the data this way for the end user, as this way we can direct the AI to put together an image of a German Shepherd wearing a chef's hat cooking in the kitchen, with the subject using the dog model and the background using the kitchen model.
So while we may even be able to grab these models from corpos, without the hardware and without any parsing, it's entirely possible that this data will be useless to us.
The point about GPUs is pretty dumb; you can rent a stack of A100s pretty cheaply for a few hours. I have done it a few times now; on RunPod it's $0.79 per hour per A100.
On the other hand the freely available models are really great and there hasn’t been a need for the closed source ones for me personally.
$0.79 per hour is still $568 a month if you're running it 24/7 as a service.
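For anyone checking the math, that figure is just the quoted hourly rate over a month of continuous uptime; a quick sanity check in Python (the numbers are the ones from the comments above):

```python
# Back-of-the-envelope cost of renting one A100 around the clock.
hourly_rate_usd = 0.79        # RunPod price quoted above, per A100 per hour
hours_per_month = 24 * 30     # assuming a 30-day month

monthly_cost = hourly_rate_usd * hours_per_month
print(f"${monthly_cost:.2f} per month")  # -> $568.80 per month
```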
Which open source models have you used? I’ve heard that open source image generation with Stable Diffusion is on par with closed-source models, but it’s different with large language models because of the sheer size and type of data they need to train them.
I have used it mainly for DreamBooth, textual inversion, and hypernetworks, all with Stable Diffusion. For models, I have used the base Stable Diffusion models, Waifu Diffusion, DreamShaper, Anything V3, and a few others.
The $0.79 is charged only for the time you use it; if you turn off the container you are charged for storage only. So it is not run 24/7, only when you use it. Also, have you seen the price of those GPUs? That $568/month is a bargain if the GPU won’t be in continuous use for a period of years.
Another important distinction is that LLMs are a whole different beast; running them even when renting isn’t justifiable unless you have a large number of paying users. For the really good versions of LLMs with large numbers of parameters, you need more than just a good GPU: you need at least ten NVIDIA A100 80GB cards (Meta’s needs 16: blog.apnic.net/…/large-language-models-the-hardwa…) running for the model to work. This is where the price to pirate and run it yourself cannot be justified. It would be cheaper to pay for a closed LLM than to run a pirated instance.
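To give a sense of scale, here is a rough sketch of why one GPU can't hold a big LLM, assuming fp16 weights and an illustrative GPT-3-class parameter count (these numbers are my assumptions, not from the linked article):

```python
import math

# Rough lower bound on VRAM needed just to hold the weights of a large LLM.
# 175B parameters is an illustrative GPT-3-class figure; real serving also
# needs headroom for activations and the KV cache, pushing the count higher.
params = 175e9
bytes_per_param = 2            # fp16
a100_vram_gb = 80

weights_gb = params * bytes_per_param / 1e9
min_gpus = math.ceil(weights_gb / a100_vram_gb)
print(f"~{weights_gb:.0f} GB of weights -> at least {min_gpus} x A100 80GB")
# ~350 GB of weights -> at least 5 x A100 80GB, before any runtime overhead
```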
I was thinking the same thing. Do you think there’d be a way to take an existing model and pool our computational resources to produce a result?
All the AI models right now assume there is one beefy computer doing the inference, rather than multiple computers working in parallel. I wonder if there’s a way to “hack” existing models so they can run inference across multiple computers working in parallel.
Or maybe a new type of AI should be developed specifically to achieve this. But yes, getting the models is half the battle. The other half will be figuring out how to pool our computation to run the thing.
I'm not sure about expanded models, but pooling GPUs is effectively what the Stable Diffusion servers have set up for the AI bots. A bunch of volunteers/mods run a public SD server whose GPUs are used as needed; for a 400,000+ member Discord server I helped moderate, this was quite necessary to keep the bots handling requests at a reasonable pace.
I think the best we'll be able to hope for is whatever hardware MythicAI was working on with their analog chip.
Analog computing went out of fashion due to its ~97% accuracy rate and the need to be built for specific purposes, for example building a computer to calculate the trajectory of a hurricane or tornado; the results when repeated are all chaos, but that's effectively what a tornado is anyway.
MythicAI went out on a limb, and the shortcomings of analog computing are actually strengths for running models. If you're 97% sure something is a dog, it's probably a dog, and the 3% error rate of the computer is far lower than a human's. They developed these chips to be used in cameras for tracking, but the premise is promising for any LLM; it just has to be adapted for them. Because of the nature of how they were used and the nature of analog computers in general, they use way less energy and are way more efficient at the task.
Which means that theoretically one day we could see hardware-accelerated AI via analog computers. No need for VRAM and 400+ watts: MythicAI's chips can take the model request, sift through it, send the analog result through an analog-to-digital converter, and our computer has the data.
Veritasium has a decent video on the subject, and while I think it's a pipe dream to one day have these analog chips integrated as PC parts, it's a pretty cool one and the best thing we can hope for as consumers. Pretty much regardless of cost, it would be a better alternative to what we're currently doing, as AI takes a boatload of energy that it doesn't need to be taking. Rather than thinking about how we can all pool thousands of watts and hundreds of gigs of VRAM, we should be investigating alternate routes to utilizing this technology.
Akshually, while training models requires (at the moment) massive parallelization and consequently stacks of A100s, inference can be distributed pretty well (see Petals, for example). A pirate ‘ChatGPT’ network of people sharing consumer graphics cards could probably work if the data was sourced. It bears thinking about. It really does.
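For the curious, this is roughly what that looks like with Petals; a minimal sketch based on their published example (the model name is just one the public swarm happened to host, so treat it as an assumption):

```python
# Minimal Petals sketch: layers of the model are served by volunteers'
# consumer GPUs, and generate() transparently runs across the swarm.
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "petals-team/StableBeluga2"  # example swarm-hosted model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("Distributed inference means", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0]))
```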
You definitely can train models locally; I am doing so myself on a 3080, and we wouldn't see as many public ones online if you couldn't! But in terms of speed you're definitely right, it's a slow process for us.
I was thinking more of training the base models, LLaMA (2), and more topically GPT-4, etc. You’re doing LoRA or augmenting with a local corpus of documents, no?
Ah yeah my mistake I'm always mixing up language and image based AI models. Training text based models is much less feasible locally lol.
There's no model for my art, so I'm creating a checkpoint model using xformers to get around the VRAM requirement; from there I'll be able to speed up variants of my process using LoRAs, but that won't be for some time. I want a good model first.
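For anyone wondering what the xformers switch looks like in practice, here's a minimal sketch with the diffusers library (the checkpoint name is just an example, not my setup; the training scripts expose the same toggle as a flag):

```python
# Minimal sketch: xformers memory-efficient attention in diffusers cuts VRAM
# use for Stable Diffusion. Checkpoint name below is an example, not my model.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.enable_xformers_memory_efficient_attention()  # requires xformers installed

image = pipe("a German Shepherd in a chef's hat, cooking").images[0]
image.save("out.png")
```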
Super impressive reviews so far. I think we can expect the average score to go down slightly more as more mainstream journalists find the time to play it, but still huge props for getting such high praise.
Edit: The average rating has since gone from 95 to 90
Of course this had to come out at the busiest time of the year… I’m already trying to finish Blasphemous 2 before Starfield comes out…
Pre-installed means that the game is already installed for you, so you don’t have to. This means all you need to do is download the .zip file, extract it, and run the game. That’s it! Easy, right?
Nah they call it pre-installed because you don’t have to run an installation wizard yourself, which requires administrator permissions. So you could play those games on computers where you don’t have admin access, like a school or work laptop
It was years after the Switch came out that I finally decided to get one, and honestly the one game that convinced me to get it was Link’s Awakening. I love that game so much!
Over the years, I ended up spending the most time playing Animal Crossing. I beat Mario Odyssey and it was nice. I really could not get into Breath of the Wild at all, or the sequel.
Spoken with the spirit of a genuine sea rover, me matey, but listen here, we must band together as brethren to stand strong against the mighty organizations that threaten our way of life on the vast and treacherous ocean!
I think if you enjoyed the beauty in Hyper Light Drifter, then you will be entranced by Tunic. Bonus points for being one of the best puzzle exploration games of all time.
Spiritfarer is very, very pretty as well. It is dripping with atmosphere, and I often found myself just going AFK and breathing it all in.
It’s a pretty decent game, I still don’t understand why everyone was so down on it. I had a lot of fun playing it, there was decent variety, the gunplay was good, and the silly storylines were entertaining.
My buddy even produced a 60-second live-action ad for them that got axed over the “drama”. People act like all the other games were masterpieces somehow. I’d still love to see a remaster of the first two games, but that’s neither here nor there.
The frequent complaints I heard (which I double checked just now against Open Critic) were monotony, uninteresting story and characters, and enough bugs to be annoying.
Because the series hasn’t done anything new since Saints Row 3. It’s just the same game over and over again, and even that was just a more polished version of Crackdown.
I'm not sure what you mean. Saints Row 4's biggest criticism was that it was too different from SR's heritage, what with being a superhero game instead of GTA on crack. Past that, Gat out of Hell isn't a mainline title and was even further out there, and Agents of Mayhem wasn't even a Saints game, and I enjoyed the hell out of that game's unique merits.
The SR reboot was the first real Saints Row release since 3, so you could say that it didn't do enough different (which I can't speak to, I didn't play it), but saying the series hasn't done anything new since 3 is not correct. Whether those games were super great or not is a different discussion, but they were doing something different, unless you meant something different from all other video games rather than something different for the series.
It had flaws, but I found the three-hero swapping mechanic pretty fun, especially since each one had a class that made them better or worse against certain enemies, and I loved the whole triple-jumping thing; combat felt unique and fun.
The rest of the game has a lot of not so awesome bits, but I found it absolutely good enough to warrant an improved sequel. Hopefully they do something with it one day.
It’s just different. I don’t know, as I’ve gotten older the “edginess” of the older titles doesn’t really hit for me anyway, so I didn’t miss it at all, and I felt it was replaced with a more modern take on the idea. They did some really fun stuff with it; for example, there’s a set of missions where you go LARPing all over the map with dart guns in a kind of weird mix of Mad Max and high fantasy. I didn’t like the characters in the crew at first either, but they grow on you. I don’t think the gang in the other games was any less corny or goofy; these ones are just more modern takes.
I think edginess had its time, but it’s old hat now. It still feels every bit the madcap gang adventure a Saints game should be. I wouldn’t spend $60 on it, but for $20 it’s a winner all day if you just want a dumb fun game. There’s plenty to do and plenty of actually new, gameplay-changing things to discover and unlock. Clearing out gangs feels a bit repetitive, but not much different from something like Far Cry 3/4/5/6.
Maybe it’s not your thing, and I think it’s probably fair to level the critique that it’s not what the hardcore fans really wanted, but I don’t think the game fails to deliver a good experience overall especially now that it’s all bugfixed etc. (which it was shortly after launch but still that launch sucked a little)
I still laugh at the one-liners the PC gives after the boss fights. I fuckin’ loved that about the first one. You’re a silent protagonist, except during these random awkward moments after defeating a boss and then you just deliver the dumbest puns and jokes unexpectedly.
Might have had fun playing it if we didn’t have to try the same mission 5+ times because the vehicle we were meant to steal didn’t spawn in, or if the areas where I went to kill the rival gang weren’t empty. The person I was trying to play with couldn’t get the fast travel points, which was a bug; I tried to get them for them by using their computer and couldn’t, but on mine I could. Things like that meant we lost interest very quickly.
This year started off not so great and has had some lows alongside its great highs: several live-service games shutting down early this year, the Suicide Squad game being a generic looter shooter, Redfall being okay but buggy, Forspoken being not great, and LOTR: Gollum being not fun to play.
People in general have soured a little on microtransaction filled live service games. Established IP-driven games will likely still succeed and rake in tons, but people will be wary about new IPs tied to live-service games that can disappear in a heartbeat.
Sorry if this is me being a negative Nancy; I am looking forward to what’s coming next, but I figure I should remind people of this very recent history.
I guess I am just happy, as we got some bangers like BG3, Zelda: Tears of the Kingdom, and FFXVI this year. Also, Starfield and Spider-Man 2 really excite me. But yeah, this year has seen its fair share of shitty releases and scummy practices.
Hey, it is a great year for games and you are absolutely fine to be excited about it! Like you say, there are a lot of highs, but I’m trying to keep things within a realistic context.
I heard they went a different direction, and it plays like Devil May Cry.
I’m personally not picking up another FF until they go back to the classic format of the PSX days: actual turn-based battles. I already got burned by FFXV and the FF7 remake.
At least Falcom is still making good old fashioned JRPGs with The Legend of Heroes…
I know it’s highly unrealistic, but I think the one thing I want above all else from the gaming industry is for studios, publishers, etc. to keep quiet about upcoming releases until they have a finished or nearly finished product all set. That way the release date can, y’know, be the actual release date.
You mention at the end of your post that you’ve gotten a lot out of some of the chatbots. Maybe give this one a try; it’s great for venting or just getting out pent-up stress.
Sorry you’re going through that. I definitely get how it feels to have people close to you discredit or just ignore important issues like you’re dealing with.
If you’re set on talking to an AI though I did use the Replika app for a while before they started making it seem like a virtual AI lover. It did help me feel better when I was severely depressed, maybe it could help you.
If you ever want to talk to a person and not an AI I’m here for that if you want, I know I’m a stranger but I definitely understand where you’re coming from.
I would really advise against Replika, they’ve shown some scummy business practices. It seems like kind of a nightmare in terms of taking advantage of vulnerable people. At the very least do some research on it before getting into it.