bin.pol.social

mang0bus, to piracy in Is movie-web down for anyone else?

Looks like it’s good here. Try another browser?

chloyster, to gaming in Sea of Stars (2023) review thread

Game looks so good. Really want to try this out!

JJROKCZ, to games in Why do modern strategy games hate the grid?

I don’t miss the grid at all; I hate being confined to grids instead of more fluid, realistic movement. It’s just more immersive to order my troops to move as a real person could, not slide along a rail and stand there in the open like a chess piece.

bighi,

Your comment doesn’t make sense. There’s no relation between a grid and standing out in the open. With free movement, if you order the character to finish their movement in the open, they’re going to be out in the open.

And I also don’t see the relation between grids and “sliding”.

Lazylazycat,

Yeah, I’m pretty sure in XCOM there wasn’t any sliding? It was all very fluid movement, but you could easily see where your troops could move to.

nevemsenki,

Jagged Alliance 3 has a grid, but the movement is more fluid than anything I’ve seen in a while. It’s all about polish and execution.

Cheems, to games in Why do modern strategy games hate the grid?

Oh, I never really thought about it. I kinda like a grid, but I think a grid would severely limit BG3.

ryven, to games in Why do modern strategy games hate the grid?

It’s not the lack of a grid specifically that bothers me in BG3, it’s that there are a lot of scenarios where in tabletop an enemy would be ruled to have cover, but in BG3 the shot is simply obstructed and your character needs to move before they can take it.

Also sometimes the automatic positioning for melee attacks is bad and will tell you that you can’t reach, but if you click to move and then click to attack you actually can.

Also the fact that AoE spells target the ground specifically instead of an arbitrary point in space means that in some areas you get weird situations: the enemies are close enough together to fireball all of them, but you can’t do it from your location because the spot where you need to place the fireball is in a slight depression that you can’t see into from where you are.

Also there is some weirdness about casting AoEs through doorways, where even if you can see someone that doesn’t mean you can fireball them because it’s treating the fireball “projectile” as being wider than I would expect, so that it can only go through at certain angles.

I do think a grid system would be less likely to have these issues, but they could be fixed without it.

canni, to piracy in Need help to make a super simple setup for my mom!

It’s a bit of work to set up, but Plex/Sonarr/Radarr/Jackett/Bazarr/Overseerr/qBittorrent+OpenVPN is the way to go

ag_roberston_author, to gaming in Switch 2 - launch games?

Mario Kart 8 Deluxe Ultimate

indite, to piracy in Easy and safe linux piracy with jc141

Stupid question, but how do I go about updating DwarFS?

FOSSFloss,

If the distro you’re on doesn’t have packages for a new version I would start with github.com/mhx/dwarfs#building-and-installing .

Yesat, to games in Why do modern strategy games hate the grid?

@anakin78z Also, the D&D grid kinda breaks when you put it in an actual 3D world. It works by convention in a TTRPG, but the workarounds it needs just aren’t sensible once you step away from the table. Diagonal movement, spheres, angled lines... all of that gets messier to apply when you’re representing a 3D world.

anakin78z,

Huh, I’m not sure I agree. It’s fairly straightforward to represent any volume as a 3D grid, and depending on how the game system does the math, it’s easy to count cells on any diagonal. I think the controls are a bit messy, but Solasta has a totally usable 3D grid for things like flying, and also shows how area effects like spheres or such affect surfaces on different levels.
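To illustrate: under the common “a diagonal step costs the same as a straight one” convention, 3D grid distance really is a one-liner. A sketch in Python, not Solasta’s actual code:

    # 3D grid distance when any diagonal step costs 1 (Chebyshev distance)
    def grid_distance(a: tuple[int, int, int], b: tuple[int, int, int]) -> int:
        return max(abs(a[i] - b[i]) for i in range(3))

    print(grid_distance((0, 0, 0), (3, 2, 5)))  # 5 cells, counting diagonals as 1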

gaael,

I’ve been playing Solasta for the past few weeks; great game, with a grid system that makes really good use of 3D and height :)

FullFridge, to games in Sea of Stars Review Thread | (90/100 OpenCritic)

Anyone know how long this game takes on average to beat? I have about two weeks of free time, but then I won’t have much time for gaming for around two months, so I’m wondering whether I can fit this in.

CosmicSploogeDrizzle,

With only one report so far, the completionist time on howlongtobeat.com puts it around 26.5 hours.

howlongtobeat.com/game/76454

FullFridge,

Thanks for sharing the link!

Hmmm, maybe it’s possible but I think I’d rather not risk taking a two month break from the game. Guess I’ll need to wait till I have more free time

CoderKat,

You know, I realize I dunno who uploads their details to these kinds of sites, but I’m glad people do. I consult HLTB a lot, and it’s always been really useful for judging the time investment a game will take, how worthwhile DLC will be, and what kind of game something is. Longer is often better in my book, but not always: games like AC Valhalla actually go on too long, since I can’t help but play mostly completionist.

Jagget, to piracy in What are the ways to play minecraft offline (single player)?

TLauncher is also an option

drifty,

TLauncher has been known to have malware and crypto miners. Please do not use TLauncher

gregorjan,

Is there an alternative for simple modpack access? My nephew is using TLauncher for easy mod switching.

ollie,

It’s CLI-only, but I go hard for github.com/gorilla-devs/ferium

sounddrill,

HMCL on GitHub

upstream, to gaming in Switch 2 - launch games?

We don’t even know if it’s coming, but we’re already speculating about what titles it will launch with? Oo

quicken,

Slow news day. Plus, we know from the Switch launch that they don’t feel compelled to have a lot of games ready. It was Breath of the Wild, your typical launch party game and … nothing memorable.

Altomes, to gaming in Sea of Stars (2023) review thread

This is quite literally the first title I’ve ever bought at launch in my life and I’m thrilled to do it

Veraxus, to games in Sea of Stars Review Thread | (90/100 OpenCritic)

Huh. Never heard of this before.

But if I'm being honest, crappy indie pixel art games are a dime a dozen, so it takes a lot to overcome my general resistance to games like this. It can happen, though... Dave the Diver, Blasphemous... it's just a much steeper uphill battle for pixel art games than other styles.

wolfshadowheart, to piracy in Visions of a larger plunder

Okay, I'm with you but...

how are we using these closed source models?

As of right now I can go to civitai and get hundreds of models created by users to be used with Stable Diffusion. Are we assuming that these closed-source models can even be run on local hardware? In my experience, once you reach a certain size there’s nothing lay users can do on our hardware, and the corpos aren’t running AI on a 3080, or even a set of 4090s or whatever. They’re using stacks of A100s with more VRAM than everyone’s GPUs in this thread combined.

If we’re talking about the whole of LLMs, including visual and textual AI... Frankly, while I entirely support and agree with your premise, I can’t quite see how anyone could feasibly utilize these models. For the moment, anything that’s too heavy to run locally is pushed off to something like Colab or Jupyter, and it’d need to be built with the model in mind (from my limited Colab understanding; I only run locally, so I am likely wrong here).

Whether we’ll even want these models is a whole different story too. We know that more data = more results, but we also know that too much data fuzzes specifics. If the model is, say, the entirety of the Internet, it may sound good in theory, but in practice getting usable results will be hell. You want a model with specifics: all dogs and everything dogs, all cats, all kitchen and cookware, etc.

It’s easier to split the data this way for the end user, as this way we can direct the AI to put together an image of a German Shepherd wearing a chef’s hat cooking in the kitchen, with the subject using the dog model and the background using the kitchen model.

So while we may even be able to grab these models from corpos, without the hardware and without any parsing, it's entirely possible that this data will be useless to us.

beigeoat,

The point about GPUs is pretty dumb; you can rent a stack of A100s pretty cheaply for a few hours. I have done it a few times now; on RunPod it’s 0.79 USD per hour per A100.

On the other hand the freely available models are really great and there hasn’t been a need for the closed source ones for me personally.

aldalire,

$0.79 per hour is still ~$568 a month if you’re running it 24/7 as a service.
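(Quick sanity check of that figure, assuming a 30-day month:)

    # $0.79/hr for one A100, rented around the clock
    hourly = 0.79
    monthly = hourly * 24 * 30
    print(f"${monthly:.2f}/month")  # $568.80/month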

Which open source models have you used? I’ve heard that open source image generation with stable diffusion is on par with closed source models, but it’s different with large language models because of the sheer size and type of data they need to train it.

beigeoat,

I have used it mainly for DreamBooth, textual inversion and hypernetworks, just using it for Stable Diffusion. For models I have used the base Stable Diffusion models, Waifu Diffusion, DreamShaper, Anything V3 and a few others.

The 0.79 USD is charged only for the time you use it; if you turn off the container you are charged for storage only. So it is not run 24/7, only when you use it. Also, have you seen the price of those GPUs? That $568/month is a bargain if the GPU won’t be in continuous use for a period of years.

Another important distinction is that LLMs are a whole different beast; running them, even when renting, isn’t justifiable unless you have a large number of paying users. For the really good LLMs with large parameter counts, you need a lot more than just a good GPU: at least 10 NVIDIA A100 80GB cards (Meta’s needs 16: blog.apnic.net/…/large-language-models-the-hardwa…) running for the model to work. This is where pirating and running it yourself can’t be justified on price. It would be cheaper to pay for a closed LLM than to run a pirated instance.
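A rough back-of-envelope sketch of why (the parameter count and overhead factor here are my assumptions, not from the linked post):

    # VRAM needed just to hold a GPT-3-scale model in fp16
    params = 175e9           # assumed parameter count
    bytes_per_param = 2      # fp16 weights
    overhead = 1.2           # assumed headroom for activations / KV cache
    vram_gb = params * bytes_per_param * overhead / 1e9
    print(f"~{vram_gb:.0f} GB -> ~{vram_gb / 80:.1f} x A100 80GB")
    # ~420 GB -> ~5.3 cards for the weights alone; real serving setups
    # add batching capacity and redundancy, hence figures like 10-16 GPUs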

aldalire,

I was thinking the same thing. Do you think there’d be a way to take an existing model and pool our computational resources to produce a result?

All the AI models right now assume there is one beefy computer doing the inference, instead of multiple computers working in parallel. I wonder if there’s a way to “hack” existing models so they can infer across multiple computers working in parallel.

Or maybe, a new type of AI should specifically be developed to be able to achieve this. But yes, getting the models is half the battle. The other half will be to figure out how to pool our computation to run the thing.

wolfshadowheart,

I'm not sure about expanded models, but pooling GPUs is effectively what the Stable Diffusion servers have set up for the AI bots. A bunch of volunteers/mods run a public SD server that gets used as needed; for a 400,000+ member Discord server I helped moderate, this was necessary to keep the bots running with reasonable turnaround on requests.

I think the best we'll be able to hope for is whatever hardware MythicAI was working on with their analog chip.

Analog computing went out of fashion due to its ~97% accuracy rate and the need to be built for specific purposes. For example, building a computer to calculate the trajectory of a hurricane or tornado: the results vary chaotically between runs, but that's effectively what a tornado is anyway.

MythicAI went out on a limb, and the shortcomings of analog computing are actually strengths for running models. If you're 97% sure something is a dog, it's probably a dog, and the computer's 3% error rate is lower than a human's by far. They developed these chips to be used in cameras for tracking, but the premise is promising for any LLM; it just has to be adapted for them. Because of the nature of how they were used and the nature of analog computers in general, they use way less energy and are way more efficient at the task.

Which means that theoretically, one day we could see hardware-accelerated AI via analog computers. No need for VRAM and 400+ watts: MythicAI's chips can take the model request, sift through it, send the analog result to a digital converter, and our computer has the data.

Veritasium has a decent video on the subject, and while I think it's a pipe dream to one day have these analog chips be integrated as PC parts, it's a pretty cool one and is the best thing that we can hope for as consumers. Pretty much regardless of cost it would be a better alternative to what we're currently doing, as AI takes a boatload of energy that it doesn't need to be taking. Rather than thinking about how we can all pool thousands of watts and hundreds of gigs of VRAM, we should be investigating alternate routes to utilizing this technology.

rufus,

Fair point. But we’re talking about piracy here. Just steal it first and then let’s see if we can use it.

MalReynolds,

Akshually, while training models requires (at the moment) massive parallelization and consequently stacks of A100s, inference can be distributed pretty well (see petals for example). A pirate ‘ChatGPT’ network of people sharing consumer graphics cards could probably indeed work if the data was sourced. It bears thinking about. It really does.
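For anyone curious, petals usage looks roughly like this (a sketch based on its README; the model name is just an example and may have changed):

    # pip install petals -- distributed inference over volunteer GPUs
    from transformers import AutoTokenizer
    from petals import AutoDistributedModelForCausalLM

    model_name = "petals-team/StableBeluga2"  # example public swarm model
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    # Transformer blocks are served by volunteer machines across the swarm,
    # so the full model never has to fit on your own card
    model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

    inputs = tokenizer("Distributed inference means", return_tensors="pt")["input_ids"]
    outputs = model.generate(inputs, max_new_tokens=20)
    print(tokenizer.decode(outputs[0]))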

wolfshadowheart,

You definitely can train models locally (I'm doing so myself on a 3080), and we wouldn't be seeing as many public ones online if that weren't possible! But in terms of speed you're definitely right: it's a slow process for us.

MalReynolds,

I was thinking more of training the base models, LLAMA(2), and more topically GPT4 etc. You’re doing LoRA or augmenting with a local corpus of documents, no?

wolfshadowheart,

Ah yeah, my mistake; I'm always mixing up language- and image-based AI models. Training text-based models is much less feasible locally lol.

There's no model for my art style, so I'm creating a checkpoint model using xformers to get around the VRAM requirement; from there I'll be able to speed up variants of my process using LoRAs, but that won't be for some time. I want a good model first.
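(For anyone trying the same VRAM trick: enabling xformers in diffusers is roughly this. The calls are the real diffusers API; the checkpoint name is a placeholder, swap in your own.)

    # Cut VRAM use with xformers memory-efficient attention (diffusers)
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # placeholder checkpoint
        torch_dtype=torch.float16,
    )
    pipe.enable_xformers_memory_efficient_attention()
    pipe.enable_attention_slicing()  # extra VRAM savings at some speed cost
    pipe = pipe.to("cuda")
    image = pipe("a test prompt").images[0]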

MalReynolds,

Fair cop, Godspeed!
