bin.pol.social

Pulp, to piracy in How do the Debrid services get away from copyright?

I assume they still handle DMCA notices, so there's nothing legally wrong.

kratoz29,
@kratoz29@lemm.ee avatar

What do you mean by “handle”?

Apollo2323,

Real Debrid, just like YouTube, is a legal service. If you upload a movie to YouTube and a company sends them a notice to take that movie down, YouTube takes it down and the company is happy :) It's the same with Real Debrid.

kratoz29,
@kratoz29@lemm.ee avatar

But man, the scale must be way smaller then. I mean, it has never stopped me from watching stuff, whether it's old or new content.

Yerbouti, to gaming in Looking for games with unique core mechanics

Outer Wilds is amazing and the mechanic is unique.

tomatobeard, to gaming in What's the funniest game you've played?

No One Lives Forever though it’s ancient now. I remember sneaking around just to listen to the bad guys talking to each other.

10982302,

Really wish GOG could release NOLF1+2 -- I'd love to have them in my library.

Hubi,

NOLF is stuck in license limbo as far as I know. You can get it here for free: nolfrevival.tk

jackie_jormp_jomp,

Wonderful, thank you

boonhet,

I remember a long discussion about correlation vs. causation re: being a criminal and drinking beer.

Fucking love the game, will have to play again.

Pulp, to piracy in This file has 16 detections, is it safe to install it?

Find another source

Aresff,

I searched the entire list of “Android Cracked/Modded App Markets & Repos” but unfortunately no other site has this 1.3.2 modded version.

ForbiddenRoot, to gaming in hardware: use TV as a monitor?

Is this a smart idea?

For Roblox and Minecraft, a TV should be perfectly fine and in fact excellent. I will go out on a limb here and say that even for most ‘real’ games a TV is fine. The latency associated with TVs is most noticeable in FPS games. For other genres like strategy, third-person adventure games etc, I do not think it matters as much if at all. Many people, especially those who have not used a low response / gaming monitor, do not even notice a lag at all (Note: You will find many such people in real life but never ever on the internet). It would be nice of course if your TV had a “Game Mode” which lowers latency, but it may not necessarily be there in a 10-year-old TV (though it was not that uncommon even back then, so do look for it in your TV settings).

Regarding programming on the TV, I think the situation is slightly different. Using small text in general doesn’t work for me at all on a TV. Most TVs, other than OLEDs or recent non-OLED ones, don’t seem to handle text well enough in my experience. There’s either ghosting or some other manner of artifacts which makes the text harder to read compared to a monitor (apart from the distance from TV involved). I commonly see this issue even with office televisions used for mirroring laptop output. Maybe playing around with sharpening and other settings might get it to work well enough though and it really depends on the specific TV in question.

Overall, I feel you should be fine, at least for gaming, but probably for programming as well. I have a couple of gaming rigs hooked up to my living room and bedroom TV’s and I quite enjoy gaming on them. The much larger screens and ability to lounge about while gaming more than make up for any perceived or actual lag for me.

I hope your kid and you have a great time with your new setup. Have fun! :)

Pulp, to piracy in Re-Encode Advice?

I recommend using H.265 and Opus for audio. In my opinion, encoding to H.264 in 2023 is not a wise choice. AV1 is a good option, especially with hardware encoding and compatible devices.
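As a minimal sketch of that recommendation (file names, CRF, and bitrate here are placeholder choices, not settings from this thread), an H.265 + Opus software encode with ffmpeg could be driven like this:

```python
import subprocess

# Placeholder filenames; assumes an ffmpeg build with libx265 and libopus.
cmd = [
    "ffmpeg", "-i", "input.mkv",
    "-c:v", "libx265", "-crf", "22", "-preset", "slow",  # H.265, software encode
    "-c:a", "libopus", "-b:a", "128k",                   # Opus audio
    "output.mkv",
]
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment to run the actual encode
```

Lower `-crf` means higher quality and bigger files; swapping `libx265` for a hardware encoder (e.g. NVENC) is faster but, as noted below, generally less efficient per bit.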

poudi8,
@poudi8@reddthat.com avatar

Also, software encoding only; hardware encoding really isn’t as good.

Pulp,

I would say hardware encoding is good enough for personal use, unless you have a good CPU.

dingus (edited), to piracy in Looking for content related to acting or acting classes. Any chance someone here has good video sources?
@dingus@lemmy.ml avatar

Not acting per se… but the private tracker I am on has every video from MasterClass.

There are several acting focused videos in that set.

d3Xt3r, to gaming in I watched 2 hours of Starfield gameplay and an hour of review

I actually like exploring the universe, but I’ve been pretty disappointed by what I’ve seen so far. They tried to add space-sim elements to it, but did a half-assed job at it. To make things worse, the planets are mostly barren and not worth exploring either.

That said, it is a Bethesda game, so I’m expecting some beefy mods that add more content and immersiveness to the game, and once that’s done, I may consider buying it when it goes on sale.

In the meantime, I’m really looking forward to finally playing Cyberpunk as it was meant to be, with the new Phantom Liberty DLC.

escapesamsara, to piracy in Sanity Check Setup

That’s more secure than most setups, the VPN with killswitch will defeat any and all attacks you’re likely to encounter if you don’t open files on that same VM.

Lettuceeatlettuce,
@Lettuceeatlettuce@lemmy.ml avatar

Awesome, ty!

iHUNTcriminals, do piracy w So how fast do y'all think Starfield will get cracked when the early access goes live tonight?

I just finished it yesterday.

(JK don’t hunt me down and kill me bethnezseanda)

glennglog22,
@glennglog22@kbin.social avatar

Too late, the Pinkertons are on their way.

AphoticDev,
@AphoticDev@lemmy.dbzer0.com avatar

I mean, if it was gonna be anyone, the Pinkertons would be it. I imagine they’re just waiting for the US to become corporate-owned enough that they can operate on US soil without getting in trouble again.

Policeshootout, to gaming in Starfield Review Thread

Thanks for doing this!

Glide, to gaming in What's wrong with the Saints Row reboot again?

Weird question, but why did you buy a game expecting to hate it?

pipariturbiini,
HawlSera,
  1. To say I had all the Saints Row games (except 1, because it’s not on Steam)
  2. Tbf, I didn’t really buy it. My BF randomly surprised me with this and BG3.
Sharpiemarker, to piracy in Multiplayer with DLC unlocker

Steam is about the only platform I wouldn’t mess with. I’ve spent way too much money on my library for Steam to ban my account.

wolfshadowheart,
@wolfshadowheart@kbin.social avatar

I think they only do overwatch and VAC bans at first, no?

smegger, to piracy in Newbie who wants to get into uploading, but is confused about everything around video encoding

The file name contains the basic information about the release:

Batman Begins = title

2005 = release year of the movie

1080p = video resolution

BluRay = source

The rest is encoding and video/audio information about the file.

Framestor = release group name

It’s been forever since I last ripped a video, but I remember videohelp.com being a decent source of information.
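Putting the naming scheme above into code, a small sketch parser (the regex and field names are my own assumptions for illustration, not any scene standard) could look like:

```python
import re

# Hypothetical pattern for names like "Batman.Begins.2005.1080p.BluRay.x264.DTS-FraMeSToR".
RELEASE_RE = re.compile(
    r"^(?P<title>.+?)[. ]"
    r"(?P<year>\d{4})[. ]"
    r"(?P<resolution>\d{3,4}p)[. ]"
    r"(?P<source>BluRay|WEB-DL|WEBRip|HDTV)[. ]"
    r"(?P<codec_info>.+)-(?P<group>[^-]+)$"
)

def parse_release(name: str) -> dict:
    """Split a release name into its fields; returns {} if it doesn't match."""
    m = RELEASE_RE.match(name)
    if not m:
        return {}
    info = m.groupdict()
    info["title"] = info["title"].replace(".", " ")  # dots stand in for spaces
    return info

print(parse_release("Batman.Begins.2005.1080p.BluRay.x264.DTS-FraMeSToR"))
```

Real release names have many more source and codec variants than this pattern covers, so treat it as a starting point only.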

wolfshadowheart, to piracy in Visions of a larger plunder
@wolfshadowheart@kbin.social avatar

Okay, I'm with you but...

how are we using these closed source models?

As of right now I can go to civitai and get hundreds of models created by users to be used with Stable Diffusion. Are we assuming that these closed source models can even be run on local hardware? In my experience, once you reach a certain size there's nothing lay users can do on our hardware, and the corpos aren't running AI on a 3080, or even a set of 4090's or whatever. They're using stacks of A100's with more VRAM than every GPU in this thread combined.

If we're talking the whole of LLMs, including visual and textual AI... Frankly, while I entirely support and agree with your premise, I can't quite see how anyone could feasibly utilize these models. For the moment, anything that's too heavy to run locally is pushed off to something like Colab or Jupyter, and it'd need to be built with the model in mind (from my limited Colab understanding - I only run locally, so I am likely wrong here).

Whether we'll even want these models is a whole different story too. We know that more data = more results, but we also know that too much data fuzzes specifics. If the model is, say, the entirety of the Internet, it may sound good in theory, but in practice getting usable results will be hell. You want a model with specifics - all dogs and everything dogs, all cats, all kitchen and cookware, etc.

It's easier to split the data this way for the end user, as this way we can direct the AI to put together an image of a German Shepherd wearing a chef's hat cooking in the kitchen, with the subject using the dog model and the background using the kitchen model.

So while we may even be able to grab these models from corpos, without the hardware and without any parsing, it's entirely possible that this data will be useless to us.

beigeoat,
@beigeoat@110010.win avatar

The point about GPUs is pretty dumb; you can rent a stack of A100s pretty cheaply for a few hours. I have done it a few times now; on runpod it’s 0.79 USD per hour per A100.

On the other hand the freely available models are really great and there hasn’t been a need for the closed source ones for me personally.

aldalire,

$0.79 per hour is still $568 a month if you’re running it 24/7 as a service.
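For what it's worth, the arithmetic checks out (assuming a 30-day month; the "occasional use" figures are made up for contrast):

```python
# Rental cost comparison at $0.79/hr per A100.
HOURLY = 0.79
always_on = HOURLY * 24 * 30   # 24/7 for a 30-day month
occasional = HOURLY * 2 * 20   # e.g. 2 hours/day, 20 days/month
print(f"24/7: ${always_on:.2f}/mo, occasional: ${occasional:.2f}/mo")
# 24/7: $568.80/mo, occasional: $31.60/mo
```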

Which open source models have you used? I’ve heard that open source image generation with stable diffusion is on par with closed source models, but it’s different with large language models because of the sheer size and type of data they need to train it.

beigeoat,
@beigeoat@110010.win avatar

I have used it mainly for dreambooth, textual inversion and hypernetworks, just using it for Stable Diffusion. For models I have used the base Stable Diffusion models, Waifu Diffusion, DreamShaper, Anything V3 and a few others.

The 0.79 USD is charged only for the time you use it; if you turn off the container you are charged for storage only. So it is not run 24/7, only when you use it. Also, have you seen the price of those GPUs? That $568/month is a bargain if the GPU won’t be in continuous use for a period of years.

Another important distinction is that LLMs are a whole different beast; running them even when renting isn’t justifiable unless you have a large number of paying users. For the really good LLMs with large numbers of parameters you need a lot more than just a good GPU - at least 10 of the NVIDIA A100 80GB (Meta’s needs 16: blog.apnic.net/…/large-language-models-the-hardwa…) running for the model to work. This is where the price to pirate and run it yourself cannot be justified. It would be cheaper to pay for a closed LLM than to run a pirated instance.
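A back-of-envelope sketch of why the GPU counts get so high (fp16 weights only, ignoring activations and KV cache; the parameter counts are illustrative, not tied to any specific model in this thread):

```python
def weight_vram_gb(params_billion: float, bytes_per_param: float = 2.0) -> float:
    """GB needed just to hold the weights: 1e9 params * bytes / 1e9 bytes-per-GB."""
    return params_billion * bytes_per_param

A100_GB = 80
for p in (7, 70, 175):
    gb = weight_vram_gb(p)
    cards = int(-(-gb // A100_GB))  # ceiling division
    print(f"{p}B params: ~{gb:.0f} GB of weights -> at least {cards} x A100 80GB")
```

Real deployments need headroom on top of the weights, which is how you land at 10+ cards for the largest models.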

aldalire,

I was thinking the same thing. Would you think there’d be a way to take an existing model and pool our computational resources to produce a result?

All the AI models right now assume there is one beefy computer doing the inference, instead of multiple computers working in parallel. I wonder if there’s a way to “hack” existing models right now so they can be used to infer with multiple computers working in parallel.

Or maybe, a new type of AI should specifically be developed to be able to achieve this. But yes, getting the models is half the battle. The other half will be to figure out how to pool our computation to run the thing.

wolfshadowheart,
@wolfshadowheart@kbin.social avatar

I'm not sure about for expanded models, but pooling GPUs is effectively what the Stable Diffusion servers have set up for the AI bots. A bunch of volunteers/mods run a public SD server that is used as needed - for a 400,000+ member Discord server I helped moderate, this is quite necessary to keep the bots running with reasonable upkeep for requests.

I think the best we'll be able to hope for is whatever hardware MythicAI was working on with their analog chip.

Analog computing went out of fashion due to its ~97% accuracy rate and the need to be built for specific purposes. For example, building a computer to calculate the trajectory of a hurricane or tornado - the results when repeated are all chaos, but that's effectively what a tornado is anyway.

MythicAI went out on a limb, and the shortcomings of analog computing are actually strengths for running models. If you're 97% sure something is a dog, it's probably a dog, and the 3% error rate of the computer is far lower than humans'. They developed these chips to be used in cameras for tracking, but the premise is promising for any LLM; it just has to be adapted for them. Because of the nature of how they were used and the nature of analog computers in general, they use way less energy and are way more efficient at the task.

Which means that theoretically one day we could see hardware-accelerated AI via analog computers. No need for VRAM and 400+ watts; MythicAI's chips can take the model request, sift through it, send that analog data to a digital converter, and our computer has the data.

Veritasium has a decent video on the subject, and while I think it's a pipe dream to one day have these analog chips be integrated as PC parts, it's a pretty cool one and is the best thing that we can hope for as consumers. Pretty much regardless of cost it would be a better alternative to what we're currently doing, as AI takes a boatload of energy that it doesn't need to be taking. Rather than thinking about how we can all pool thousands of watts and hundreds of gigs of VRAM, we should be investigating alternate routes to utilizing this technology.

rufus,

Fair point. But we’re talking about piracy here. Just steal it first and then let’s see if we can use it.

MalReynolds,
@MalReynolds@slrpnk.net avatar

Akshually, while training models requires (at the moment) massive parallelization and consequently stacks of A100s, inference can be distributed pretty well (see petals for example). A pirate ‘ChatGPT’ network of people sharing consumer graphics cards could probably indeed work if the data was sourced. It bears thinking about. It really does.

wolfshadowheart,
@wolfshadowheart@kbin.social avatar

You definitely can train models locally - I am doing so myself on a 3080, and we wouldn't see as many public ones online if you couldn't! But in terms of speed you're definitely right, it's a slow process for us.

MalReynolds,
@MalReynolds@slrpnk.net avatar

I was thinking more of training the base models, LLAMA(2), and more topically GPT4 etc. You’re doing LoRA or augmenting with a local corpus of documents, no?

wolfshadowheart,
@wolfshadowheart@kbin.social avatar

Ah yeah my mistake I'm always mixing up language and image based AI models. Training text based models is much less feasible locally lol.

There's no model for my art, so I'm creating a checkpoint model using xformers to bypass the VRAM requirement, and from there I'll be able to speed up variants of my process using LoRAs, but that won't be for some time - I want a good model first.

MalReynolds,
@MalReynolds@slrpnk.net avatar

Fair cop, Godspeed!
