I mean, if it was gonna be anyone, the Pinkertons would be it. I imagine they’re just waiting for the US to become corporate-owned enough that they can operate on US soil without getting in trouble again.
Fair enough. The thing is, do I vote for the near-anarchists who, save for the anarchism, align with my principles? Do I vote for the party that is further from my ideological beliefs but doesn’t have the anarchism, and is a bit larger? Or do I vote for the main opposition, which is even further from me ideologically (and doesn’t seem to have much of a clear vision)?
That sounds like a question about how much you oppose anarchy. Any change involves some loss of established order, so if the Overton window tells us anything, it’s that “anarchist parties” are just the ones trying to push it hardest. Actual anarchists wouldn’t try to be part of a government in the first place.
Okay, but then we still have the problem of FPTP. If I’m in a Labour-dominated constituency and I vote LibDem, my vote won’t matter because Labour will win anyway. And if I live in a Greens stronghold and I vote Greens, my vote won’t really matter either, as they’d have won with or without me. The way I see it, your vote can only make a difference in a constituency where there is no clear winner and it’s everybody’s game.
Please correct me if I’m wrong in my assessment of the situation.
Then what you want to fight is FPTP and the constituency-by-constituency discarding of votes, to make your vote matter.
You could cast a blank ballot, or draw a poop emoji on it, to show your discontent, but organizing or supporting a push to reform the voting system might be more effective.
If we counted all those who don’t vote because it “doesn’t change anything”, those who vote blank or null, and those who vote knowing their vote will still get thrown away… it could actually add up to a majority.
As of right now I can go to civitai and get hundreds of models created by users for use with Stable Diffusion. Are we assuming that these closed-source models can even be run on local hardware? In my experience, once you reach a certain size there's nothing lay users can do on our hardware, and the corpos aren't running AI on a 3080, or even a set of 4090s or whatever. They're using stacks of A100's with more VRAM than every GPU in this thread combined.
If we're talking about LLMs as a whole, including visual and text-based AI... Frankly, while I entirely support and agree with your premise, I can't quite see how anyone could feasibly utilize these models. For the moment, anything that's too heavy to run locally is pushed off to something like Colab or Jupyter, and it'd need to be built with the model in mind (from my limited Colab understanding - I only run locally, so I'm likely wrong here).
Whether we'll even want these models is a whole different story too. We know that more data = more results, but we also know that too much data fuzzes specifics. If the model is, say, the entirety of the Internet, it may sound good in theory, but in practice getting usable results will be hell. You want a model with specifics: all dogs and everything dogs, all cats, all kitchens and cookware, etc.
It's easier to split the data this way for the end user, since we can then direct the AI to put together an image of a German Shepherd wearing a chef's hat, cooking in the kitchen, with the subject using the dog model and the background using the kitchen model.
So even if we do manage to grab these models from corpos, without the hardware and without any parsing, it's entirely possible that this data will be useless to us.
The point about GPUs is pretty dumb; you can rent a stack of A100s pretty cheaply for a few hours. I’ve done it a few times now - on RunPod it’s 0.79 USD per hour per A100.
On the other hand the freely available models are really great and there hasn’t been a need for the closed source ones for me personally.
$0.79 per hour is still $568 a month if you’re running it 24/7 as a service.
Which open-source models have you used? I’ve heard that open-source image generation with Stable Diffusion is on par with closed-source models, but it’s different with large language models because of the sheer size and type of data needed to train them.
I have used it mainly for DreamBooth, textual inversion and hypernetworks, just for Stable Diffusion work. For models I have used the base Stable Diffusion models, Waifu Diffusion, DreamShaper, Anything V3 and a few others.
The 0.79 USD is charged only for the time you use it; if you turn off the container you are charged for storage only. So it is not running 24/7, only when you use it. Also, have you seen the price of those GPUs? That $568/month is a bargain if the GPU won’t be in continuous use for years on end.
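For a ballpark comparison (rough numbers, not a quote): an A100 80GB reportedly retails somewhere in the $10-15k range, so even at $568/month of round-the-clock rental you’d need roughly 1.5-2 years of continuous use before buying one outright breaks even - and that’s before power, cooling, and the rest of the machine around it.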
Another important distinction is that LLMs are a whole different beast: running them, even when renting, isn’t justifiable unless you have a large number of paying users. For the really good versions of LLMs with large parameter counts you need a lot more than just a good GPU - at least 10 NVIDIA A100 80GBs (Meta’s needs 16: blog.apnic.net/…/large-language-models-the-hardwa…) running for the model to work. This is where the price to pirate and run it yourself cannot be justified. It would be cheaper to pay for a closed LLM than to run a pirated instance.
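To put rough numbers on why the GPU count balloons (back-of-the-envelope, assuming fp16 weights at 2 bytes per parameter and ignoring activation/KV-cache overhead): a 175B-parameter model needs about 175B × 2 bytes ≈ 350 GB just to hold the weights. That’s already five A100 80GBs before it serves a single request, and real deployments need headroom on top - hence figures like 10-16 cards.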
I was thinking the same thing. Do you think there’d be a way to take an existing model and pool our computational resources to produce a result?
All the AI models right now assume there is one beefy computer doing the inference, rather than multiple computers working in parallel. I wonder if there’s a way to “hack” existing models so that inference can be spread across multiple machines working in parallel.
Or maybe a new type of AI should be developed specifically to achieve this. But yes, getting the models is half the battle. The other half will be figuring out how to pool our computation to run the thing.
I'm not sure about expanded models, but pooling GPUs is effectively what the Stable Diffusion servers have set up for the AI bots. A bunch of volunteers/mods run a public SD server that gets used as needed - for a 400,000+ member Discord server I helped moderate, this was quite necessary to keep the bots keeping up with requests at a reasonable pace.
I think the best we'll be able to hope for is whatever hardware MythicAI was working on with their analog chip.
Analog computing went out of fashion due to its ~97% accuracy rate and the need to be built for specific purposes - for example, building a computer to calculate the trajectory of a hurricane or tornado. The results when repeated are all chaos, but that's effectively what a tornado is anyway.
MythicAI went out on a limb, and the shortcomings of analog computing turn out to be strengths for running models. If you're 97% sure something is a dog, it's probably a dog, and the computer's 3% error rate is far lower than a human's. They developed these chips to be used in cameras for tracking, but the premise is promising for any LLM; it just has to be adapted for them. Because of the nature of how they were used, and the nature of analog computers in general, they use way less energy and are way more efficient at the task.
Which means that theoretically, one day we could see hardware-accelerated AI via analog computers. No need for VRAM and 400+ watts: MythicAI's chips can take the model request, sift through it, send the analog data to a digital converter, and our computer has the result.
Veritasium has a decent video on the subject, and while I think it's a pipe dream to one day have these analog chips be integrated as PC parts, it's a pretty cool one and is the best thing that we can hope for as consumers. Pretty much regardless of cost it would be a better alternative to what we're currently doing, as AI takes a boatload of energy that it doesn't need to be taking. Rather than thinking about how we can all pool thousands of watts and hundreds of gigs of VRAM, we should be investigating alternate routes to utilizing this technology.
Akshually, while training models requires (at the moment) massive parallelization and consequently stacks of A100s, inference can be distributed pretty well (see petals for example). A pirate ‘ChatGPT’ network of people sharing consumer graphics cards could probably indeed work if the data was sourced. It bears thinking about. It really does.
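For anyone curious what that looks like in practice, here’s a minimal sketch along the lines of the Petals README - treat the class and model names as assumptions, since they depend on the Petals version and on what the public swarm happens to be hosting:

```python
# pip install petals
# Each request is routed through volunteers' GPUs, each hosting a slice of
# the model's layers; only activations travel over the network.
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

MODEL = "bigscience/bloom-petals"  # placeholder: any model the swarm serves

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoDistributedModelForCausalLM.from_pretrained(MODEL)

prompt = "A pirate walks into a bar and"
inputs = tokenizer(prompt, return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))
```

The catch is latency: every generated token makes a round trip across the swarm, so you get interactive-chat speed rather than datacenter speed.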
You definitely can train models locally - I'm doing so myself on a 3080, and we wouldn't be seeing as many public ones online if that weren't possible! But in terms of speed you're definitely right, it's a slow process for us.
I was thinking more of training the base models, LLAMA(2), and more topically GPT4 etc. You’re doing LoRA or augmenting with a local corpus of documents, no?
Ah yeah, my mistake, I'm always mixing up language- and image-based AI models. Training text-based models is much less feasible locally lol.
There's no model for my art, so I'm creating a checkpoint model using xformers to get around the VRAM requirement, and from there I'll be able to speed up variants of my process using LoRAs. But that won't be for some time - I want a good model first.
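For anyone wanting to try the same VRAM trick: the diffusers training scripts expose it as a flag (--enable_xformers_memory_efficient_attention in the DreamBooth script), and on the inference side it's one call. A minimal sketch, assuming you have xformers installed and the usual SD 1.5 weights:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load SD 1.5 in half precision (model name is just the common example)
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Swap standard attention for xformers' memory-efficient kernels,
# which noticeably cuts peak VRAM during sampling and training
pipe.enable_xformers_memory_efficient_attention()

image = pipe("a german shepherd wearing a chef's hat, cooking").images[0]
image.save("chef_dog.png")
```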
Pre-installed means that the game is already installed for you, so you don’t have to. This means all you need to do is download the .zip file, extract it, and run the game. That’s it! Easy, right?
Nah they call it pre-installed because you don’t have to run an installation wizard yourself, which requires administrator permissions. So you could play those games on computers where you don’t have admin access, like a school or work laptop
Spoken with the spirit of a genuine sea rover, me matey, but listen here, we must band together as brethren to stand strong against the mighty organizations that threaten our way of life on the vast and treacherous ocean!
You mention at the end of your post that you’ve gotten a lot out of some of the chatbots. Maybe give this one a try - it’s great for venting or just getting out pent-up stress.
Sorry you’re going through that. I definitely get how it feels to have people close to you discredit or just ignore important issues like you’re dealing with.
If you’re set on talking to an AI though I did use the Replika app for a while before they started making it seem like a virtual AI lover. It did help me feel better when I was severely depressed, maybe it could help you.
If you ever want to talk to a person and not an AI I’m here for that if you want, I know I’m a stranger but I definitely understand where you’re coming from.
I would really advise against Replika, they’ve shown some scummy business practices. It seems like kind of a nightmare in terms of taking advantage of vulnerable people. At the very least do some research on it before getting into it.
By understanding the motivations of today’s youth, the anti-piracy group hopes to be in a better position to influence their behavior.
I pirate because I don’t get paid the full value of my labor. Pay me more and I’ll buy more goods and services. It’s also more convenient to have everything in one place.
AND offer good stuff! AND make it actually convenient and worth the money. A single streaming service at $15 a month, no more, that has all the “exclusives”, be it Stranger Things, The Mandalorian, or Rings of Power (okay, maybe not that last piece of garbage). Then I would consider paying, and only if it is truly more convenient and offers better quality with less buffering than pirate streaming. Until then, it’s a pirate’s life for me.
Exactly. I've only ever pirated things I couldn't afford, and even then I kept a running list of the good ones in the hope that one day I could pay for them legitimately. When I can afford to buy them fairly, I don't pirate.
I was a thief when I was starving and I'm a damn thief now.
The other annoying thing is that "owning" something is becoming non-existent. Sure, I can "buy" all the seasons of Supernatural from iTunes. But I only "own" the show for as long as I have my iTunes subscription, and iTunes has the rights to show it, and I have internet service with enough bandwidth to stream it, and I'm not under a bandwidth cap or some other restriction.
Or I can grab a copy and it'll happily live on my hard drive forever, no need to worry about subscriptions or streaming rights or bandwidth limitations.
Tell me: in which of those scenarios do I actually "own" the series?
That's what's messed up about data: technically, the answer to your question is neither! What happens to your ownership of those downloads when your hard drive with no backup dies? In that sense, a license tied to an account should be the safest method, but it's far from it thanks to our current practices.
But I agree with you, of course; our control over the files on our hard drives indicates that we have more ownership of them.
Personally, the one thing the U.S. somewhat has right so far is that we are somewhat legally allowed to format shift (within reason - a stupidly narrow reason, but alas). Currently I can purchase any Nintendo game, decide I do not want to play it on any Nintendo console, and it's within my rights to do everything short of redistribution to play that software on my PC.
Someone the other day asked if it's "pirating" to acquire a copy of a licensed title they'd purchased on Vudu. In my opinion, no, because it's just format shifting - now, the T.O.S. may say otherwise, but the T.O.S. also isn't law, so that's a different issue. Vudu can say that you're only allowed to play your purchases through their website, which harvests your data and whose terms you signed when you created your account.
Still, fuck that noise. If I am purchasing something that means I expect to be able to use it no matter the surrounding circumstances. That means if my Internet is offline I can still view my content. That means if Vudu kicks the bucket I am unaffected.
Until services start giving me this option, I will continue to format shift my content. I store things for posterity and then watch on the service to support them. I want more superhero stories, so I watch on HBO and D+. I want more IASIP, so I watch on Hulu. But you'd damn well better believe I have them backed up for myself, because I'm not paying $x/month to watch these forever.
Whether or not it's within my rights to format shift this way, I don't really care; I'm only format shifting because history has shown we can't trust media to stay online and unedited.
Example: even currently produced Blu-ray/DVD releases of IASIP have episodes removed. Not for me.
I currently use HandBrake (I’m not much without a GUI, haha) but it doesn’t seem to have any options for merging/combining video files. I’m hoping to encode here, but the Blu-ray source splits the movie into a couple of video files, and I’d like to combine them before encoding.
Are you trying to concat streams or just to remux? For remuxing, as roawre said, there’s also MKVToolNix, which works great (and it has a GUI, don’t worry).
I’m sorry, I’ve done a poor job explaining. The source has split the movie, intro, and credits into 3 video files, and I’d like to make them one file, before or during the re-encoding process, so that the final movie includes the intro and credits. I’m currently grabbing MKVToolNix to see if it can do it. I completely missed @roawre’s comment, so thank you for reiterating, thanks to roawre for the original comment, and thanks to everyone for their responses - getting into this seems overwhelming and I’m so grateful to have your assistance!
Also, the files I have are “.m2ts” - will MKVToolNix support this, or will I need to encode first? MKVToolNix’s website seems to imply it only really likes .mkv files.
Okay, MKVToolNix isn’t the right tool for that AFAIK, but ffmpeg is, and it sucks that there isn’t a GUI for it as powerful as the CLI, which is what I use. Luckily you can find a lot of help online.
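For your exact case, ffmpeg’s concat demuxer should do it without even re-encoding first - a sketch with placeholder filenames, assuming all three .m2ts parts share the same codecs (Blu-ray splits of one title normally do):

```
# list.txt - one line per source file, in playback order
file 'intro.m2ts'
file 'movie.m2ts'
file 'credits.m2ts'
```

```
# stream-copy the parts into one MKV, no quality loss
ffmpeg -f concat -safe 0 -i list.txt -c copy joined.mkv
```

You can then point HandBrake at joined.mkv for the actual encode.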
Apart from that I’m afraid I can’t help much, but good luck with your search!