piracy


supervent, in got the disk space and the bandwidth to spare so

You could install a tor relay or i2p

salarua,
@salarua@sopuli.xyz avatar

my dream is to build my own NAS. it would handle everything i need: it would be a Nextcloud, media server, website host, Matrix server, Minecraft server, and when i’m not doing anything with it at the moment i’ll have it donate its time to seeding and relaying

UnRelatedBurner,

if you're that passionate about NASs, may I ask how one negates data loss if lightning were to strike? Or a fire?

I get RAID and all that, but no number of local copies helps if they all burn at once.

Same with lightning: lightning rods are a thing, so maybe that? Idk what would get damaged if a lightning strike passed through your house, wired or not, since electromagnetic fields are a thing.

JackGreenEarth,

I suppose remote backup is the only option for something that destroys everything in the area, but raid is essential anyway.

UnRelatedBurner,

makes sense, I was hoping for a cheaper answer. Buying land somewhere (because renting a server is the same as cloud storage, isn't it?) is probably expensive.

JackGreenEarth,

If you know someone who lives somewhere else and also has a NAS, you can help each other by using each other for remote backup.

UnRelatedBurner,

sadly I don't, so now I need to talk someone into this… I don't even know who'd be interested. But great idea, though it needs a lot of administrative work. And leaving an open connection (password-protected, but still an open port) to a storage server 24/7 doesn't sound very safe.

JackGreenEarth,

You could do a scheduled backup/sync every day/week, if you don’t want the port open always.
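A minimal sketch of what that scheduled push could look like, assuming a hypothetical /srv/data directory to back up and a friend's NAS reachable over SSH as backup@friend-nas (both placeholders); run it from cron so nothing needs to stay open 24/7:

```python
#!/usr/bin/env python3
"""One-way scheduled backup to a friend's NAS over SSH.

Hypothetical example: run it weekly from cron, e.g.
    0 3 * * 0  /usr/local/bin/backup_to_friend.py
so the connection only exists while the job runs.
"""
import subprocess
import sys

SOURCE = "/srv/data/"                      # local data to back up (placeholder)
DEST = "backup@friend-nas:/backups/mine/"  # hypothetical remote user/host/path

def main() -> int:
    # -a preserves permissions/times, -z compresses in transit,
    # --delete mirrors deletions so the remote copy matches the source.
    result = subprocess.run(["rsync", "-az", "--delete", SOURCE, DEST])
    return result.returncode

if __name__ == "__main__":
    sys.exit(main())
```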

UnRelatedBurner,

good idea! thx

koper,

raid is essential anyway

Why? If there are offsite backups that can be restored in an acceptable time frame, what’s still the point of RAID?

lazyslacker,

I’d say it depends on your circumstances and your tolerance to the possibility of data loss. The general answer to the question is that without using some kind of redundancy, either mirrored disks or RAID, the failure of a single disk would mean you lose your data. This is true for each copy of your data that you have.

mwgreatest,

Better uptime, and it covers recent files that haven't been put on your offsite backup yet.

salarua,
@salarua@sopuli.xyz avatar

i’ll have to look more into that. the obvious answer is “keep it off site”, but that only applies if you’re doing backups. if it’s a NAS with several different purposes like the one i want, i’m not actually sure. i’ll keep reading about it

UnRelatedBurner,

Alright, good luck with that, and thx.

lazyslacker,

Off-site backup is the proper answer to your question. All this really depends on your own tolerance or comfort with the possibility of losing data. The rule of thumb is that there should be at least three different copies of your data, each in a different physical location. For each of them, there should be redundancy of some kind implemented to guard against hardware failure. Redundancy is typically achieved by using mirrored drives or by using RAID of some kind. Also, if you’d like to know, using RAID in which you can only lose one disk in the array is not typically considered a sufficient level of protection because of the possibility of a cascading drive failure during replacement of a failed disk. It should be at least two.

UnRelatedBurner,

“cascading drive failure” the what now? How do drives die in a domino effect?

three locations seem a bit much, but I totally understand it. Safe storage is tedious, huh.

cryptowillem,

Drives in a NAS age at about the same rate between them. If you had multiple drives around the same age or from the same manufacturing batch, there’s a higher chance they fail around the same age. After one disk in the array fails, you can insert a new drive and rebuild the array, but during the rebuild, all your drives are in heavier use than normal operation. If you only have one disk redundancy, you’re vulnerable until that rebuild is complete.
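To put rough numbers on that risk, here is a back-of-the-envelope sketch. The failure rate, rebuild window, and drive count are illustrative, and it assumes independent failures, which understates the danger for the same-batch drives described above:

```python
# Rough odds that a second drive dies while a degraded array is rebuilding.
# Illustrative numbers only; real-world risk is higher for same-age/same-batch
# drives under heavy rebuild I/O.

HOURS_PER_YEAR = 8760

def p_fail_during_rebuild(afr: float, rebuild_hours: float) -> float:
    """Chance one drive fails within the rebuild window, given its
    annualized failure rate (AFR)."""
    return 1 - (1 - afr) ** (rebuild_hours / HOURS_PER_YEAR)

def p_second_failure(n_remaining: int, afr: float, rebuild_hours: float) -> float:
    """Chance at least one of the surviving drives fails mid-rebuild."""
    p = p_fail_during_rebuild(afr, rebuild_hours)
    return 1 - (1 - p) ** n_remaining

# Example: a 6-bay array loses one disk, 5 survive, 2% AFR, 24-hour rebuild.
print(f"{p_second_failure(n_remaining=5, afr=0.02, rebuild_hours=24):.4%}")
# ~0.03% per rebuild: small but not zero, which is the argument for
# two-disk redundancy made earlier in the thread.
```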

UnRelatedBurner,

oh wow, makes sense. It's a very slim chance, but not zero. But doesn't a three-mirror setup have the same vulnerability?

So if the scenario is that we bought two drives of the same type and use them equally, they'll die at the same time. That sentence stays true if we up the number.

empireOfLove, in got the disk space and the bandwidth to spare so
@empireOfLove@lemmy.one avatar

Doing the Lord’s work. Godspeed.

toxictenement, in What torrents should I seed for yarrr
@toxictenement@lemmy.dbzer0.com avatar

Search for DVDRip, filter by upload date and check for ones that aren’t well seeded. See if you can find ones that don’t have any other rips available, and seed whatever looks at-risk. Godspeed!
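If you pull a list of candidates out of a site (export, API, whatever), the triage is just a sort and filter. A sketch with entirely made-up data, only to illustrate the "old upload, few seeders" rule of thumb described above:

```python
# Hypothetical triage of candidate torrents; the entries are made up.
from datetime import date

candidates = [
    {"name": "Some.Film.1998.DVDRip", "seeders": 1, "uploaded": date(2009, 4, 2)},
    {"name": "Other.Film.2003.DVDRip", "seeders": 40, "uploaded": date(2015, 7, 9)},
    {"name": "Rare.Doc.1987.DVDRip", "seeders": 0, "uploaded": date(2007, 1, 15)},
]

AT_RISK_SEEDERS = 3  # arbitrary threshold for "not well seeded"

at_risk = sorted(
    (t for t in candidates if t["seeders"] <= AT_RISK_SEEDERS),
    key=lambda t: (t["seeders"], t["uploaded"]),  # fewest seeders, oldest first
)

for t in at_risk:
    print(f"{t['name']}: {t['seeders']} seeders (uploaded {t['uploaded']})")
```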

fraydabson, in Radarr + Real-Debrid setup guide?

Not sure if you use docker but I found this - github.com/hyperbunny77/RealDebridManager

I really like Real-Debrid, but my preferred method is using github.com/itsToggle/plex_debrid. I've been using this for a while now and it's great.

fungos,

plex_debrid looks like the way to go. I was put off by its name since I use Jellyfin, but reading more closely, it works with Jellyfin, so cutting out the middleman (Radarr) seems like a very good solution!

Is there anything special to know about it before trying myself? Any issues or roadblocks you had when setting this up?

Thanks for the answer!

fraydabson,

Yeah, I've tried it with both Plex and Jellyfin. Plex has a more seamless experience with discovering new shows and movies and adding them to a wishlist to see them available instantly. Though as I go more FOSS, I think it's time to move back to Jellyfin, with Jellyseerr to find and request new media. It's all pretty simple!

I use it with Docker, along with his fork of rclone with Real-Debrid support, also in Docker. Works great! I'd say my biggest annoyance is that sometimes Real-Debrid gets a weird title for a show or movie, which doesn't mix well with how his rclone fork works.

His fork uses regex to parse names and move them into either a movie or TV show directory. If it can't decide which one a name belongs to, it goes to a default folder. Plex handles this better than Jellyfin from what I remember: you would add the tv and default folders to the TV library, and the movie and default folders to the movie library. Jellyfin would get confused, and there's no way to rename or move the file; you just need to find a different torrent. You can customize the regex and scraping profiles and even integrate it with torrentio.
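As an illustration of that regex-sorting idea, here is a toy classifier. The patterns and folder names are guesses for the sake of the example, not the actual rules the rclone fork uses:

```python
# Toy version of "regex decides tv vs movie vs default". Patterns and folder
# names are illustrative guesses, not the rclone_RD fork's real rules.
import re

TV_PATTERN = re.compile(r"(S\d{1,2}E\d{1,2}|Season[ ._-]?\d+)", re.IGNORECASE)
MOVIE_PATTERN = re.compile(r"\b(19|20)\d{2}\b")  # a bare year usually means a film

def classify(release_name: str) -> str:
    if TV_PATTERN.search(release_name):
        return "tv"
    if MOVIE_PATTERN.search(release_name):
        return "movies"
    return "default"  # ambiguous names land here, which is what trips Jellyfin up

print(classify("Some.Show.S02E05.1080p"))  # tv
print(classify("Some.Film.2019.1080p"))    # movies
print(classify("weird_release_name"))      # default
```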

But exciting news! On August 27 he announced a beta update for his rclone fork that will allow renaming/moving files and folders, as well as creating new folders and deleting parts of torrents instead of the whole thing. This is huge: it will make operating it through Jellyfin much easier and prevent the issue I mentioned.

fungos,

Interesting. I installed it, but I may be doing something wrong with my setup, because just using Jellyseerr is not triggering an RD download. First of all, using Jellyseerr required Radarr anyway; the setup is as follows:

  1. Jellyseerr requests a movie/show and passes that request to Radarr
  2. Radarr tries to search for a torrent via indexers (not working right now)
  3. Download via a torrent blackhole, which basically monitors the plex_debrid folders

Still, it's not working, as things aren't communicating with each other correctly and I didn't set up any indexers (or Jackett).

My setup feels wrong or too complex; can you give a bit more detail on yours? How do the parts communicate? :)

fraydabson,

Hmmm. I don't use Radarr, but this is how my workflow went for Jellyfin and Jellyseerr.

These are the steps from his GitHub but for my Jellyfin setup.

  1. Mount your Real-Debrid account via API token and rclone_RD. You know you did this right when you can browse the newly mounted Real-Debrid directory and see all the shows and movies currently in your Debrid account.
  2. Set up Jellyfin as normal, making sure to set up your libraries to use the Debrid mount: tv shows and default for the TV library, movies and default for the movie library.
  3. Launch the plex_debrid main .py file and go through the configuration. For example:

First you choose a content service, which for you would be Jellyseerr.

Next you need a library collection service (which might be the source of the confusion): you do need to use either Trakt or Plex so that plex_debrid knows what you currently have in your library. Given you are doing a Jellyfin setup, it's probably best to use Trakt, which means you need to hook Trakt up to your Jellyfin library so it knows what you've already downloaded. If I'm remembering correctly, this plugin is how I did it for Trakt + Jellyfin: github.com/jellyfin/jellyfin-plugin-trakt

So now when you add a request via Jellyseerr, plex_debrid will first scan your Trakt library to see if you already have it. If it doesn't find it, it will push your request directly into your Debrid account after scraping for the best torrent.

The next step in the plex_debrid setup is the library update service, which you can set to Jellyfin, so that once Real-Debrid caches your torrent it will force a refresh of your full Jellyfin library to scan for new content.

Then there are a few optional steps I'll explain below, but the last important step is **Debrid services**, which is where you tell plex_debrid what your Real-Debrid account is via API key.

So the full workflow would be: request a TV show or movie via Jellyseerr, which checks Trakt to see if you already have it and, if not, pushes it to torrentio to find a torrent for your request. Once found, plex_debrid uses your Debrid API key to automatically load the torrent into Real-Debrid, waits for Debrid to finish downloading, and once complete refreshes your Jellyfin library so you can watch it.
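For anyone who follows data flow better than prose, the workflow above boils down to roughly this loop. It is a heavily simplified sketch, not plex_debrid's actual code; every helper is a stub standing in for one of the services named in the setup:

```python
# Simplified sketch of the request pipeline described above -- NOT
# plex_debrid's real code. Each helper is a stub standing in for a service
# (Jellyseerr, Trakt, torrentio, Real-Debrid, Jellyfin).

RD_API_KEY = "changeme"  # placeholder

def fetch_jellyseerr_requests() -> list[str]:
    return ["Some Show S01", "Some Film (2019)"]   # stand-in: content service

def already_in_library(title: str) -> bool:
    return False                                   # stand-in: Trakt library check

def scrape_best_torrent(title: str) -> str | None:
    return "magnet:?xt=urn:btih:..."               # stand-in: torrentio scraper

def add_to_real_debrid(magnet: str, api_key: str) -> None:
    print(f"would add to Real-Debrid: {magnet}")   # stand-in: debrid service

def refresh_jellyfin_library() -> None:
    print("would trigger a Jellyfin library scan") # stand-in: library update service

def run_once() -> None:
    for title in fetch_jellyseerr_requests():
        if already_in_library(title):
            continue
        magnet = scrape_best_torrent(title)
        if magnet is None:
            continue
        add_to_real_debrid(magnet, api_key=RD_API_KEY)
        refresh_jellyfin_library()   # after Real-Debrid reports the torrent cached

if __name__ == "__main__":
    run_once()   # plex_debrid runs this kind of loop continuously
```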

To clarify, for a setup identical to mine you wouldn't be using Radarr or a blackhole; the plex_debrid Python script takes care of that for you.

That's really it, and the rest is optional to configure. For the library ignore service you can use a local ignore list, a Trakt list, or a local file, so it knows what you've watched and doesn't try to get it again.

The next optional step is scraper services; I usually leave this as the default, which scrapes using torrentio.

fungos,

Awesome and thanks a lot for putting the time to explain it like this.

So for some reason I got sidetracked with Radarr and didn't see the need for Trakt anywhere, but that seems to be the missing piece in all this.

This also shows that the Plex workflow is more seamless (no Overseerr/Jellyseerr needed, no Trakt needed) than the Jellyfin one right now.

Reading the plex_debrid code, it seems there's some initial code for scanning the current Jellyfin library, so finishing that code could remove the need for Trakt.

Now, one advantage of using Radarr is that it will move and rename incoming files to a standard naming scheme; I think for this feature alone it's worth keeping in the workflow.

So it seems like I'll need to fix plex_debrid to understand the existing Jellyfin library and remove the need for Trakt!

Thanks a lot!

fraydabson,

Awesome stuff! If you do make a fork or PR for seamless Jellyfin integration, let me know! That'd be awesome. I know he's been super busy lately and hasn't been able to update as much as he wants.

fraydabson,

Hey if you do decide to pursue finishing his Jellyfin library code you should definitely check out the discord. They have a channel for development.

Namely I saw a post from the dev back in February saying the reason he didn’t finish Jellyfin libraries, and still relies on Trakt, is because

Jellyfin doesn’t give out the IMDb ids of stored files easily.
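For what it's worth, the ids are reachable over Jellyfin's HTTP API by asking for the ProviderIds field; here is a rough sketch of that query (the endpoint and field names are to the best of my knowledge, and the URL and API key are placeholders, so verify against your own instance):

```python
# Rough sketch: list IMDb ids known to a Jellyfin server via its HTTP API.
# Server URL and API key are placeholders; endpoint/field names are to the
# best of my knowledge -- double-check against your own instance.
import requests

JELLYFIN_URL = "http://localhost:8096"  # placeholder
API_KEY = "changeme"                    # create one under Dashboard -> API Keys

resp = requests.get(
    f"{JELLYFIN_URL}/Items",
    headers={"X-Emby-Token": API_KEY},
    params={
        "Recursive": "true",
        "IncludeItemTypes": "Movie,Series",
        "Fields": "ProviderIds",
    },
    timeout=30,
)
resp.raise_for_status()

for item in resp.json().get("Items", []):
    imdb_id = item.get("ProviderIds", {}).get("Imdb")  # may be missing per item
    print(item.get("Name"), imdb_id)
```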

fraydabson,

Oh, and because I'm currently learning to move over to NixOS (which comes with prebuilt packages for Jellyfin and Jellyseerr), it only ships the default rclone, so I'm building a NixOS package for his fork that I'll push to the main repo when it's done!

Sharpiemarker,

Wow that’s living in the future!

fraydabson,

My thoughts exactly when I found it.

xspurnx, in Is It Farewell To The Internet Archive?

I can’t believe these are the times we live in. The services of the Internet Archive are invaluable for scholars and students alike. No library can afford all the printed books/journals or licenses needed for an adequate approach to most topics. And to be honest, shadow libraries are also much needed when publishers lock away vital knowledge (which was often gained through support from public grants).

This seems like just another example of how capitalism will bring about the downfall of our civilization, as it hinders the progress of science.

pelikan, in What are the best alternatives to The Pirate Bay in 2023?

rutracker or 1337x

Auli, in With PLEX blocking Hetzner Hosting, I'm thinking of Moving to Jellyfin, but I have some questions.

Jellyfin can't do the same thing. Well, maybe they could. With Plex, everyone logs in through Plex's servers, so Plex has the IP addresses of all the servers. With Jellyfin, everything is local, so there are no central servers to control who logs in from where.

ace, in Can they even track pirated installs?
@ace@lemmy.ananace.dev avatar

I love their response to (paraphrasing) “Are you going to do another Darth Vader and alter the deal on us in the future?” - “Oh yes, potentially every year.”

Malgas,

Is it just me, or does “we have a proprietary data model that calculates…” sound an awful lot like “we have no actual method of tracking that”?

ace,
@ace@lemmy.ananace.dev avatar

To me it sounds a lot like “We don’t really want to answer that question, so here’s a bit of technobabble to ease your mind.”

I mean, writing your own linked list in C and then summing its values could be considered as having “a proprietary data model that calculates”, but it has basically nothing to do with the question on how they track such things, just hints that they’re not using an existing - and proven - tracking method.

To clarify: they took the question "How are you tracking installs?" to mean "With your tracking data, how are you counting installs?", and then basically answered "We add the numbers together."
This is a complete non-answer, and it suggests that their actual tracking method is likely unreliable.

echodot,

What do you bet they haven't actually figured that part out yet, were just hoping no one would ask, and assumed they'd magically be able to come up with something.

seaturtle,

It sounds like bluffing.

In other words, it could very well be complete and utter bullshit.

Sharpiemarker, in Why WARP is OK for torrenting

What’s WARP?

Kekin,
@Kekin@lemy.lol avatar

A VPN from Cloudflare if I’m not mistaken, with a free tier.

kumu.io/sobeyharker/vpn-relationships#vpn-company…

1.1.1.1

I had forgotten about this. Looks neat still

Bakery7328,

Yes it’s this. I should have put a link.

Sharpiemarker,

Oh cool! Thanks for explaining.

fraydabson, in With PLEX blocking Hetzner Hosting, I'm thinking of Moving to Jellyfin, but I have some questions.

I feel like this could mean plex might do the same thing with real Debrid. Time to move over to Jellyfin. Good timing too since I just started using nix os

EddyBot, (edited) in With PLEX blocking Hetzner Hosting, I'm thinking of Moving to Jellyfin, but I have some questions.

If Jellyfin did such a stupid thing, somebody would fork it into a new project.
In fact, this has already happened in the past: Jellyfin was forked from Emby after they changed their license.

cooopsspace,

Jellyfin is unable to do that because they don’t have centralised auth like Plex does.

Appoxo,
@Appoxo@lemmy.dbzer0.com avatar

Not like it couldn't be implemented, though.
But then the community would (as OP said) fork the project.

cooopsspace, (edited )

There’s absolutely zero way that is going to get pulled into the actual Jellyfin project, hence a fork is unnecessary.

It’s unreasonable to take responsibility for apps a user runs on their server.

But when you suddenly see a heap of Plex IP addresses hitting your provider as part of mass media-sharing rings, you've got problems.

Jellyfin, however, is just serving HTTP/S. That's it. You can't ban Nginx or Apache.

DrQuint, in Is It Farewell To The Internet Archive?

Uh zlib and sci hub are both still alive.

Pearlescence,

Sci-Hub has stopped adding new articles since its court case, and Z-Lib had most of its domains seized by the US. I didn't say they were dead; I was trying to convey that they were attacked and forced to either cease operations or shrink significantly.

janguv,

Right, but Z-Lib is at full strength at this point, and Libgen remained unaffected. Anna's Archive gives extensive coverage of it all.

princessnorah,
@princessnorah@lemmy.blahaj.zone avatar

Can you chuck me a link, can’t find the post.

oozynozh, (edited )
Appoxo,
@Appoxo@lemmy.dbzer0.com avatar

Not like there's a certain onion to connect to it…

Pssk, (edited) in Which cracks and repacks work best on Linux?
@Pssk@lemmy.ml avatar

You can find games like this one on rutracker which require no installation, decompression, or the dwarfs thing, only the files. Just like what you'd have after installing a repack.

EuroNutellaMan,

Semi-related but do you happen to know where I can find the guide to rutracker?

Pssk, (edited )
@Pssk@lemmy.ml avatar

It is not fully private: you can browse without an account but can't use the site's search. If you don't want to create an account, you can use a search engine like DuckDuckGo by adding site:rutracker.org to your query.
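For instance, that workaround is just a matter of appending the site: filter to whatever you'd normally type; a trivial sketch:

```python
# Build a DuckDuckGo search URL scoped to rutracker.org, as described above.
from urllib.parse import quote_plus

def rutracker_search_url(query: str) -> str:
    return "https://duckduckgo.com/?q=" + quote_plus(f"{query} site:rutracker.org")

print(rutracker_search_url("Some Game GOG"))
# -> https://duckduckgo.com/?q=Some+Game+GOG+site%3Arutracker.org
```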

EuroNutellaMan, (edited )

No I mean there was a guide on how to use rutracker or something

EDIT: Turns out it has been deleted, at least the one I found in the past

Draconic_NEO, in Can they even track pirated installs?
@Draconic_NEO@lemmy.dbzer0.com avatar

Not really, they just go by whether the game is selling well, or rather whether it's selling well enough for them. Obviously they have to be careful not to do it too aggressively, otherwise they'll come off as greedy or whiny about poor sales, which isn't a good look on any dev (especially if it's not actually related to piracy, since then it hurts their argument).

They've just been careful enough to only whip out the crybaby arguments when it'll work in their favor and plausibly look like piracy, as opposed to doing it too much or at the wrong time and seeming salty about low sales (to be fair, that's exactly what's happening, but people think they know more about who buys vs. who pirates, rather than who buys vs. who doesn't).

pjhenry1216, in Can they even track pirated installs?

Will probably be enforced via licensing. Maybe even self reported. Probably has a clause giving them permission to perform audits of your sales.

CrypticCoffee,

I doubt they will spend that much time. They'll just state "you owe us X." If you appeal, you have to prove sales from your different channels.

pjhenry1216,

There is no way they'll just make up a bunch of invoices for small developers. That would be too time consuming, plus they'd need to show reasonable effort in determining the invoice. It's best to just let the devs do all the work with the fear that an audit can cost them so much more money than they'd save if they lied.

CrypticCoffee,

They have telemetry. They probably know when a game is downloaded. They probably don’t know if it’s legitimate. They just auto bill based on telemetry and leave devs to dispute or suck the big one. Only effort needs to go into disputes. Big clients will obviously get quicker resolution.

No company would trust devs to be honest about downloads and it would be too expensive to verify.

They don't need to audit much; they just need total download figures from Steam, Epic, and itch.io.

pjhenry1216,

They'd have to make a best effort not to charge devs for pirated copies.

Telemetry is also easily blocked. As a business, I'd trust that a lot less. It's why many enterprise licenses are simply self reported. The punishment isn't worth lying enough to make a difference.

Most companies would trust devs as the devs are not big enough to survive a legal fight they'd certainly lose with prejudice, meaning they'd pay court costs as well.
