my dream is to build my own NAS. it would handle everything i need: it would be a Nextcloud, media server, website host, Matrix server, Minecraft server, and when i’m not doing anything with it at the moment i’ll have it donate its time to seeding and relaying
If you're that passionate about NASes, may I ask how one mitigates data loss if lightning were to strike? Or fire?
I get RAID and all that, but RAID won't help if my data ever gets burnt.
Same with lightning: lightning rods are a thing, so maybe that? Idk what would get damaged if a lightning strike passed through your house, whether over a wire or not, since electromagnetic fields are a thing.
Makes sense, I was hoping for a cheaper answer. Buying land somewhere (because renting a server is the same as cloud storage, isn't it?) is probably expensive.
Sadly I don't, so now I need to talk someone into this… I don't even know who'd be interested. But great idea, though it needs a lot of administrative work. And leaving an open (pwd protected, but still an open port) connection to a storage server 24/7 does not sound very safe.
I’d say it depends on your circumstances and your tolerance to the possibility of data loss. The general answer to the question is that without using some kind of redundancy, either mirrored disks or RAID, the failure of a single disk would mean you lose your data. This is true for each copy of your data that you have.
i’ll have to look more into that. the obvious answer is “keep it off site”, but that only applies if you’re doing backups. if it’s a NAS with several different purposes like the one i want, i’m not actually sure. i’ll keep reading about it
Off-site backup is the proper answer to your question. All this really depends on your own tolerance or comfort with the possibility of losing data. The rule of thumb is that there should be at least three different copies of your data, each in a different physical location. For each of them, there should be redundancy of some kind implemented to guard against hardware failure. Redundancy is typically achieved by using mirrored drives or by using RAID of some kind. Also, if you’d like to know, using RAID in which you can only lose one disk in the array is not typically considered a sufficient level of protection because of the possibility of a cascading drive failure during replacement of a failed disk. It should be at least two.
Drives in a NAS age at about the same rate between them. If you had multiple drives around the same age or from the same manufacturing batch, there’s a higher chance they fail around the same age. After one disk in the array fails, you can insert a new drive and rebuild the array, but during the rebuild, all your drives are in heavier use than normal operation. If you only have one disk redundancy, you’re vulnerable until that rebuild is complete.
Oh wow, makes sense. It's a very slim chance, but not zero. But doesn't a three-way mirror setup have the same vulnerability?
So if the scenario is that we bought two drives of the same type and use them equally, they'll die at the same time. The same holds if we up the number.
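To put rough numbers on that intuition, here's a quick back-of-the-envelope sketch in Python. All the failure rates are made-up assumptions, just to show how much one extra disk of redundancy changes the odds during a rebuild:

```python
# Back-of-the-envelope odds of losing an array during a rebuild.
# Every number here is an assumption for illustration, not a real drive stat,
# and it treats failures as independent; same-batch drives are more correlated than this.

surviving_drives = 5        # drives left in the array after the first failure
rebuild_days = 2            # assumed time to rebuild onto the replacement disk
p_fail_per_day = 0.001      # assumed per-drive failure chance per day under rebuild load

q = (1 - p_fail_per_day) ** rebuild_days             # one drive survives the rebuild window
p_none = q ** surviving_drives                        # no other drive fails
p_exactly_one = surviving_drives * (1 - q) * q ** (surviving_drives - 1)

print(f"Second failure during rebuild (kills one-disk redundancy):  {1 - p_none:.2%}")
print(f"Two more failures during rebuild (kills two-disk redundancy): {1 - p_none - p_exactly_one:.4%}")
```

With these made-up numbers the one-disk array is roughly 300 times more likely to die during the rebuild than the two-disk one, which is the reasoning behind the "at least two" rule above.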
Search for DVDRip, filter by upload date and check for ones that aren’t well seeded. See if you can find ones that don’t have any other rips available, and seed whatever looks at-risk. Godspeed!
plex_debrid looks like the way to go. I was put off by its name because I use Jellyfin, but reading more closely it works with Jellyfin, so cutting out the middleman (Radarr) seems like a very good solution!
Is there anything special to know about it before trying myself? Any issues or roadblocks you had when setting this up?
Yeah I’ve tried it with both plex and Jellyfin. Plex has a more seamless experience with discovering new shows and movies and adding to wish list to see it available instantly. Though as I go more FOSS, I think it’s time to move back to Jellyfin. With Jellyseerr to find and request new media. It’s all pretty simple!
I use it with docker along with his fork of rclone with realdebrid support also in docker. Works great! I’d say my biggest annoyance is sometimes realdebrid gets a weird title of a show or movie which doesn’t mix well with how his rclone fork works.
His fork uses regex to parse names and move them into either a movie or TV show directory. If it can't decide which one a name belongs to, it goes to a default directory. Plex handles this better than Jellyfin from what I remember: you would add the tv and default folders to your TV library, and the movie and default folders to your movie library. Jellyfin gets confused, and there's no way to rename or move the item, so you just need to find a different torrent. You can customize the regex and scraping profiles and even integrate it with torrentio.
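Just to illustrate the kind of regex sorting involved, here's my own rough sketch of the idea. It's not the actual logic or folder names from his rclone fork (the real patterns are configurable anyway):

```python
import re

# Rough sketch: classify a release name as a TV show or a movie.
# Names with a SxxEyy/season marker go to "shows", names with a year go to "movies",
# anything ambiguous falls through to "default" (which is where the confusion starts).
TV_PATTERN = re.compile(r"\b(S\d{1,2}E\d{1,2}|Season[ ._-]?\d{1,2})\b", re.IGNORECASE)
MOVIE_PATTERN = re.compile(r"\b(19|20)\d{2}\b")

def classify(name: str) -> str:
    if TV_PATTERN.search(name):
        return "shows"
    if MOVIE_PATTERN.search(name):
        return "movies"
    return "default"

print(classify("Some.Show.S01E04.1080p.WEB"))    # shows
print(classify("Some.Movie.2019.2160p.BluRay"))  # movies
print(classify("Weirdly.Named.Release"))         # default, and Jellyfin gets confused
```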
But exciting news! On August 27 he announced a beta update for his rclone fork that will allow renaming/moving files and folders, as well as creating new folders and deleting parts of torrents instead of the whole thing. This is huge and will make operating it through Jellyfin much easier and prevent the issue I mentioned.
Interesting. I installed it here but I may be doing something wrong with my setup, because just using Jellyseerr is not triggering an RD download. First, using Jellyseerr required Radarr anyway, so the setup is like the following:
Jellyseerr requests a movie/show and puts that request through Radarr
Radarr tries to search for a torrent via indexers (not working right now)
Download happens via a torrent black hole, which is basically the plex_debrid folders being monitored
Still, it's not working, as things aren't communicating with each other correctly and I didn't set up any indexers (or set up Jackett).
My setup feels wrong or too complex, can you give a bit more detail on yours? How do the parts communicate? :)
Hmmm. I don’t use radarr but this is how my workflow was for Jellyfin and jellyseerr
These are the steps from his GitHub but for my Jellyfin setup.
Mount your Real-Debrid account via API token and rclone_RD. You know you did this right when you can browse the new Real-Debrid mounted directory and see all the shows and movies currently in your Debrid account.
Set up Jellyfin as normal, making sure to set up your libraries to use the Debrid mount: so tv shows and default for the TV library, and movies and default for the movies library.
Launch the plex_debrid main .py file and go through the configuration. Example:
First you choose a content service, which for you would be jellyseerr.
Next you need a library collection service (which might be the confusion): you do need to use either Trakt or Plex so that plex_debrid knows what you currently have in your library. Given you're doing a Jellyfin setup it's probably best to use Trakt, which means you need to hook Trakt up to your Jellyfin library so it knows what you've already downloaded. If I'm remembering correctly, this plugin is how I did Trakt + Jellyfin: github.com/jellyfin/jellyfin-plugin-trakt
So now when you add a request via jellyseerr, plex_debrid will first scan your Trakt library to see if you already have it. If it doesn't find it, it will push your request directly into your Debrid account after scraping for the best torrent.
The next step in the plex_debrid setup is the library update service, which you can set to Jellyfin, so that once Real-Debrid caches your torrent it will force a refresh of your full Jellyfin library to scan for new content.
Then there are a few optional steps I'll explain below, but the last important step is **Debrid services**, which is when you tell plex_debrid what your Real-Debrid account is via API key.
So the full workflow would be: request a TV show or movie via jellyseerr, which checks Trakt to see if you have it already and, if not, pushes it to torrentio to find a torrent for your request. Once found, it uses your Debrid API key to automatically load the torrent into Real-Debrid. It waits for Debrid to finish downloading and, once complete, refreshes your Jellyfin library and then you can watch it.
To clarify, for my identical setup you wouldn't be using Radarr or a black hole. The plex_debrid Python script takes care of that for you.
That's really it, the rest is optional to configure. For the library ignore service you can use a local ignore list, a Trakt library, or a local file, and it knows what you've watched and doesn't try to get it again.
The next optional step is scraper services; I usually leave this as the default, which scrapes using torrentio.
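If it helps to see it end to end, the whole loop described above boils down to something like this. It's a hand-wavy Python sketch where every function is a placeholder I made up for the corresponding service, not plex_debrid's real internals:

```python
import time

# Hand-wavy sketch of the loop described above. Every function is a made-up
# placeholder standing in for a real integration (jellyseerr, Trakt, torrentio,
# Real-Debrid, Jellyfin); this is not plex_debrid's actual code.

def fetch_requests_from_jellyseerr():
    return []        # placeholder: pending requests from the content service

def already_in_trakt_library(item):
    return False     # placeholder: library collection service check

def scrape_torrentio(item):
    return item      # placeholder: pick the "best" torrent for the request

def add_to_real_debrid(torrent):
    pass             # placeholder: push the torrent via your Real-Debrid API key

def is_cached_on_debrid(torrent):
    return True      # placeholder: poll until Debrid has finished/cached it

def refresh_jellyfin_library():
    pass             # placeholder: library update service, triggers a full scan

def run_once():
    for item in fetch_requests_from_jellyseerr():   # 1. content service (jellyseerr)
        if already_in_trakt_library(item):          # 2. skip what you already have
            continue
        torrent = scrape_torrentio(item)            # 3. scraper service (torrentio)
        add_to_real_debrid(torrent)                 # 4. debrid service
        while not is_cached_on_debrid(torrent):     # 5. wait for the download
            time.sleep(30)
        refresh_jellyfin_library()                  # 6. library update service (Jellyfin)

if __name__ == "__main__":
    while True:
        run_once()
        time.sleep(60)
```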
Awesome, and thanks a lot for putting in the time to explain it like this.
So for some reason I got sidetracked with Radarr and didn't see the need for Trakt anywhere, but that seems to be the missing piece in all of this.
This also shows that the Plex workflow is more seamless (no Overseerr/Jellyseerr needed, no Trakt needed) than Jellyfin's right now.
Reading the plex_debrid code, it seems to have some initial code for scanning the current Jellyfin library, so finishing that code could remove the need for Trakt.
Now, one advantage of using Radarr is that it moves and renames incoming files to a standard naming scheme; I think for this feature alone it's worth keeping it in the workflow.
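For reference, the kind of standard naming I mean is roughly the "Title (Year)" layout Jellyfin likes. A toy sketch of that renaming step (purely illustrative, not Radarr's actual logic or templates):

```python
from pathlib import Path

def standard_movie_path(title: str, year: int, source: Path) -> Path:
    # Toy example of a Jellyfin-friendly layout: Movies/Title (Year)/Title (Year).ext
    folder = f"{title} ({year})"
    return Path("Movies") / folder / f"{folder}{source.suffix}"

print(standard_movie_path("Some Movie", 2019, Path("Some.Movie.2019.2160p.mkv")))
# Movies/Some Movie (2019)/Some Movie (2019).mkv
```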
So it seems like I'll need to fix plex_debrid to understand an existing Jellyfin library and remove the need for Trakt!
Awesome stuff! If you do fork or PR for seamless Jellyfin integration let me know! That’d be awesome. I know he’s been super busy lately and hasn’t been able to update as much as he wants.
Oh, and because I'm currently learning to move over to NixOS (which comes with prebuilt packages for Jellyfin and jellyseerr), it has the default rclone, but I'm building a NixOS package for his fork that I'll push to the main repo when it's done!
I can’t believe these are the times we live in. The services of the Internet Archive are invaluable for scholars and students alike. No library can afford all the printed books/journals or licenses needed for an adequate approach to most topics. And to be honest, shadow libraries are also much needed when publishers lock away vital knowledge (which was often gained through support from public grants).
This seems just another example of how capitalism will bring about the downfall of our civilization as it hinders the progress of science.
Jellyfin can't do the same thing. Well, maybe they could. With Plex, everyone logs in through Plex's servers, and Plex has the IP addresses of all the servers. With Jellyfin everything is local, so there are no central servers to control who logs in from where.
I love their response to (paraphrasing) “Are you going to do another Darth Vader and alter the deal on us in the future?” - “Oh yes, potentially every year.”
To me it sounds a lot like “We don’t really want to answer that question, so here’s a bit of technobabble to ease your mind.”
I mean, writing your own linked list in C and then summing its values could be considered as having “a proprietary data model that calculates”, but it has basically nothing to do with the question on how they track such things, just hints that they’re not using an existing - and proven - tracking method.
To clarify: they took the question "How are you tracking installs?" to mean "With your tracking data, how are you counting installs?", and then basically answered "We add the numbers together".
This is a complete non-answer, and it seems to suggest that their actual tracking method is likely unreliable.
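For the sake of the joke, that "proprietary data model that calculates" fits in about a dozen lines. The quip above was about C, but a Python sketch makes the same point:

```python
# A hand-rolled linked list plus a sum: technically "a proprietary data model
# that calculates". It still says nothing about how the numbers were collected.
class Node:
    def __init__(self, value, nxt=None):
        self.value = value
        self.nxt = nxt

def total(head):
    count = 0
    while head is not None:
        count += head.value
        head = head.nxt
    return count

installs = Node(3, Node(5, Node(4)))   # made-up per-store "install" counts
print(total(installs))                 # 12
```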
What do you bet they haven't actually figured that part out yet and were just hoping no one would ask, and that they'd then magically be able to come up with something.
I feel like this could mean Plex might do the same thing with Real-Debrid. Time to move over to Jellyfin. Good timing too, since I just started using NixOS.
If Jellyfin did such a stupid thing, somebody would fork it into a new project.
In fact this has already happened: Jellyfin was forked from Emby after they changed their license.
Sci-Hub stopped adding new articles since its court case and Z-Lib had most of its domains seized by the US. I didn’t say they were dead, but tried to convey that they were attacked and forced to either cease their operations or shrink significantly.
You can find games like this one on rutracker that require no installation, decompression or dwarfs stuff, only the files. Just like what you'd have after installing a repack.
It is not fully private: you can browse without an account but can't use the site's search. If you don't want to create an account you can use a search engine like DuckDuckGo, adding site:rutracker.org to your query.
Not really, they just go by whether the game isn't selling well, or rather isn't selling well enough for them. Obviously they have to be careful not to do it too aggressively, otherwise they'll come off as greedy or whiny about poor sales, which isn't a good look on any dev (especially if it's not actually related to piracy, since then it hurts their argument).
They've just been careful enough to only whip out the crybaby arguments when it'll work in their favor and can plausibly be pinned on piracy, as opposed to doing it too often or at the wrong time and seeming salty about low sales (to be fair that's exactly what's happening, but people think they know more about who buys vs who pirates, rather than who buys vs who doesn't).
There is no way they'll just make up a bunch of invoices for small developers. That would be too time consuming, plus they'd need to show reasonable effort in determining the invoice. It's best to just let the devs do all the work with the fear that an audit can cost them so much more money than they'd save if they lied.
They have telemetry. They probably know when a game is downloaded. They probably don’t know if it’s legitimate. They just auto bill based on telemetry and leave devs to dispute or suck the big one. Only effort needs to go into disputes. Big clients will obviously get quicker resolution.
No company would trust devs to be honest about downloads and it would be too expensive to verify.
They don't need to audit much, they just need Steam, Epic, and itch total download figures.
They'd have to make a best effort not to charge devs for pirated copies.
Telemetry is also easily blocked. As a business, I'd trust that a lot less. It's why many enterprise licenses are simply self reported. The punishment isn't worth lying enough to make a difference.
Most companies would trust devs as the devs are not big enough to survive a legal fight they'd certainly lose with prejudice, meaning they'd pay court costs as well.