And yet, when libgen was broken a couple of days ago, it sent me to the broken libgen for the (admittedly obscure) thing I was after. Caching, perhaps? I dunno. Still, glad it’s there…
It has to be executed to pose any danger, so you’d need a zero-day exploit for your media player; even then, it should be contained at the user level rather than the system level. I’ve not really heard of it happening, but it’s theoretically possible, I guess; it would take a really bad coding mistake. Keep your players updated and you should be fine.
Consider containers. Gluetun makes it easy to establish a WireGuard connection to Nord; then run the qBittorrent container on the network Gluetun provides, and do the same for all your *arrs. Safer, faster, self-contained. Connect your web browser to Gluetun’s proxy too. Something like the sketch below. Just sayin’
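A minimal docker-compose sketch of the idea. The env var names follow Gluetun’s docs for NordVPN over WireGuard, but the key, country, ports, and image tags here are placeholders you’d adapt:

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN                      # needed to create the tunnel
    environment:
      - VPN_SERVICE_PROVIDER=nordvpn
      - VPN_TYPE=wireguard
      - WIREGUARD_PRIVATE_KEY=<your-key>   # from your Nord account
      - SERVER_COUNTRIES=Netherlands       # example; pick your own
      - HTTPPROXY=on                       # browser proxy on :8888
    ports:
      - 8080:8080   # qBittorrent web UI, exposed via gluetun
      - 8888:8888   # HTTP proxy for your browser

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    network_mode: "service:gluetun"   # all traffic routes through the VPN
    environment:
      - WEBUI_PORT=8080
    depends_on:
      - gluetun
```

If Gluetun’s tunnel drops, containers on its network stack lose connectivity entirely, which is the whole point: no leaks. Your *arrs get the same `network_mode: "service:gluetun"` line.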
I was thinking more of training the base models, LLaMA(-2), and more topically GPT-4, etc. You’re doing LoRA fine-tuning or augmenting with a local corpus of documents, no?
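For contrast, the LoRA side looks roughly like this with Hugging Face’s peft library (model name is just an example); the point being that only a tiny adapter is trained, never the base weights:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Example base model; swap in whatever fits your hardware.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# Low-rank adapters injected into the attention projections.
config = LoraConfig(r=8, lora_alpha=16,
                    target_modules=["q_proj", "v_proj"],
                    lora_dropout=0.05)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of total weights
```

That fits on a single consumer GPU, which is a different universe from pretraining the base model itself.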
Akshually, while training models requires (at the moment) massive parallelization and consequently stacks of A100s, inference can be distributed pretty well (see Petals, for example). A pirate ‘ChatGPT’ network of people sharing consumer graphics cards could indeed work if the data was sourced. It bears thinking about. It really does.
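For the curious, Petals’ quickstart is roughly this (model name is an example); the transformer blocks are served by volunteers’ GPUs over the network while your machine runs only the embeddings:

```python
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "petals-team/StableBeluga2"  # example; any Petals-hosted model works
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Weights are sharded across the swarm; nobody needs the full 70B locally.
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("A pirate ChatGPT would", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0]))
```

It’s BitTorrent logic applied to inference: each peer holds a slice of the layers and activations hop between them.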