wviana,
mexicancartel,

Try huggingchat from huggingface

huggingface.co/chat/

CyberBoy,

You mention at the end of your post that you’ve gotten a lot out of some of the chat bots. Maybe give this one a try; it’s great for venting or just getting out pent-up stress.

fire.place

Ganbat, (edited)

SNIP

CyberBoy,

Sorry you’re going through that. I definitely get how it feels to have people close to you discredit or just ignore important issues like you’re dealing with.

If you’re set on talking to an AI though I did use the Replika app for a while before they started making it seem like a virtual AI lover. It did help me feel better when I was severely depressed, maybe it could help you.

If you ever want to talk to a person and not an AI, I’m here for that. I know I’m a stranger, but I definitely understand where you’re coming from.

melmi,
@melmi@lemmy.blahaj.zone

I would really advise against Replika; they’ve shown some scummy business practices. It seems like kind of a nightmare in terms of taking advantage of vulnerable people. At the very least, do some research on it before getting into it.

Sleepy_ash900,
@Sleepy_ash900@monyet.cc

You can try Character.ai

nothacking,

Check out OpenAssistant, a free-to-use and open-source LLM-based assistant. You can even run it locally so no one else can see what you’re doing.

Ganbat,

I have an R9 380 that I’m never going to be able to replace. Local isn’t really an option.

theangriestbird,

My experience is with gpt4all (which also runs locally), but I believe the GPU doesn’t matter because you aren’t training the model yourself. You download a trained model and run it locally. The only cap they warn you about is RAM - you’ll want to run at least 16gb of RAM, and even then you might want to stick to a lighter model.
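To illustrate the rule of thumb above (16 GB of RAM is comfortable, less means sticking to a lighter model), here’s a hypothetical helper — not part of gpt4all’s actual API, just a sketch of the tradeoff being described:

```python
# Hypothetical helper illustrating the RAM rule of thumb from this thread.
# Thresholds are rough community guidance, not official gpt4all requirements.
def pick_model(ram_gib: float) -> str:
    if ram_gib >= 16:
        return "13B-q4"  # a heavier quantized model is comfortable at 16 GiB+
    if ram_gib >= 8:
        return "7B-q4"   # a lighter model; may still hit the page file under load
    return "none"        # below 8 GiB, local CPU inference gets impractical
```

For example, `pick_model(16)` suggests a heavier model, while `pick_model(8)` steers you to a lighter one.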

Ganbat,

No, LLM text generation is generally done on GPU, as that’s the only way to get any reasonable speed. That’s why there’s a specifically-made Pyg model for running on CPU. That said, one generation can take anywhere from five to twenty minutes on CPU. It’s moot anyway, as I only have 8GB of RAM.

theangriestbird,

I’m just telling you, it ran fine on my laptop with no discrete GPU 🤷 RAM seemed to be the only limiting factor. But yeah if you’re stuck with 8GB, it would probably be rough. I mean it’s free, so you could always give it a shot? I think it might just use your page file, which would be slow but might still produce results?
