aldalire,

I was thinking the same thing. Do you think there'd be a way to take an existing model and pool our computational resources to produce a result?

All the AI models right now assume there is one beefy computer doing the inference, rather than multiple computers working in parallel. I wonder if there's a way to "hack" existing models so that inference can be split across multiple machines.

Or maybe a new type of AI model should be developed specifically to support this. But yes, getting the models is half the battle. The other half will be figuring out how to pool our computation to run the thing.
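For what it's worth, one existing approach to this is pipeline parallelism: split the model's layers into stages and put each stage on a different machine, passing activations between them. Here's a minimal toy sketch of the idea — the stages and the "model" are made up for illustration, and the network hop between machines is simulated with a plain function call (in practice it would be a socket or RPC):

```python
# Toy sketch of pipeline parallelism: each stage stands in for a chunk of
# a model's layers and could live on a separate machine. The hand-off
# between stages simulates sending activations over the network.

def stage_a(x):
    # "Machine 1": runs the early layers (here, just doubling each value)
    return [v * 2 for v in x]

def stage_b(activations):
    # "Machine 2": runs the later layers (here, just summing)
    return sum(activations)

def run_pipeline(inputs):
    # In a real setup, stage_a's output would be serialized and shipped
    # to the host running stage_b instead of passed in-process.
    activations = stage_a(inputs)
    return stage_b(activations)

print(run_pipeline([1, 2, 3]))  # 12
```

Projects like this exist for LLMs already (e.g. petals, which runs large models across volunteer machines), so it's less "hack the model" and more "route the layers" — the hard part, as you say, is coordinating everyone's compute.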
