Nioxic,

Can we be sure? An AI only knows stuff it's trained on?

lvxferre,
@lvxferre@mander.xyz avatar

I remember Angela Collier talking about this topic, but basically: the “AI” in question is a different beast from the “AI” in chatbots and image generators. The underlying tech is the same (artificial neural networks), but instead of making the bot mimic human output, you’re asking it to flag things worth a closer look.

So for example, you feed it with two sets of data:

  1. a bunch of pics of completely normal astronomical objects
  2. a bunch of pics of anomalous astronomical objects

Then you “ask” the bot to assign new pictures (not present in either set) to one of those sets.
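Roughly what that looks like in practice is a plain binary image classifier. Here’s a minimal sketch in Python/PyTorch; the folder layout, the class names “normal”/“anomalous”, the ResNet backbone and the training settings are all my own illustrative assumptions, not anything the astronomers necessarily use:

```python
# Minimal sketch of the two-set idea: train a binary image classifier on
# "normal" vs "anomalous" examples, then score images it has never seen.
# Paths, class names, backbone and hyperparameters are hypothetical.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Expects: train/normal/*.png and train/anomalous/*.png (made-up layout)
train_set = datasets.ImageFolder("train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Small pretrained backbone with a 2-class head
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):                      # a token number of epochs
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

# "Ask" the model about a picture
model.eval()
with torch.no_grad():
    new_image, _ = train_set[0]             # stand-in for a fresh telescope image
    probs = torch.softmax(model(new_image.unsqueeze(0)), dim=1)
    print(dict(zip(train_set.classes, probs.squeeze().tolist())))
```

The output is just a probability for each of the two sets, and anything leaning “anomalous” gets handed to a human.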

In my opinion it’s one of the best ways to use the new tech. If there’s a false positive, nobody is harmed: the researcher will simply investigate the pic, see there’s nothing worth noting there, say “dumb clanker”, and move on. Ideally you don’t want false negatives, but if they do happen, you’re missing things you’d already miss anyway, because there’s no way people would trawl through all those pics by hand.
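That asymmetry also tells you how to tune the thing: you deliberately set a low bar for flagging, so you miss less at the cost of more harmless pics for a human to glance at. A tiny self-contained sketch, with made-up probabilities and an arbitrary 0.2 cutoff purely for illustration:

```python
# Illustrative only: lowering the decision threshold for "anomalous" catches
# more real anomalies (fewer false negatives) at the cost of more false
# positives, which a researcher can quickly dismiss.
candidates = {"img_001": 0.03, "img_002": 0.27, "img_003": 0.81}  # P(anomalous), made up
threshold = 0.2   # deliberately low: better to over-flag than to miss things

for name, p_anomalous in candidates.items():
    if p_anomalous > threshold:
        print(f"{name}: worth a human look (p={p_anomalous})")
    else:
        print(f"{name}: probably nothing")
```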

It also skips a few issues associated with chatbots and image generators, like:

  • since it’s “trained” for a specific purpose, it isn’t DDoSing sites for training “data”. It’s all from the telescope, AFAIK in the public domain.
  • no massive training = no massive water/energy cost.
  • no concerns related to authorship or whatever.
kalkulat,
@kalkulat@lemmy.world avatar

Richard Stallman sez we should call it ‘PI’ … pretend intelligence.
