A Real Whodunnit – (AI)’ll Tell You, That’s Who!


Do you remember sitting through presentations given by fellow students back in your school days? How many times did you catch the teacher at their desk, shaking their head at incorrect information being confidently presented by students who had probably scrambled to put the slides together the night before? More than once, probably. Now imagine those students are AI models and the teacher is a hustling gig worker with their hands already full.

That’s precisely what has happened at the Swiss Federal Institute of Technology (EPFL). In an experiment to determine how much crowdsourced workers rely on AI, the institute hired 44 people from the crowdworking platform Amazon Mechanical Turk for an abstract summarization task. It found that up to 46% of the submissions could have been completed using AI models such as ChatGPT. How did they find out? Using their own LLM-content-detecting AI model, of course!
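As a rough illustration of how such a detector can work (a generic sketch, not the model EPFL actually built), one common approach is to train an ordinary text classifier on examples labeled human-written versus AI-written, then flag new submissions the classifier scores as likely machine-generated. The sample texts and names in the Python sketch below are purely hypothetical.

```python
# Purely illustrative sketch: a simple "is this AI-written?" text classifier.
# This is NOT the detector EPFL built; the example summaries are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled training data (1 = AI-generated, 0 = human-written).
texts = [
    "In this paper, the authors present a comprehensive analysis of the topic.",
    "This work provides a detailed overview of the proposed framework.",
    "honestly the paper is about how sleep helps you remember stuff better",
    "basically they tested the model on three datasets and it did okay",
]
labels = [1, 1, 0, 0]

# TF-IDF word/bigram features feeding a logistic regression classifier.
detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(texts, labels)

# Flag a new submission if the estimated probability of "AI-written" is high.
new_summary = "The authors propose a novel approach to abstract summarization."
ai_probability = detector.predict_proba([new_summary])[0][1]
print(f"Estimated probability this summary was AI-written: {ai_probability:.2f}")
```

Real detectors are trained on far larger datasets and typically use fine-tuned language models rather than word counts, but the basic idea, classifying text by its statistical fingerprints, is the same.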

Here is why that matters. It is common practice for AI companies to pass on the tasks used to prepare training data for AI models, such as labeling, solving CAPTCHAs, or summarizing texts, to people on crowdworking platforms. If those workers use LLMs, which are prone to generating inaccurate information, those errors get passed on to the AI models being trained. And if those models are then used to train other models, the errors multiply, and suddenly you have a multitude of defective clone AI models all pointing at each other, not knowing who started it.

AI models learning the same errors in training data from each other.

With LLMs becoming increasingly popular and with gig workers having to rely on high productivity for a sustainable income, turning to generative AI models may become common practice. There is a danger of errors multiplying in general-purpose LLMs even when the output is not directly used to train AI. What if a summary delivered by the experiment's workers contained false information and was published online? Other LLMs that draw on open web sources would then reproduce the same mistakes.

Now, we know what you may be thinking: isn't this part of why AI was created in the first place? To boost productivity, write creative copy, and automate mundane tasks such as summarization? Well, yes, but not when it comes to training other AI models. That is not what AI is for!

Human-generated data is essential to training AI models, and if we remove ourselves from that loop, we risk making reliable human-generated data much harder to come by. A human in the loop is always vital for evaluating output, so AI models can understand what people actually want and learn from it.

Soon, running work through specialized AI models to verify that it is human-generated may become standard practice, whether for written articles (feel free to run this one through a detector), artwork, or even school assignments. Otherwise, we might as well have learned from our classmates' scrappy presentations back in school.

