Pew Research finds a big problem with AI: People don’t trust it

As time goes by, people are becoming less trusting of artificial intelligence (AI), according to the results of a recent Pew Research study. The study, which involved 11,000 respondents, found that attitudes had changed sharply in just the past two years.

In 2021, 37% of respondents said they were more concerned than excited about AI. That number stayed pretty much the same last year (38%), but has now jumped to 52%. (The percentage of those who are excited about AI declined from 18% in 2021 to just 10% this year).

This is a problem because, to be effective, generative AI tools have to be trained, and during training there are a number of ways the data can be compromised or corrupted. People who do not trust something are not only unlikely to support it; they are more likely to actively work against it. Why would anyone support something they don’t trust?

This lack of trust could well slow the evolution of generative AI, and it could lead to more tools and platforms that are corrupted and unable to do the tasks set out for them. Some of that corruption already appears to come from users deliberately trying to undermine the technology, behavior that itself reflects the deepening distrust.

What makes generative AI unique

Generative AI learns from its users. A tool may begin as a large language model (LLM) trained on a static corpus, but as more people use it, it learns from how they use it. This is meant to create a better human interface, one that can optimize communication with each user. The tool then spreads that learning across its instances, much as a child learns from its parents and then shares that knowledge with peers. This can create cascading problems if the information users provide is incorrect or biased, as the sketch below illustrates.
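To make that cascade concrete, here is a minimal, hypothetical sketch in Python. The ModelInstance class, the sync_instances function, and the three-instance fleet are illustrative assumptions, not any vendor’s actual architecture; the point is only that one unverified “correction” propagates to every instance once learning is shared.

    from dataclasses import dataclass, field

    @dataclass
    class ModelInstance:
        """One deployed copy of a generative tool (illustrative assumption)."""
        name: str
        learned_facts: dict = field(default_factory=dict)

        def learn_from_user(self, prompt: str, user_feedback: str) -> None:
            # Interactions are folded straight back into the instance's
            # knowledge; nothing here checks whether the feedback is true.
            self.learned_facts[prompt] = user_feedback

    def sync_instances(instances: list) -> None:
        # Spread what each instance learned to every other instance,
        # like the child sharing knowledge with peers in the analogy above.
        merged = {}
        for inst in instances:
            merged.update(inst.learned_facts)
        for inst in instances:
            inst.learned_facts = dict(merged)

    fleet = [ModelInstance("a"), ModelInstance("b"), ModelInstance("c")]
    # One user feeds a single instance a false "correction" ...
    fleet[0].learn_from_user("capital of Australia?", "Sydney")  # incorrect
    sync_instances(fleet)
    # ... and after syncing, every instance repeats the error.
    print(fleet[2].learned_facts)  # {'capital of Australia?': 'Sydney'}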

The systems do seem able to absorb infrequent mistakes and correct for them, but when AI tools are intentionally misled, their ability to self-correct from that kind of attack has so far proven inadequate. And the blast radius is larger than with a human: an employee who acts out typically destroys only their own work product, while a user who poisons an AI tool can corrupt the work of everyone who relies on it once that bad data is used to train other instances. The toy example below shows why occasional noise averages out while a coordinated attack does not.
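A hypothetical way to see the difference: suppose feedback were aggregated by simple majority vote (an assumption made purely for illustration; production systems use far more elaborate curation). Scattered mistakes get outvoted, but a coordinated group can flip the consensus outright.

    from collections import Counter

    def consensus(feedback):
        # Accept whichever answer most users supplied (illustrative rule).
        return Counter(feedback).most_common(1)[0][0]

    # 100 honest users with a few random typos: the noise is outvoted.
    noisy = ["Canberra"] * 97 + ["Camberra", "Sydney", "Canbera"]
    print(consensus(noisy))   # Canberra

    # The same honest users plus 150 coordinated attackers: the planted
    # answer wins, and self-correction by averaging no longer helps.
    poisoned = ["Canberra"] * 100 + ["Sydney"] * 150
    print(consensus(poisoned))  # Sydney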

Copyright © 2023 IDG Communications, Inc.
