As time goes by, people are becoming less trusting of artificial intelligence (AI), according to the results of a recent Pew Research Center study. The study, which involved about 11,000 respondents, found that attitudes have changed sharply in just the past two years.
In 2021, 37% of respondents said they were more concerned than excited about AI. That number stayed pretty much the same last year (38%), but has now jumped to 52%. (The percentage of those who are excited about AI declined from 18% in 2021 to just 10% this year.)
This is a problem because, to be effective, generative AI tools have to be trained, and during training there are a number of ways the data can be compromised or corrupted. If people do not trust something, they are not only unlikely to support it, they are more likely to act against it. After all, why would anyone support something they don't trust?
This lack of trust could well slow the evolution of generative AI, possibly leading to more tools and platforms that are corrupted and unable to do the tasks set out for them. Some of these issues appear to stem from users intentionally trying to undermine the technology, which only underscores the problem.
What makes generative AI unique
Generative AI learns from users. It might initially be trained using large language models (LLMs), but as more people use the tools, they learn from that usage. This is meant to create a better human interface that can optimize communication with each user. The AI then takes this learning and spreads it across its instances, much as a child learns from its parents and then shares that knowledge with peers. This can create cascading problems if the information being provided is incorrect or biased.
The systems do seem able to handle infrequent mistakes and adjust to correct them, but if AI tools are intentionally misled, their ability to self-correct from that type of attack has so far proven inadequate. This isn't like an employee who acts out and destroys only their own work product; with AI, a misbehaving employee could corrupt the work of anyone using the tool once that employee's bad data is used to train other instances.
This suggests that an employee who undercuts genAI tools could do significant damage to their company beyond just the tasks they’re doing.
Why trust matters
People who are worried about losing their job typically do not do a good job of training a potential replacement, for fear they'll be terminated once the training is done. If those same people are asked to train AI tools and fear they're being replaced, they could either refuse to do the training or sabotage it so the tools cannot take over their jobs.
That’s where we are now. There is little in the media about how AI tools will help users achieve a better work/life balance, become more productive without doing more work, and (if properly trained) make fewer mistakes. Instead, we get a regular litany of stories about how AI will take jobs, how it throws all kinds of errors, and how it will be used to hurt people.
No wonder people are wary of it.
Change is hard (and risky)
IT types have long known that rolling out technology users don’t like is tricky — because users will either avoid it or seek to break it. This is particularly troubling for a technology that will eventually affect every white-collar job and some blue-collar jobs in a company. Positioning AI tools as employee aids, not employee replacements, and highlighting how those who properly use the technology are more likely to get ahead, would go a long way toward ensuring employees are on board with the technology and will help it mature.
Forcing new technology on employees who don’t want it is always risky, and if workers believe it will cost them their jobs, forcing it on them can be a big mistake. Rolling out genAI right should include a significant effort to get employees excited about, and supportive of, the technology. Otherwise, not only will the deployment fall short of expectations, it could end up doing substantial damage to the companies trying to embrace it.
Copyright © 2023 IDG Communications, Inc.