
UK regulator outlines AI foundation model principles, warns of potential harm

The UK’s Competition and Markets Authority (CMA) has warned about the potential risks of artificial intelligence in its newly published review into AI foundation models.

Foundation models are AI systems that have been trained on massive, unlabeled data sets. They underpin large language models — like OpenAI’s GPT-4 and Google’s PaLM — for generative AI applications like ChatGPT, and can be used for a wide range of tasks, such as translating text and analyzing medical images.
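
To make the “wide range of tasks” point concrete, the sketch below (not from the CMA report) shows one pre-trained model reused for two different downstream tasks; the Hugging Face transformers library and the open t5-small checkpoint are illustrative assumptions, not anything the report names.

```python
# One foundation model, two downstream tasks: an illustration of why
# ready access to pre-trained checkpoints matters. Assumes the Hugging
# Face transformers library; t5-small is a stand-in chosen for illustration.
from transformers import pipeline

# Task 1: English-to-French translation with the pre-trained checkpoint.
translator = pipeline("translation_en_to_fr", model="t5-small")
print(translator("Foundation models can be reused across tasks.")[0]["translation_text"])

# Task 2: summarization, built on the very same pre-trained weights.
summarizer = pipeline("summarization", model="t5-small")
text = ("The UK's Competition and Markets Authority published a review of AI "
        "foundation models and proposed principles to guide their development.")
print(summarizer(text, max_length=20, min_length=5)[0]["summary_text"])
```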

The new report proposes a number of principles to guide the ongoing development and use of foundation models, drawing on input from 70 stakeholders (including developers, businesses, consumer and industry organizations, and academics) as well as publicly available information.

The proposed principles are:

  • Accountability: AI foundation model developers and deployers are accountable for outputs provided to consumers.
  • Access: Ongoing ready access to key inputs, without unnecessary restrictions.
  • Diversity: Sustained diversity of business models, including both open and closed.
  • Choice: Sufficient choice for businesses so they can decide how to use foundation models.
  • Flexibility: Businesses have the flexibility to switch between, or use, multiple foundation models according to need.
  • Fair dealing: No anticompetitive conduct including self-preferencing, tying or bundling.
  • Transparency: Consumers and businesses are given information about the risks and limitations of foundation model-generated content so they can make informed choices.

Poorly developed AI models could lead to societal harm

While the CMA report highlights how people and businesses stand to benefit from well-developed, correctly implemented foundation models, it cautioned that weak competition, or AI developers failing to comply with consumer protection law, could lead to societal harm. Examples given include citizens being exposed to “significant levels” of false and misleading information, and AI-enabled fraud.

The CMA also warned that, in the longer term, market dominance by a small number of firms could raise competition concerns, with established players using foundation models to entrench their position and deliver overpriced or poor-quality products and services.

Copyright © 2023 IDG Communications, Inc.
