UK authorities have come up with new proposals to regulate artificial intelligence technologies to support businesses and increase public confidence.
According to the document, the new rules aim to mitigate future risks and clarify the rights and opportunities of AI developers. The regulations are also intended to assure consumers that such technologies are safe, transparent and reliable.
The new approach focuses on six key principles intended to support growth and remove unnecessary barriers for business. Developers of AI systems must ensure:
- the safety of AI use;
- compliance with declared capabilities and quality of implementation;
- transparency of algorithms;
- fairness and impartiality;
- a legal person responsible for AI decisions;
- routes to redress for harm caused by algorithms.
These principles will be interpreted and implemented by regulators such as Ofcom, the Competition and Markets Authority, the Information Commissioner's Office and the Financial Conduct Authority.
According to digital minister Damian Collins, the government wants to expand job opportunities and protect people’s rights.
“It is critical that our rules are open to business, provide confidence to investors and increase public confidence. Our flexible approach will help shape the future of artificial intelligence and strengthen our global position as a science and technology superpower,” he said.
Dame Wendy Hall, Deputy Chair of the Artificial Intelligence Council, hailed the move to regulate AI.
“This is critical to fostering responsible innovation and enabling our AI ecosystem to thrive,” she said.
Hall added that the Council will continue to work with the government to develop technical documentation for AI systems.
In July, the European Parliament passed a series of laws regulating tech giants in the areas of privacy, algorithms and competition.
In February, US lawmakers introduced rules to regulate discriminatory artificial intelligence systems.