Most mainstream artificial intelligence (AI) applications leverage the ability to process large amounts of data and discover patterns and trends within it. The results could help predict the future behavior of financial markets and city traffic and even help doctors diagnose diseases before symptoms appear.
But AI can also be used to invade our online privacy, automate human work, and undermine democratic elections by flooding social media with misinformation. Algorithms can pick up biases from the real-world data used to train them, which can lead to discrimination in hiring, for example.
AI regulation is the body of rules that determines how the technology should be developed and used, with the aim of addressing these potential harms. Here are some of the main efforts to create such rules and how they differ.
The EU AI Act and the Bletchley Declaration
The EU's AI Act, proposed by the European Commission, aims to reduce potential risks while encouraging entrepreneurship and innovation in AI. The UK's AI Safety Institute, announced at the recent government summit at Bletchley Park, aims to strike the same balance.
The EU act bans AI tools deemed to pose an unacceptable risk. This category includes "social scoring" products, which classify people based on their behavior, and real-time facial recognition.
The act also heavily restricts the next category down, high-risk AI. This label covers applications that could adversely affect fundamental rights, including safety.
Examples include autonomous driving and AI recommendation systems used in recruitment, law enforcement and education. Many such tools will need to be registered in an EU database. The limited-risk category covers chatbots such as ChatGPT and image generators such as DALL-E.
In general, AI developers will need to guarantee the privacy of any personal data used to "train" (improve) their algorithms and be transparent about how their technology works. One of the act's major weaknesses, however, is that it was largely drafted by technocrats without broad public participation.
Unlike the AI Act, the recent Bletchley Declaration is not a regulatory framework per se but a call to develop one through international cooperation. The 2023 AI Safety Summit, where the declaration was adopted, was hailed as a diplomatic breakthrough because it persuaded the world's political, business and scientific communities to agree on a plan that echoes the EU act.
USA and China
Companies from North America (particularly the US) and China dominate the commercial AI landscape, and most of their European head offices are in the UK. The US and China are now vying for position in the regulatory arena. US President Joe Biden recently issued an executive order requiring AI developers to share with the federal government assessments of their applications' vulnerability to cyberattacks, the data used to train and test the AI, and measurements of its performance.
The executive order also seeks to encourage innovation and competition by attracting international AI talent, creating training programs to build AI skills in the US workforce, and publicly funding partnerships between government and private companies.
Risks such as AI-driven discrimination in hiring, mortgage applications and criminal sentencing are addressed by requiring the heads of US federal agencies to issue guidance. This guidance will set out how officials should oversee the use of AI in those areas.
China's AI regulations pay close attention to generative AI and to protection against deepfakes: synthetically created images and videos that mimic the appearance and voice of real people but depict events that never occurred.
Great weight is also given to regulating AI recommendation systems: algorithms that analyze people's online activity to decide which content, including ads, to place at the top of a user's feed.
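To make that concrete, here is a minimal, hypothetical Python sketch of the kind of ranking logic such rules target. Every name, weight and the ad boost below is invented for illustration; real recommendation systems are far more complex.

```python
# Toy feed ranker: scores items by how well they match topics the user has
# engaged with, then sorts the feed. All values here are illustrative.
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    topic: str
    is_ad: bool

def score(item: Item, topic_affinity: dict[str, float]) -> float:
    """Score an item by the user's inferred affinity for its topic."""
    base = topic_affinity.get(item.topic, 0.0)
    # Hypothetical paid-content boost; transparency rules like China's would
    # require disclosing this kind of weighting to the people it affects.
    return base * (1.5 if item.is_ad else 1.0)

def rank_feed(items: list[Item], topic_affinity: dict[str, float]) -> list[Item]:
    """Order the feed so the highest-scoring items appear first."""
    return sorted(items, key=lambda it: score(it, topic_affinity), reverse=True)

feed = rank_feed(
    [Item("a1", "sports", False), Item("a2", "finance", True)],
    {"sports": 0.9, "finance": 0.4},
)
print([it.item_id for it in feed])  # ['a1', 'a2']
```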
To protect the public from recommendations deemed false or emotionally harmful, Chinese law bans fake news and prevents companies from applying dynamic pricing (charging more for essential services based on mined personal data). It also requires all automated decision-making to be transparent to those it affects.
The way forward
Regulatory efforts are shaped by national circumstances: US concerns about cybersecurity, China's drive to strengthen its private sector, and EU and UK attempts to balance support for innovation with risk reduction. Frameworks around the world face similar challenges in trying to promote ethical, safe and trustworthy AI.
Some definitions of key terminology are vague and reflect the input of a small group of influential stakeholders, with the public inadequately represented in the process. Policymakers should be wary of tech companies' considerable political capital. Including them in regulatory discussions is vital, but it would be naive to trust these powerful lobbyists to police themselves.
Artificial intelligence is entering the fabric of the economy, informing financial investments, supporting national health and social services, and influencing our entertainment choices. In other words, whoever establishes the dominant regulatory framework also gains the ability to shift the global balance of power.
Important problems remain unresolved. In the case of job automation, for example, it is often assumed that digital apprenticeships and other forms of retraining will turn today's workers into data scientists and AI programmers. But many highly skilled people may simply not be interested in software development.
As the world grapples with the risks and opportunities of AI, there are positive steps we can take to ensure the technology is developed and used responsibly. To support innovation, newly developed AI systems could initially be placed in the high-risk category defined by the EU AI Act and downgraded to lower-risk categories as their impact is investigated.
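As a rough sketch of how that proposal could work, the Python snippet below models the EU act's four risk tiers and permits a downgrade only after an impact review has passed. The tier names follow the act's categories; the review mechanics and class names are assumptions made for the example, not anything the act prescribes.

```python
# Toy model of the "start high-risk, downgrade after review" proposal.
from enum import IntEnum

class RiskTier(IntEnum):
    MINIMAL = 0       # e.g. spam filters
    LIMITED = 1       # e.g. chatbots, image generators
    HIGH = 2          # e.g. recruitment or law-enforcement tools
    UNACCEPTABLE = 3  # banned outright, e.g. social scoring

class AISystem:
    def __init__(self, name: str) -> None:
        self.name = name
        self.tier = RiskTier.HIGH  # new systems start as high risk by default

    def downgrade(self, target: RiskTier, review_passed: bool) -> None:
        """Move to a lower tier, but only after a passed impact review."""
        if not review_passed:
            raise ValueError("impact review must pass before downgrading")
        if target >= self.tier:
            raise ValueError("target must be a lower tier than the current one")
        self.tier = target

system = AISystem("triage-chatbot")
system.downgrade(RiskTier.LIMITED, review_passed=True)
print(system.name, system.tier.name)  # triage-chatbot LIMITED
```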
Policymakers can also learn from highly regulated industries such as pharmaceuticals and nuclear power. These are not direct analogues of AI, but many of the quality standards and operating procedures governing these safety-critical sectors offer useful lessons.
Finally, collaboration between everyone affected by AI is essential. Rule-making should not be left to technocrats. The public needs a say in a technology that can profoundly affect their personal and professional lives.