Even though artificial intelligence brings immense benefits in all areas, it also brings new risks. Hence, the question that arises here is whether it would be possible to regulate AI and what the role of the government should be.
The rapid advancement of the field calls for urgent regulation, and several countries are planning to act, though the pace of change complicates governments' efforts to agree on laws governing the technology's use.
Accordingly, governments have begun developing specific regulations and guidelines to ensure that AI is used responsibly and ethically.
The AI Summit Marked A Positive Beginning, But A Comprehensive Global Agreement Remains Elusive
The recent AI Safety Summit, held at Bletchley Park in the UK, marked a significant step toward establishing global dialogue and collaboration on the responsible development and use of AI.
UK Prime Minister Rishi Sunak played a pivotal role in fostering international cooperation on AI by hosting the summit.
While speaking with Rishi Sunak, Elon Musk stated, “It will get to the point where you’ve got open-source AI that will start to approach human-level intelligence, or perhaps exceed it. I don’t know quite what to do about it.”
The summit yielded several positive outcomes, including the signing of the Bletchley Declaration by 28 nations, along with the announcement of new AI safety institutes by the UK and US. However, disagreements still hovered over exactly how to contain and regulate AI.
The Bletchley Declaration and the AI Safety Summit promise to help navigate AI's complex challenges and to establish a structure that promotes its responsible and beneficial use.
Here is a brief overview of the current state of AI regulation in some countries:
AI regulation will vary from one country to another. Still, as part of their digital strategies, jurisdictions such as China, the European Union, and Brazil have already drafted AI legislation and taken a more active approach to regulation.
The US, by contrast, has been hesitant to regulate AI.
With the field moving so quickly, keeping track of developments is challenging, but knowing exactly where you stand under each regime is essential.
China

China was the first country to adopt a national AI strategy. Its AI regulations focus mainly on promoting AI development while mitigating its potential risks.

China has moved faster than most, with rules governing recommendation algorithms taking effect on March 1, 2022. A further set of measures, in force since August 15, 2023, governs the generative AI industry: service providers must submit security assessments and obtain clearance before releasing mass-market AI products.

China has also drafted rules restricting the use of facial recognition tools in public places.
The United Kingdom

The UK government has not released a legal framework, but it has laid out a 10-year national AI strategy for developing the technology within its borders.

Its first significant step was a roadmap to an effective AI ecosystem, and a new AI Standards Hub was announced in January 2022. Although these efforts are at an early stage, they signal that the UK is serious about establishing itself as a global authority on AI.
Spain

In April, Spain's data protection agency said it was launching a preliminary investigation into potential data breaches by ChatGPT.

It has also asked the EU's privacy watchdog to evaluate the privacy concerns surrounding ChatGPT.
The United States

There is presently no comprehensive AI regulation at the federal level in the United States, although various government departments have been active in addressing concerns around AI. As noted earlier, the US has been reluctant to regulate AI, but a movement to develop AI regulations has recently been growing.

The US Federal Trade Commission (FTC) has opened an expansive investigation into AI, examining whether consumer protection laws have been run afoul of by putting personal reputations and data at risk.
Japan

Japan is expected to introduce specific regulations by the end of 2023 that are closer to the US's lighter-touch attitude than to the stricter rules planned in the European Union.

In June, the country's privacy watchdog also warned OpenAI not to collect sensitive data without people's permission and to minimize the sensitive data it collects.
The European Union

The EU has taken many measures relevant to AI, including the General Data Protection Regulation (GDPR), which took effect in 2018. The GDPR gives individuals much more control over their personal data and sets strict limits on how it may be used.

The EU is also one of the first jurisdictions to draft regulations governing AI development and usage.

EU lawmakers have agreed on changes to a draft of the bloc's AI Act, but they will have to thrash out the details with member states before the draft rules become legislation.

The main sticking point is facial recognition and biometric surveillance, which lawmakers intend to ban completely, while EU countries want an exception for national security, defense, and military purposes.

The AI Act, expected to be introduced in 2023, has been specifically designed to evolve with the ever-changing nature of AI technology.
Israel

Israel has been working on AI regulations for the last 18 months or more, aiming to strike the right balance between innovation and the preservation of human rights and civic safeguards.

It published a 115-page draft AI policy in October and is collecting public feedback ahead of a final decision.
Italy

Italy's data protection authority plans to review other artificial intelligence platforms and to hire AI experts.

ChatGPT was temporarily banned in Italy in March over concerns raised by the national data protection authority, and it became available to Italian users again in April.
France

In April, France's privacy watchdog began investigating several complaints about ChatGPT after the chatbot was temporarily banned in Italy over a suspected breach of privacy rules.

In March, France's National Assembly approved the use of AI video surveillance during the 2024 Paris Olympics, overriding warnings from civil rights groups.
Principles Guiding The Regulation Of AI
Since there is uncertainty about how AI will be used in the future, its unfettered development has raised questions in many people's minds.
Its explosive growth has raised concerns about privacy, monopolization, and job losses.
As more and more countries design regulations to prevent the misuse of AI, its economic and privacy implications have come under particular scrutiny.
Certain fundamental principles should guide the regulation of AI, and those are discussed below:
Transparency And Explainability

AI systems should be transparent and explainable, making it easier for humans to understand exactly how they work and how they make decisions, and documentation should describe this process clearly.

AI systems should also provide explanations appropriate to the significance of the decision being made, so that humans can understand why a particular decision was reached.
Accountability

It should be clear who is responsible for an AI system and who is liable for any harm it causes. There should be accountability for the development and deployment of AI systems.

There should also be mechanisms in place to hold AI developers and deployers accountable for their actions.
Safety And Security

AI systems should be designed to be safe and secure, causing no harm to humans or property. Protecting them from cyber attacks is of utmost importance, and there should be mechanisms for detecting and responding to safety risks.
Fairness And Non-Discrimination
AI systems should avoid bias and discrimination, meaning they should not make decisions based on race, gender, or sexual orientation.
AI systems should also be able to detect and mitigate bias in their own decision-making.
Privacy

Most importantly, AI systems should respect individuals' privacy, meaning they should not collect or use personal data without the individual's consent.

Protecting personal data from unauthorized access and use is therefore essential.
Human Control

Humans should be able to control AI systems, and AI systems should not be capable of independent actions that can harm humans.

In other words, AI systems should be unable to make decisions that would harm humans without human intervention.

AI systems should also be designed so that humans can quickly override their decisions.
AI Regulation Around The World And Its Future
This article has focused on the activities of a few of the world's biggest regulatory superpowers, along with AI regulation efforts in many other territories around the world.
There is plenty of activity concerning AI regulation worldwide, and with the technology moving so fast, it is impossible to say exactly how things will shape up.
Given AI's continuously evolving nature and the fluidity of the proposed regulations, the future remains hard to predict.
Nonetheless, the proposals share a common aim: to protect shared values, prevent misinformation, and shield society from harm.
With this in mind, whether you are using AI or developing it in today's hyper-connected digital era, it is essential to familiarize yourself with the rules and regulations of the territory where the technology is created and used.
Why Is AI Regulation Necessary?
AI regulation is needed for a handful of reasons:
- Governments and companies use AI to make decisions that have a significant impact on our lives.
- Someone has to be accountable for those decisions and for the way they affect us. In short, when those responsible don't follow agreed norms and make decisions that cause harm, they must answer for it.
- AI can also be biased, which may lead to discrimination against people based on race, gender, sexual orientation, or other factors. It is essential to prevent such discrimination by setting standards that mitigate AI bias.
- As AI collects lots of private information from people, protecting this data from unauthorized access and usage is essential.
- AI can also create fake news and misinformation that harm society, so preventing AI from spreading misinformation and detecting fake news is essential.
- Autonomous weapons pose severe threats to humanity, so prohibiting their use is necessary.
In closing, the future of AI regulation is evidently uncertain, but regulation will play a significant role in ensuring that AI is used responsibly and ethically.
Developing AI regulation is a complex and challenging task, and it profoundly impacts our lives. By working together, we can create regulatory frameworks that ensure the proper use of AI.
If at all AI can be regulated, how should it be done? Share your thoughts in the comment section below.