Rethinking AI Regulation: Why Existing Laws Are Enough - If We Enforce Them

As AI's impact on society has grown, so have calls for regulation. Before enacting new legislation, however, it is prudent to assess whether existing legal frameworks can already address AI-related issues.

History demonstrates that legal systems have adapted—rather than been overhauled—to manage disruptive technologies. AI should prove no exception.

Back to Basics: Law Governs Humans, Not Tools

The law regulates relations among people and the entities they create, such as companies and governments. AI itself is a tool and cannot be regulated; what can be regulated are the people who develop, deploy, and benefit from AI.

If a self-driving car malfunctions, product liability statutes hold the manufacturer accountable. If we get to the gist of law, which governs human conduct rather than tools, we can see that we do not need a specialised AI law.

Just as societies adapted property and contract laws to the use of horses in the 19th century and later applied existing legal frameworks to the internet, they can accommodate AI without novel laws. Legal principles like liability, transparency, and justice already govern human actions, regardless of the tools involved.

The Section 230 Problem: A Modern Legal Hurdle

The core problem lies in the legal immunity that Section 230 of the 1996 US Communications Decency Act grants to tech companies. Designed to protect early internet platforms, it now shields companies from liability for AI-generated harms such as deepfakes, harassment, and fraud.

Imagine a car manufacturer evading responsibility for faulty brakes by claiming, “The car drove itself.” Even for driverless cars, no one would accept that defence, yet Section 230 enables exactly this absurdity, allowing platforms to sidestep consequences for AI misuse. Revising this immunity shield, not creating new regulations, would address many of AI's legal problems and risks.

Reality Test for AI Rules

Before we enact a new AI law, we have to check whether existing rules can deal with the cases AI triggers. This test can be carried out by following the AI pyramid layer by layer.

  • Layer 1: AI’s hardware and computational power are already governed by technical standards. AI server farms fall under environmental laws that monitor energy and water consumption, and the global flow of semiconductors is tightly controlled by export control regimes.
  • Layer 2: Algorithms and AI capabilities have been at the heart of regulatory debates, with concerns ranging from AI safety to alignment and bias. Initially, the focus was on quantitative metrics like the number of parameters or training FLOPs (total floating-point operations). However, models like DeepSeek have turned this approach on its head, demonstrating that powerful AI inference doesn’t always require massive computational resources (see the back-of-envelope sketch after this list).
  • Layer 3: Data and knowledge, the lifeblood of AI, are already regulated by data protection and intellectual property laws. Yet, the courtroom drama unfolding today reveals the cracks in these frameworks. In the USA, OpenAI is battling the New York Times, while Universal Music Group is suing Anthropic. Across the Atlantic, Getty Images is taking Stability AI to court.
  • The Apex: AI use is where AI’s societal, legal, and ethical consequences come into sharp focus. Whether it’s deepfakes, biased hiring algorithms, or autonomous weapons, the risks stem not from the technology itself but from how it is used.
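To see how blunt these quantitative metrics are in practice, here is a minimal Python sketch of the back-of-envelope arithmetic behind compute thresholds. It assumes the widely used approximation of roughly 6 × parameters × training tokens for total training FLOPs, and the EU AI Act's presumption of systemic risk above 10^25 FLOPs; the model figures are hypothetical, for illustration only.

```python
# Back-of-envelope check of a hypothetical model against a FLOP threshold.
# Assumes the common ~6 * N * D estimate for total training compute,
# where N = parameter count and D = number of training tokens.

EU_AI_ACT_THRESHOLD = 1e25  # FLOPs; the EU AI Act presumes systemic risk above this


def training_flops(params: float, tokens: float) -> float:
    """Estimate total training compute: ~6 floating-point ops per parameter per token."""
    return 6.0 * params * tokens


params = 70e9   # hypothetical 70-billion-parameter model
tokens = 15e12  # hypothetical 15-trillion-token training run

flops = training_flops(params, tokens)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Above threshold" if flops > EU_AI_ACT_THRESHOLD else "Below threshold")
```

A model can land just under such a line yet, with DeepSeek-style efficiency gains, rival systems trained well above it, which is why static parameter and FLOP counts make unreliable regulatory triggers.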

The entire legal system, from contract and tort law to labour and criminal law, comes into play here. A foundational principle applies: those who develop or benefit from a technology must bear responsibility for the harm it causes. This means holding companies accountable for harm caused by their AI systems, whether through negligence, misuse, or malicious intent.

Conclusion

History shows that transformative technologies—from railroads to the internet—demanded adaptation, not reinvention, of legal frameworks. AI is no exception. Existing laws on liability, intellectual property, and data can address AI issues. The core challenge lies not in regulatory gaps but in outdated carve-outs like Section 230, which shield tech platforms from accountability for AI-driven harms.

Societies can lower risks without stopping innovation by focusing legal attention on the people who make, use, and benefit from tools instead of the tools themselves. Update immunity laws, enforce transparency, and hold those responsible for AI-caused harm accountable.


By Jovan Kurbalija, Director of DiploFoundation & Head of Geneva Internet Platform
