As AI has had a profound impact on society, calls for regulation have increased. Before enacting new legislation, it is prudent to assess whether existing legal frameworks can address AI-related issues.
History demonstrates that legal systems have adapted—rather than been overhauled—to manage disruptive technologies. AI should prove no exception.
The law regulates relations among people and the entities they create, such as companies and governments. AI is a tool and cannot itself be regulated; what can be regulated are the people who develop, deploy, and benefit from AI.
If a self-driving car malfunctions, product liability statutes hold the manufacturer accountable. Looking at the essence of the law, we can see that no specialised AI law is needed.
Just as societies adapted property and contract laws to the use of horses in the 19th century and applied existing legal frameworks to the internet, they can apply existing law to AI; no novel laws are required. Legal principles such as liability, transparency, and justice already govern human actions, regardless of the tools involved.
The core problem lies in the legal immunities granted to tech companies under Section 230 of the 1996 US Communications Decency Act. Designed to protect early internet platforms, it now shields companies from liability for AI-generated harms such as deepfakes, harassment, and fraud.
Imagine a car manufacturer evading responsibility for faulty brakes by claiming, "The car drove itself"; even for genuinely driverless cars, that defence would be absurd. Section 230 enables this absurdity, allowing platforms to sidestep consequences for AI misuse. Revising this immunity shield, rather than creating new regulations, would address many of AI's legal problems and risks.
Before enacting a new AI law, we should check whether existing rules can deal with the cases AI triggers. This test can be done by following the AI pyramid layer by layer.
The entire legal system, from contract and tort law to labour and criminal law, comes into play here. A foundational principle applies: those who develop or benefit from a technology must bear responsibility for the harm it causes. This means holding companies accountable for harm caused by their AI systems, whether through negligence, misuse, or malicious intent.
History shows that transformative technologies—from railroads to the internet—demanded adaptation, not reinvention, of legal frameworks. AI is no exception. Existing laws on liability, intellectual property, and data can address AI issues. The core challenge lies not in regulatory gaps but in outdated carve-outs like Section 230, which shield tech platforms from accountability for AI-driven harms.
Societies can lower risks without stopping innovation by focusing legal attention on the people who make, use, and benefit from tools rather than on the tools themselves. Update immunity laws, enforce transparency, and hold those responsible for AI-caused harm accountable.