A Leap into the Future: The Horizon of AI with the European Parliament’s Landmark Act

Hossein Azarbaijani
Published in The AI Insights
3 min read · Mar 16, 2024


In an era where artificial intelligence (AI) intricately weaves through the tapestry of our daily lives, the European Parliament, as announced on its official website, has taken a substantial stride into the future with the adoption of the AI Act.

This act, ratified with an overwhelming majority, symbolizes a watershed moment in the global conversation about AI. It brings to my mind the prescient work of Isaac Asimov, a luminary in the 20th century whose Three Laws of Robotics have long served as philosophical bedrocks for the ethical development of automated beings.

Isaac Asimov

Asimov’s first law, which prohibits a robot from harming a human or, through inaction, allowing a human to come to harm, resonates through the AI Act’s firm stance on banning certain AI applications that compromise citizens’ rights.

The prohibition against biometric categorization systems based on sensitive characteristics and the untargeted scraping of facial images for recognition databases not only echo Asimov’s imperative for safety but also underscore a commitment to protecting our fundamental human rights against the tidal forces of unchecked technology.

The Act’s nuanced approach to law enforcement’s use of biometric identification systems, permitting such utilities only under exhaustively listed and narrowly defined situations, similarly evokes Asimov’s second law: robots must obey the orders given to them by human beings, except where such orders would conflict with the First Law.

This principle of conditional obedience, calibrated by the highest considerations of human safety and rights, highlights the AI Act’s attempt to balance the scales between harnessing AI’s potential for public good and safeguarding against its perils.

However, Asimov’s third law, which states a robot must protect its existence as long as such protection does not conflict with the First or Second Law, opens a complex dialogue about AI’s self-preservation in the context of high-risk applications. The AI Act mandates rigorous obligations for AI systems in critical domains — healthcare, education, law enforcement — to minimize risks, ensure transparency, and maintain human oversight.

Yet, one ponders:

How do we navigate the thin ice between an AI’s “self-protection” mechanisms and the imperative to prioritize human interests unequivocally?

Where does the “existence” of an AI system begin and individual autonomy end?

From a cognitive neuroscience perspective, the Act’s emphasis on transparency, accountability, and the rights of individuals to comprehend and contest AI-driven decisions echoes the cognitive cornerstone of human agency.

The requirement for AI systems, especially those of general purpose, to publish detailed summaries of their training content aligns with the neuroscientific understanding of how transparency fosters trust, a critical component in the human-machine interface.

Yet, as we stand at this crossroads, a bevy of philosophical and ethical questions burgeons.

Can AI, however wrapped in regulations, truly comprehend the nuanced moral landscape that governs human society?

How do we ensure that AI systems, tasked with making decisions in “high-risk” domains, adequately encapsulate the vast spectrum of human values and ethical considerations?

And importantly, how do we reconcile the acceleration of AI innovation with the inherently slow pace of ethical and legislative scrutiny?

In view of Asimov’s visionary contributions, the AI Act serves as a significant step forward, yet it is but the opening gambit in a long, contemplative game of chess between humanity and its creations.

As we embark on this journey, it behooves us to continually interrogate not only the technological capabilities of AI but also the philosophical and ethical frameworks that underpin its integration into society.

Observing from the Altern.ai monitoring tower, we see that our venture into this brave new world of AI necessitates a balancing act of unprecedented delicacy: embracing the vast potential of AI-driven innovation while steadfastly anchoring our efforts in the bedrock of human dignity, rights, and ethical principles.

As we trailblaze into this uncharted territory, let us carry forward the torch of inquiry and reflection, illuminating our path with the wisdom of the past and the foresight of the future.
