Golden Age Now: Toward a Bright Future

Politics & Governance

AI Risks: One Ring To Rule Us All? (Part 2)

10 MINUTE READ
By Patrick Rogers
- Senior Writer

Editor’s note: This is Part Two of our series on the dangers of the proliferation of advanced AI technology. It focuses on solutions to these threats. The article continues with our metaphorical theme that borrows heavily from J.R.R. Tolkien’s epic trilogy, Lord of the Rings.



In the end, the battle for the future of artificial intelligence (AI) is not just a technological struggle but a moral and spiritual one. By confronting and overcoming dark forces bent on bending amazing new technologies to their own selfish ambitions, we can ensure that AI illuminates our path to a brighter, more just, and harmonious world.

To that end, let’s examine solutions to the threats posed by the swiftly evolving tool known as AI—once again using Lord of the Rings as our enlightening metaphor.


The Council of Elrond: international collaboration

Just as the Council of Elrond in Lord of the Rings gathered the Free Peoples of Middle-earth to address the threat of the One Ring, so too must nations come together to create a unified front against the misuse of AI.

Establishing international coalitions focused on AI ethics and safety is a good beginning. Unilateral actions by nations to put boundaries around advanced AI technologies are also essential.

Controls on high-end microprocessors 

For example, the United States has implemented stringent export controls that restrict the export of certain microprocessors and other critical components essential for developing advanced AI systems.

As a case in point, California-based NVIDIA, which dominates the global market for specialized AI microprocessors (GPUs), is subject to several US government restrictions.

GPU stands for Graphics Processing Unit, a specialized electronic circuit designed to accelerate the creation and rendering of images, videos, and animations. GPUs are also widely used in artificial intelligence and machine learning applications, due to their ability to handle parallel processing tasks efficiently.
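As a loose illustration of the data-parallel style that GPUs accelerate, the NumPy sketch below applies one arithmetic operation uniformly across a million values. (This sketch runs on the CPU; on a GPU, the same elementwise work would be spread over thousands of cores at once.)

```python
import numpy as np

# GPUs excel at "same operation on many elements" workloads.
# This CPU-based NumPy sketch shows that data-parallel style:
# a single expression applied across a million values, the kind
# of computation a GPU distributes over thousands of cores.
a = np.arange(1_000_000, dtype=np.float32)
b = np.ones_like(a)
c = a * b + 2.0  # elementwise multiply-add over the whole array

print(c[:3])  # first three results: [2. 3. 4.]
```

The same pattern, multiplied across the billions of arithmetic operations needed to train a large AI model, is why chips like the A100 and H100 matter so much.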

Due to concerns over national security and technology leadership, the restrictions are intended to prevent countries such as China and Russia from gaining access to advanced semiconductor technology that could strengthen their military capabilities and artificial intelligence development.

Here are the key points of these restrictions:

  • NVIDIA must obtain special licenses from the US Department of Commerce before exporting certain advanced GPUs to China and Russia.
  • The restrictions focus on high-performance GPUs designed for AI and supercomputing applications, such as the A100 and H100 chips. These chips are crucial for training large AI models and performing complex computational tasks.

The US government’s concern is that these advanced chips could be used to enhance military capabilities, conduct cyber warfare, or develop surveillance technologies.

These restrictions are part of broader export control regulations aimed at preventing the proliferation of advanced technologies that could be used against US interests.

The restrictions are also part of the larger context of the US-China tech rivalry, where the US wants to maintain its technological edge and prevent China from catching up in critical areas like AI and semiconductor technology.

Enhanced cyber defenses and limits on military applications of AI

Spurred by US government regulations and initiatives, US defense contractors are actively engaged in bolstering cybersecurity defenses across critical sectors. 

This includes investments in AI-driven cybersecurity tools that can detect and respond to threats more effectively, as well as the establishment of specialized cyber units within military and intelligence agencies.

US government agencies spearheading the effort 

Agencies including the Department of Defense (DoD), the Department of Homeland Security's Cybersecurity and Infrastructure Security Agency (CISA), and the National Security Agency (NSA) lead these cybersecurity initiatives.

Evidence of defense contractors’ commitment

There is substantial evidence that US defense contractors are fully committed to these cybersecurity initiatives.

  • Defense contractors actively work to comply with the Department of Defense's Cybersecurity Maturity Model Certification (CMMC) standards, which require different levels of cybersecurity measures depending on the sensitivity of the information contractors handle. This compliance is necessary to bid on DoD contracts, so adoption is widespread.
  • Major defense contractors, such as Lockheed Martin, Northrop Grumman, and Raytheon Technologies, have significantly invested in cybersecurity infrastructure and personnel. They have established dedicated cybersecurity divisions and routinely collaborate with government agencies to share threat intelligence and best practices.
  • Defense contractors regularly participate in cybersecurity exercises and simulations conducted by government agencies to test and improve their defenses against cyberattacks. 

Overall, the concerted efforts by US government agencies and the proactive measures taken by defense contractors demonstrate a strong commitment to enhancing cybersecurity across critical sectors.

The Fellowship of the Ring: multidisciplinary teams

Forming multidisciplinary teams of AI researchers, ethicists, policymakers, and industry leaders mirrors the diverse Fellowship of elves, wizards, dwarves, hobbits, and humans that sought to destroy the One Ring. These teams must work together to develop and, where possible, enforce ethical guidelines. 


To ensure open participation by AI stakeholders of all viewpoints, a broad range of organizations with different perspectives is needed to advocate for responsible AI use. 

The Partnership on AI is one such organization, albeit with its own particular points of view. This alliance works to promote the responsible and ethical development, deployment, and use of artificial intelligence technologies.

Some notable members are:

  • Technology companies: Google, Amazon, Apple, Facebook, IBM, Microsoft
  • Academic institutions: MIT, Stanford University, UC Berkeley, Harvard University
  • Civil society organizations: American Civil Liberties Union (ACLU), Amnesty International, Human Rights Watch
  • Policy and research groups: The Center for Democracy & Technology, The Future of Humanity Institute, The Alan Turing Institute

Rivendell’s archives: education and awareness

Education is the Rivendell of our metaphor: a repository of knowledge that arms us against ignorance. Rivendell, home of the wise and powerful ruler Elrond Half-elven, is a sanctuary and center of learning and healing in Middle-earth. Elrond maintains extensive archives of ancient lore, histories, and maps.

Just as the archives in Rivendell helped guide the characters in Lord of the Rings, comprehensive AI education empowers society to navigate the complexities of AI so that its development and deployment will benefit humanity. 

The collaborative efforts of academic institutions, government agencies, industry leaders, and non-profit organizations are essential in this endeavor.

This educational initiative is critical for several reasons:

  1. Informed decision-making: Educated decision-makers can create effective policies and regulations that harness the benefits of AI while mitigating its risks.
  2. Public awareness: Educating the public leads to a more informed society that can engage in meaningful discussions about AI and its impact.
  3. Ethical considerations: By understanding AI’s potential ethical dilemmas, global society can ensure that the technology is developed and used responsibly.

The responsibility for organizing and promoting education on AI falls to academic institutions, government agencies, industry leaders, and the widest possible range of interest groups.

The Palantír’s watch: robust monitoring systems

To prevent AI from falling into the wrong hands, we need vigilant monitoring systems akin to the watchful Palantíri. In Tolkien’s epic, the Palantíri are ancient and powerful seeing stones that allow users to communicate with each other and see events across vast distances.

Nations and organizations should implement rigorous oversight mechanisms to detect and mitigate misuse. This includes tracking the development and deployment of AI technologies, similar to how the Financial Action Task Force globally monitors and combats money laundering and terrorist financing.

The European Union’s AI monitoring model

The European Union’s recently enacted AI Act regulates the development, commercialization, and use of artificial intelligence within the EU. 

Though discussing the pros and cons of Europe’s AI Act is beyond the scope of this article, its tiered risk levels are worth mentioning as an interesting AI monitoring model. 

The Act categorizes AI systems into four risk levels: unacceptable risk, high risk, limited risk, and minimal risk.

  1. Unacceptable risk: Banned practices, such as social scoring by governments and certain types of biometric surveillance.
  2. High risk: AI systems in sensitive areas like critical infrastructure, education, employment, and law enforcement that require stringent compliance measures.
  3. Limited risk: AI systems that need specific transparency obligations, like informing users they are interacting with AI.
  4. Minimal risk: Most common AI systems. These are subject to the least regulatory burden.
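
As a rough sketch of this tiered scheme, the four categories can be pictured as a simple lookup. The example systems below are hypothetical illustrations drawn from the descriptions above, not legal classifications under the Act:

```python
# Illustrative sketch of the EU AI Act's four risk tiers.
# Example systems are hypothetical, not legal classifications.
RISK_TIERS = {
    "unacceptable": ["government social scoring", "certain biometric surveillance"],
    "high": ["critical-infrastructure control", "hiring screening", "law-enforcement tool"],
    "limited": ["customer-service chatbot"],  # must disclose AI interaction to users
    "minimal": ["spam filter", "video-game AI"],
}

def risk_tier(system: str) -> str:
    """Return the tier for a known example system; default to 'minimal',
    the tier most common AI systems fall into."""
    for tier, examples in RISK_TIERS.items():
        if system in examples:
            return tier
    return "minimal"
```

The regulatory burden then scales with the tier: an outright ban at the top, strict compliance requirements for high-risk systems, transparency duties for limited-risk ones, and little to none at the bottom.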

High-risk AI systems must meet strict requirements related to data governance, documentation, transparency, human oversight, and robustness.

Non-compliance can result in fines of up to €30 million or 6% of a company’s global annual turnover, whichever is higher.

The sentinel Towers: regulatory frameworks

Just as the Towers stood as sentinels over Middle-earth, robust regulatory frameworks must guard against AI’s misuse. 

Governments need to create and enforce laws that prevent the development and proliferation of AI weapons, similar to the Nuclear Non-Proliferation Treaty. This includes implementing strict controls over dual-use technologies that could be repurposed for harm.

The destruction of the Ring: decommissioning malicious AI

Finally, much like Frodo’s mission to destroy the One Ring, we must be prepared to decommission AI systems that pose threats. This involves developing strategies for safely dismantling or disabling rogue AI technologies. 

Organizations like the Center for Human-Compatible AI at UC Berkeley are needed to drive research into this and other challenging AI issues. 

The final call: spiritual and ethical evolution

In the unfolding epic of artificial intelligence, our journey to harness its immense potential is shadowed by formidable adversaries, akin to the dark forces of Mordor that sought to reclaim the One Ring. 


These modern-day threats, like Sauron’s minions, are numerous, varied, and relentless in their pursuit of power and domination. To safeguard our planet and its people from the nefarious and destructive use of advanced AI, we must first understand the nature of these dark forces and devise strategies to contain and overcome them.

Ultimately, however, the journey to harness the full potential of AI while avoiding its perils demands of humanity a profound spiritual and ethical evolution. 

As technology advances exponentially, we, as the drivers of this powerful technology, must together cultivate a deeper sense of the purpose of life—combined with a strong determination to preserve the very best of what it means to be human. 

Simply put, to meet the inexorable AI challenges we will face in the coming years, we must fully mirror the wisdom and courage exemplified by the heroes of Middle-earth.


By Patrick Rogers
Patrick Rogers has worked in journalism as a newspaper reporter, a health news editor, and a university writing instructor. He also is a fiction author and a wildly optimistic fellow. He welcomes your comments and questions at patrick@goldenagenow.com.