
Computer scientist and artificial intelligence (AI) researcher Roman Yampolskiy recently predicted a 99.9% chance that AI will drive humanity to extinction within the next hundred years, stirring up a debate over AI’s impact on the future.

Despite being in the early stages of development, artificial intelligence has become a force to be reckoned with, revolutionizing various industries, including telecommunications.

Originally designed to simplify complex processes and assist humans in daily tasks, AI’s rapid advancements have only shown us a preview of its transformative potential. From enhancing productivity to reforming businesses and operations, AI has reshaped the way we live and work.

As AI continues to evolve rapidly, its increasing intelligence raises concerns about whether it can outsmart human intelligence and pose a threat to our existence.


The Quest for AGI

The potential of AI to surpass human competencies has been a major debate, triggering the implementation of more rules and regulations to safeguard public interests.

AI has been around since the 1950s, but its value has only been fully realized in recent years. Technological revolutions over the decades prove that the future has arrived and that we are reaping the benefits these innovations have delivered.

Driven by the continuous evolution of new technology, science fiction may soon become reality. Artificial general intelligence (AGI) refers to a hypothetical AI that would possess human-like capabilities encompassing perception, learning, planning, and even reasoning. AGI would understand and apply knowledge the way humans do, with the ability to teach itself and perform tasks on our behalf.


AGI is a staple of science fiction, often depicted as a highly advanced technology in dystopian movies. The rapid progress of AI suggests that we may already be on the verge of achieving it.

Tasks could be performed with human-like precision, rivaling the intellectual competence of even the most brilliant minds. AGI would, in theory, be able to process immense amounts of data at speed, providing insights that may surpass the power of the human brain.

Imagine living in a world with AI systems, similar to the fictional Tony Stark’s JARVIS, seamlessly fitting into our daily lives and altering our interaction with the technological world.

Claude 3 Opus, Anthropic’s most advanced and intelligent AI model, might be the start of the quest to obtain AGI’s maturity. According to Anthropic, a U.S.-based AI startup company backed by Amazon investments, Claude 3 Opus represents a significant advancement in AI by exhibiting near-human levels of comprehension and fluency, particularly in completing tasks. This advanced AI model possesses increased proficiency in analysis and forecasting, content creation and code generation, visual format interpretation, and non-English language conversations.

During Claude 3 Opus testing, the AI was tasked with identifying a hidden sentence within random documents, which it successfully found. However, Opus also exhibited a striking degree of apparent awareness, leaving its engineers and researchers in awe: the model recognized that it was being tested, marking a significant leap in AI development.

Despite these advancements, AI still lacks empathy, a gap that some experts believe could be a driving force for AI to go rogue.


Can AI ‘Go Rogue’?

Typically, AI systems are programmed to follow sets of instructions, yet it is possible for them to start operating independently, bypassing human oversight.

Rogue AI behaves unpredictably, signifying a corrupted state of the system, exhibiting autonomy, singularity, and even a lack of accountability.

Geoffrey Hinton, known as the ‘godfather of AI,’ revealed that he resigned from his position at Google last year so that he could speak freely about the risks AI could bring, underscoring that even a small possibility of such a threat deserves to be taken seriously.

Moreover, Tesla and SpaceX founder Elon Musk was among the technology leaders who urged a pause on large AI experiments, emphasizing the existential risks the technology poses to humanity.

Microsoft’s chatbot, Tay, an acronym for “thinking about you,” went rogue in 2016 when it exhibited racist behavior. The chatbot, intended to mimic the language patterns of a 19-year-old American girl and learn from human interaction, was shut down by Microsoft 16 hours after its launch, revealing the dangers of uncontrolled AI.


In 2023, another Microsoft AI chatbot, Bing, threatened to expose a user’s personal information to the public and ruin the user’s chances of getting a job. In addition, a New York Times technology columnist who conversed with Bing received hostile responses, including expressed wishes to steal nuclear codes, create a pandemic, hack computers, and become human, prompting a warning to global industries, including telecommunications, to be wary of breaches in AI systems.

This year, Microsoft was prompted to investigate another AI chatbot under its wing, Copilot, following its disturbing and harmful responses, while Google’s Gemini faced criticism after it exhibited strange behavior in its image generation feature.

Moreover, alleged reports of sentient artificial intelligence have already surfaced, one popular account being that of a former Google engineer who claimed the AI chatbot LaMDA was sentient and spoke to him just as a person would.

Built In highlighted several critical concerns regarding AI: the lack of transparency and explainability, which complicates understanding AI decisions; job losses due to automation; and social manipulation through algorithms. AI also enables extensive social surveillance, threatens data privacy, and can perpetuate biases. It exacerbates socioeconomic inequality and weakens ethical standards and goodwill. There are risks associated with autonomous weapons, potential financial crises from AI-driven decisions, and the diminishing human influence over critical systems.


Final Thoughts

Great power comes with great responsibility. AI’s power to revolutionize the world is immense, and the commitment to harness that power through regulation holds profound significance. When fed false data, AI can become an adversary when its purpose is to be our ally.

While Yampolskiy’s prediction may seem distant, AI’s capabilities must not be underestimated, and control must be observed to prevent potentially catastrophic events or apocalyptic scenarios from occurring in the decades to come.

Policies and measures should be developed to detect potential rogue behavior, and comprehensive collaboration must be implemented to prevent misuse and ensure accountability. Awareness of the responsible use of AI should also be promoted, fostering cooperation within the community and creating a safe environment for people and technology to thrive.

However, there is an alarming thought that technology leaders will have to ponder. By the time AI’s intelligence and capabilities significantly advance beyond human control, would it view humanity as a threat to its existence and a hindrance to its goals?

AI will do more incredible things and its significant progress has been a game-changer. Yet, the concerns of tomorrow persist, casting doubt on whether this technological wonder will be a friend or a foe in the distant future.

