The Biggest Threat of Artificial Intelligence Isn’t What We’re Being Told

TheDiscernReport.com

There’s a lot of buzz around Artificial Intelligence and the dangers it poses. But I haven’t seen anyone talking about the one thing that has kept me up some nights. Here it is...

Let’s cut to the chase. The biggest threat of Artificial Intelligence is dependency. To be more specific, society will become dependent on Artificial General Intelligence shortly after it’s unveiled to the world. Why? Because AGI will be able to solve problems we know we have. It will be able to solve problems we don’t know we have. It will be able to solve problems we don’t have yet but will in the future.

If it sounds like I’m giving too much credit to AGI, then you aren’t familiar enough with AGI to appreciate what it will be able to do. Below, I’ll try to explain it better even though it would probably take several chapters of a book to do justice to the risks. But first, I want to offer two perspectives as primers. Here’s an article by Mac Slavo at SHTF Plan describing what some tech experts are saying about AI. Then, I’ll let ChatGPT give its somewhat comical answer about the risks that AI represents. Finally, I’ll go into why dependency will be the big one. First, here’s Mac Slavo…

Tech Experts Say AI Poses a “Human Extinction” Risk

Experts on artificial intelligence are warning of a “human extinction” risk as the technology progresses. Sam Altman, the CEO of ChatGPT-maker OpenAI, along with executives from Google’s AI arm DeepMind and Microsoft, were among those who signed a short statement to that effect.

“Contemporary AI systems are now becoming human-competitive at general tasks,” read an earlier open letter. “Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk loss of control of our civilization?” the letter asked.

Other tech leaders such as Tesla’s Elon Musk and former Google CEO Eric Schmidt have also cautioned about the risks AI poses to human society. “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” read the statement released Tuesday, according to a report by CNBC.

That open letter, released back in March and signed by Musk, Apple co-founder Steve Wozniak, and several other tech leaders, urged AI labs to stop training systems more powerful than GPT-4, OpenAI’s latest large language model, and called for a six-month pause on such advanced development.

The technology has gathered pace in recent months since the chatbot ChatGPT was released for public use in November and subsequently went viral. Within two months of its launch, it had reached 100 million users. ChatGPT has amazed researchers and the general public with its ability to generate humanlike responses to users’ prompts, raising the prospect that AI could replace jobs and imitate humans.

The statement added that it can be “difficult to voice concerns about some of advanced AI’s most severe risks” and said its aim was to overcome this obstacle and open up the discussion.

ChatGPT has arguably sparked much more awareness and adoption of AI as major firms around the world have raced to develop rival products and capabilities.

The consequences of putting humanity’s existence into the hands of artificial intelligence, which has no morals or compassion, could be dire.

What Does AI Think of AI?

Since Artificial Intelligence is able to write articles, I asked ChatGPT to write one. The query was simple: “Please write an article about the threats of Artificial Intelligence.”
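
For anyone who would rather reproduce this little experiment programmatically than through the chat interface, here is a minimal sketch using OpenAI’s official Python client. This is my illustration, not part of the original experiment; it assumes the openai package (version 1 or later) is installed and that an API key is available in the OPENAI_API_KEY environment variable, and the model name is illustrative.

    # Send the same prompt to ChatGPT through OpenAI's API.
    # Assumptions: the `openai` package (v1+) is installed and
    # OPENAI_API_KEY is set in the environment.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=[
            {
                "role": "user",
                "content": "Please write an article about the threats of Artificial Intelligence.",
            }
        ],
    )

    # The generated article comes back as the assistant message content.
    print(response.choices[0].message.content)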

Here’s how ChatGPT answered the query…

The Growing Concerns: Unveiling the Threats of Artificial Intelligence (As Described by ChatGPT)

Artificial Intelligence (AI) has emerged as a transformative force across various sectors, revolutionizing industries and enhancing our daily lives. From autonomous vehicles to voice assistants, AI has undoubtedly made remarkable strides. However, as AI continues to advance, concerns regarding its potential threats and risks have also gained momentum. It is crucial to understand and address these issues to ensure responsible and ethical deployment of AI systems.

Job Displacement: One of the primary concerns surrounding AI is its impact on employment. As AI technology becomes more sophisticated, there is a legitimate fear that automation and intelligent machines could replace human workers in various industries. While automation has historically led to job shifts rather than complete job loss, it is essential to support affected individuals through upskilling and retraining programs to mitigate the potential negative effects on the workforce.

Ethical Considerations: AI systems are built on algorithms and machine learning models that learn from vast amounts of data. This reliance on data raises ethical concerns, particularly in areas such as privacy, bias, and fairness. If the data used to train AI models is biased or incomplete, it can perpetuate existing social and cultural biases or discriminate against certain groups. It is crucial to implement rigorous guidelines and practices to ensure fairness, transparency, and accountability in AI systems.

Security Risks: AI systems can be vulnerable to cybersecurity threats. As AI becomes more integrated into critical infrastructure, industries, and even military applications, the potential for exploitation and malicious use increases. Adversarial attacks, where AI systems are manipulated to make incorrect decisions, pose a significant risk. Safeguarding AI technologies against these threats requires robust security measures, rigorous testing, and continuous monitoring.

Loss of Human Control: As AI systems become more complex and autonomous, there is a concern that they may operate beyond human control or comprehension. This lack of transparency can lead to a loss of accountability and raise questions about who is responsible when AI systems make consequential decisions. Developing mechanisms for human oversight and control, along with ensuring explainability and interpretability of AI algorithms, is essential to maintain ethical standards and prevent unintended consequences.

Social Impact and Inequality: The widespread adoption of AI can exacerbate existing social inequalities. Access to AI technologies, including advanced healthcare diagnostics, personalized education, or AI-powered financial services, may not be equally distributed across all communities. This digital divide could further marginalize underprivileged groups and create a broader socio-economic gap. Efforts must be made to ensure equitable access to AI technologies and prevent the deepening of societal disparities.

Unemployment and Economic Disruption: AI-driven automation could lead to significant shifts in the labor market, potentially causing unemployment and economic disruption. Jobs that are routine and repetitive are most at risk. However, new job opportunities can also emerge as AI technology advances. It is crucial for governments, organizations, and educational institutions to adapt and prepare for these changes by investing in reskilling initiatives and fostering an environment that promotes job creation and entrepreneurship.

While AI holds immense promise, it is vital to approach its development and deployment with a comprehensive understanding of the potential threats and challenges. Striking a balance between innovation and responsible use is key to harnessing the full potential of AI while safeguarding against unintended consequences. A collaborative effort among governments, industry leaders, researchers, and society at large is necessary to ensure the ethical, secure, and inclusive advancement of AI technology for the benefit of humanity.

The Biggest Threat: Dependency

My concern isn’t that Artificial Intelligence won’t be able to fulfill its promise. My fear is that it will. The advancement into the realm of Artificial General Intelligence, which may or may not already exist, opens the door to the type of problem-solving that can and almost certainly will make humans dependent on it.

It would take a whole lot of writing to explain how advanced Artificial General Intelligence would be able to solve problems in ways that we’ve never imagined, but perhaps the easiest way to illustrate it is with the game Go. In recent years, Artificial Intelligence has been able to defeat the world’s best human players time and again. But it’s not just the fact that AI won the ancient game. It’s that AI won in ways humans hadn’t conceived of before. In fact, many human champions are changing the way they play the game based on strategies they saw the AI use.

“Its style was different, and it was such an unusual experience that it took time for me to adjust,” said world champion Lee Sedol after losing to Google DeepMind’s AlphaGo. “AlphaGo made me realize that I must study Go more.”

Many if not most would see this as a benefit of Artificial Intelligence. If it can stretch human ingenuity, how could that be bad? The risk is that with Artificial General Intelligence, it wouldn’t be a matter of stretching human ingenuity as we try to keep up. AGI would go far beyond the possible capabilities of humans in nearly every challenge it faces. Then it will be tasked with solving all of our problems, and unfortunately it will succeed.

It would be better at healthcare. It would be better at education. It would be better at war. It would be better at art. Widespread, properly powered, sufficiently accessible Artificial General Intelligence would surpass humans in every way. And when it does, we become dependent on it. No, I’m not referring to you or me as individuals. I’m talking about society. If we’ve learned anything over the past three decades, it’s that when technology offers solutions or shortcuts, we accept them as a society every single time.

Advancing AGI won’t be revolutionary like the introduction of television, the internet, or search engines. It will completely rewrite human existence, because it will “make sense” to the vast majority of people to let AGI handle literally everything. We will think we are the masters of the most powerful tool man ever created when in reality we will have created our own slavemaster.

Yes, with AGI, nearly every human on earth will eventually become its slave, and most won’t notice until it’s too late.

The sci-fi writers got it wrong. This isn’t about a malevolent superintelligence like Terminator’s SkyNet or a strict, purpose-driven totalitarian like in I, Robot. AGI will be extremely useful, so much so that we will no longer need to care for ourselves. And when people don’t need to care for themselves, many of them won’t. It’s a sloppy illustration, but the humans in Pixar’s WALL-E are a near approximation of what humanity will become with AGI at the helm.

They’ll become dependent. They’ll become slaves. And they’ll fight with everything they have to stifle or remove those of us who object.

If you believe there will be a lot of us who fight the good fight and push back against AGI just because a lot of us are opposed to it today, you haven’t been paying attention to how quickly sentiment can change, especially here in America. Lest we forget, it was less than two decades ago that voters in progressive California voted against gay marriage. It took a couple of years for that to reverse, a few more years for opposition to gay marriage to become anathema, and then a few years after that the state entered full-blown LGBTQIA+ supremacy status.

Elon Musk and others are warning us about Artificial Intelligence today, and many are hearing their concerns. But all it will take is AGI doing something beyond current human capacity before sentiment starts shifting rapidly.

This is just a cursory explanation of how AGI will create a fully dependent society in a very short period of time, but hopefully it’s enough to get you thinking. I’m cutting the explanation short because I want feedback. I’ll write a more comprehensive article with citations on the topic in the near future, but this will hopefully spark some conversations. Am I right? Partially right? Completely wrong?
