
Kissinger’s Warning on AI


The Impact of AI

In The Age of AI, which Henry Kissinger co-wrote with Eric Schmidt and Daniel Huttenlocher, Kissinger warns that AI will eventually be able to reach conclusions or decisions that no human is able to consider or understand. Put another way, self-learning AI would become capable of making decisions beyond what humans programmed into it, basing those conclusions on whatever it deems the most logical approach, regardless of how negative or devastating the consequences might be.


A common example that illustrates this point is how AI has already transformed games of strategy like chess: given the chance to learn the game for itself rather than relying on plays programmed into it by the best human chess masters, it executed moves that had never crossed a human mind. And when playing against other computers limited to human-derived strategies, the self-learning AI proved dominant.


Applied to the field of warfare, this could mean AI proposing, or even executing, the most inhumane of plans over human objections simply because it considers such a decision the most logical step to take.


The Influence of AI

As part of Kissinger’s warning, the book notes just how far-reaching AI’s influence already is in modern life, especially through seemingly innocuous things such as social media algorithms, grammar checkers, and the much-hyped ChatGPT. With our growing dependency on AI comes the risk of human thinking being eclipsed by machine-based efficiency and effectiveness. And how AI arrives at such efficient and effective decisions becomes questionable, because it can be difficult or nearly impossible to trace what it has learned along the way.


Just imagine someone making a decision influenced by information fed to them by AI, yet unable to explain the reasoning behind that decision. That person may not realize it, but at that point they are living in an AI world, where human decision-making imitates machine decision-making rather than the reverse. It was this interchangeability Alan Turing was pointing to with his famous postulate about artificial intelligence, the so-called “Turing Test,” which holds that you haven’t reached anything that can fairly be called AI until you can no longer tell the difference between the machine’s responses and a human’s.


Appropriate Use of AI

However, the book does not embrace “AI fatalism,” the common belief that AI’s rise is inevitable and that humans are powerless to affect it. The authors write that we are still capable of controlling and shaping AI with our human values, what we at Cascade Strategies have long advocated as its “appropriate use.” We have the opportunity to limit or restrain what AI learns and to align its decision-making with human values.


Kissinger sounded this warning while others were already calling for limits on AI’s capabilities. We are hopeful that in the coming years, with the best modern thinkers and tech experts at the forefront, we will progress toward an AI-assisted world where human agency remains paramount, rather than an AI-dominated world where inscrutable decisions are left to the machines.


