The Impact of AI
In The Age of AI, co-written with Eric Schmidt and Daniel Huttenlocher, Henry Kissinger warned that AI would eventually be able to reach conclusions and make decisions that no human could anticipate or understand. Put another way, self-learning AI would become capable of making decisions beyond what humans programmed into it, basing those decisions on whatever it deems the most logical approach, regardless of how negative or devastating the consequences might be.
A common example is how AI has already transformed games of strategy like chess: given the chance to learn the game for itself rather than relying on plays programmed into it by the best human chess masters, it executed moves that had never crossed a human mind. And when it played against other computers limited to human-derived strategies, the self-learning AI proved dominant.
Applied to warfare, this could mean AI proposing or even executing the most inhumane of plans over human objection, simply because it considers them the most logical steps to take.
The Influence of AI
As part of his warning, Kissinger noted just how far-reaching AI’s influence already is in modern life, especially through seemingly innocuous things such as social media algorithms, grammar checkers, and the much-hyped ChatGPT. As our dependency on AI grows, so does the risk of human thinking being eclipsed by machine efficiency and effectiveness. And how AI arrives at its efficient, effective decisions becomes questionable, because it can be difficult or nearly impossible to trace what it has learned along the way.
Just imagine someone making a decision influenced by information fed to them by AI, yet unable to explain the reasoning behind that decision. They may not realize it, but at that point they’re living in an AI world, one in which human decision-making imitates machine decision-making rather than the reverse. It was this interchangeability Alan Turing was referring to with his famous postulate about artificial intelligence, the so-called “Turing Test,” which holds that you haven’t reached anything that can fairly be called AI until you can’t tell the difference.
Appropriate Use of AI
However, the book does not subscribe to “AI fatalism,” the common belief that AI’s rise is inevitable and humans are powerless to affect it. The authors write that we are still capable of controlling and shaping AI with our human values, its “appropriate use,” as we at Cascade Strategies have been advocating for quite some time. We have the opportunity to limit or restrain what AI learns and to align its decision-making with human values.
Kissinger sounded the warning, and others have already called for limits on AI’s capabilities. We are hopeful that in the coming years, with the best modern thinkers and technology experts at the forefront, we will move toward an AI-assisted world where human agency remains paramount, rather than an AI-dominated world where inscrutable decisions are left to the machines.
Appropriate Use of AI
The Rise Of AI
Believe it or not, artificial intelligence has existed for more than 50 years. But as the European Parliament has pointed out, it took recent advances in computing power, algorithms, and data availability to accelerate breakthroughs in AI technologies. In 2022 alone, AI went relatively mainstream with the sudden popularity of OpenAI’s ChatGPT.
But that’s not to say AI hasn’t already been incorporated into our daily lives. From web searches to online shopping and advertising, from digital assistants on our smartphones to self-driving vehicles, from cybersecurity to the fight against disinformation on social media, AI-powered applications have been employed to enable automation and increase productivity.
The Woes Of AI
However, the rise of AI also brings concerns over its expanding use across industries and day-to-day activities: perceived negative socio-political effects, the threat of AI-powered processes displacing human employment, and the advent of intelligent machines capable of evolving past their programming and human supervision. That last one is mostly inspired by science fiction, but it remains a plausible possibility. A more grounded, present-day concern, however, is the overreliance on and misuse of artificial intelligence.
Sure, AI is able to perform a variety of simple and complex tasks by simulating human intelligence, efficiently and quickly producing objective and accurate results. However, there are activities requiring discernment, abstraction, and creativity where AI’s approximation of human thinking falls short. Cognitive exercises like these not only demand high-level thinking but also involve value judgments honed and shaped by human experience.
The Expedia Group Case Study
This brings us to our case study for Expedia Group, which has around a million hospitality partners and whose goal is to increase engagement with those partners. For five years, Expedia grouped its lodging partners, at the time mostly chain hotels, with a segmentation model that helped guide its partner sales teams on how to prioritize their time. The “advice” Expedia provides comes through marketing, in-product messaging, or the partner’s account manager. When a partner acts on Expedia’s advice, it usually wins the booking over a competitor.
Now, you can imagine that Expedia has thousands of recommendations it could give its partners. So how does Expedia determine which recommendation is most likely to spur a partner to act and produce optimal revenue?
If you answered “use AI,” you’re on the right track. With thousands of possible decisions, Expedia wants AI to filter out the bad choices and boil the rest down to a few good, revenue-optimizing recommendations. Expedia wants AI to help with decisions, but it doesn’t want AI to make those decisions for it or for its partners.
But things are different now: Expedia’s partners have grown to include independent hotels and vacation rentals as well. So what if Expedia added dimensions to the model that let it target partners with recommendations suited to each partner’s way of thinking and feeling, and that appeal to its primary motivations as a property?
That’s exactly where Cascade Strategies stepped in. We followed a disciplined five-step process: among other things, we interviewed 1,200 partners and prospects across 10 countries in 4 regions, converted emotional factors into numeric values, and used advanced forms of machine learning to arrive at optimal segmentation solutions. The result was a psychographic segmentation that groups partners into subgroups based on patterns of thinking, feeling, and perceiving, in order to explain and predict behavior. A simplified sketch of the clustering step appears below.
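The modeling details are proprietary, but a minimal sketch of how numeric psychographic scores can be clustered into segments might look like the following. The file name, the factor names, and the choice of k-means with a silhouette check are illustrative assumptions, not Cascade Strategies’ actual method.

# Minimal illustrative sketch: clustering numeric psychographic scores
# into segments. File name, factor names, and the use of k-means are
# assumptions for illustration, not the actual Cascade Strategies model.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Hypothetical survey data: one row per partner, numeric scores derived
# from emotional/attitudinal questions (e.g., 1-7 agreement scales).
df = pd.read_csv("partner_survey_scores.csv")  # hypothetical file
features = ["growth_ambition", "risk_tolerance", "service_focus",
            "tech_affinity", "price_sensitivity"]  # illustrative factors

# Standardize so no single factor dominates the distance metric.
X = StandardScaler().fit_transform(df[features])

# Try several segment counts and keep the one with the best silhouette.
best_k, best_score, best_model = None, -1.0, None
for k in range(3, 9):
    model = KMeans(n_clusters=k, n_init=10, random_state=42).fit(X)
    score = silhouette_score(X, model.labels_)
    if score > best_score:
        best_k, best_score, best_model = k, score, model

df["segment"] = best_model.labels_
print(f"Chose {best_k} segments (silhouette = {best_score:.2f})")
print(df.groupby("segment")[features].mean().round(2))

In practice the segment profiles (the per-cluster averages printed at the end) are what get interpreted and named, so that each subgroup can be described in terms of how it thinks, feels, and perceives.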
This “conceived the game anew” for Expedia Group (in the sense suggested by Eric Schmidt and his co-authors in The Age of AI: And Our Human Future). Now seeing its partners in a different light, Expedia needed to evolve its communications to reflect that new view, with the end goal of matching each segment with the right offer. The messages it deploys should be strongly action-oriented, based on what compels each segment.
Cascade Strategies then created an application called Scenario Analyzer to make this easy for people at Expedia. Users can simply ask the Scenario Analyzer for the optimal decision under a given set of input conditions. Basically, a marketer selects a target segment and a region, and the Scenario Analyzer answers, in effect, “You could do any of these six things and you’d make some money. It’s your call.”
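We can’t publish the internals of Scenario Analyzer, but the core interaction is roughly the one sketched below: given a segment and a region, rank the candidate recommendations by modeled incremental revenue, drop the weak ones, and hand a short list back to a human. The segment names, messages, revenue figures, and thresholds here are all hypothetical placeholders.

# Illustrative sketch of a Scenario Analyzer-style lookup, not the real
# application. Recommendations and revenue estimates are hypothetical;
# the point is that the tool ranks options and a human decides.
from dataclasses import dataclass

@dataclass
class Recommendation:
    message: str
    est_incremental_revenue: float  # modeled 90-day uplift in USD

# Hypothetical modeled uplifts for one (segment, region) combination.
CANDIDATES = {
    ("growth_seekers", "EMEA"): [
        Recommendation("Drive more group and corporate business", 140_000),
        Recommendation("Add mobile-exclusive rates", 95_000),
        Recommendation("Refresh property photos", 60_000),
        Recommendation("Enable instant booking", 22_000),
        Recommendation("Lower cancellation fees", -15_000),
    ],
}

def analyze(segment: str, region: str, max_options: int = 6):
    """Return the top-ranked options with positive modeled uplift.

    The tool filters and ranks; it does not choose. The final call
    stays with the marketer."""
    options = CANDIDATES.get((segment, region), [])
    viable = [r for r in options if r.est_incremental_revenue > 0]
    viable.sort(key=lambda r: r.est_incremental_revenue, reverse=True)
    return viable[:max_options]

for rec in analyze("growth_seekers", "EMEA"):
    print(f"${rec.est_incremental_revenue:>9,.0f}  {rec.message}")

The deliberate design choice is in the last step: the application never executes a recommendation on its own, it only presents a ranked shortlist for a person to act on.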
If partners do nothing, Expedia still makes about $1.5 million from them during a 90-day period as part of its regular business momentum. But if partners act on the top-ranked recommendation, which carries the message “Maximize your revenue potential by driving more groups or corporate business to your property,” the result is about $140,000 more over the same period, about a 1% gain. We couldn’t reach all partners with the same message, which led us to lower our expectations a little, but in the end we did slightly better than we expected.
The “Appropriate Use” of AI
So what did we do? We made “Appropriate Use” of AI. It neither made the decision nor guaranteed the money. It warded off the worst ideas and told us which recommendation was best in comparative terms.
Many people in marketing treat AI as the next cool thing and want to jam it in wherever they can, whether it’s helpful or not. “Appropriate Use” stands against that: the best way to apply AI to marketing is as decision support that remains under human discretion and judgment, rather than letting AI actually make the choices.
We think AI can at times be a very poor decision maker but a very good advisor. And we’re not alone in that concern; to illustrate, 61% of Europeans look favorably on AI and robots, while 88% say these technologies require careful management.
Health care is another example of how important human intervention is to the “Appropriate Use” of AI. As noted by frontiersin.org, the legal and regulatory framework for the practice of medicine and public health may not be well developed in some parts of the world. Throwing artificial intelligence into the mix without careful, thoughtful planning might aggravate existing health disparities among demographic groups.
This is part of why we believe in shaping AI with human values, including human dignity and moral agency. AI, the “defining future technology,” is already proving to be a powerful tool for providing solutions and achieving goals, but it can only unlock new levels of excellence, innovation, and integrity when guided appropriately by human values and experience.
Other interesting reads:
https://www.wgu.edu/blog/what-ai-technology-how-used2003.html#close
https://www.investopedia.com/terms/a/artificial-intelligence-ai.asp