
Can AI & Human Researchers Coexist In Market Research?

jerry9789
artificial intelligence, Brand Surveys and Testing, Burning Questions


AI In Market Research Today

With 90% of the world’s data created in just two years, between 2021 and 2023, and global data volume reaching 149 zettabytes by 2024, it’s easy to see why the market research industry has readily adopted AI.  Traditional methods of data collection and analysis still hold a place in market research, but they simply aren’t as powerful as AI when it comes to handling that staggering volume of data.  But is AI powerful enough to take the place of human researchers?

AI enables research teams to move, process, and analyze massive datasets with speed and accuracy, efficiently handling the repetition and scale involved in the research process.  From drafting questionnaires to monitoring survey data quality, from analyzing open-ends to building dashboards and charts, AI automates much of the research process, leading to faster and better decisions at a scale beyond the capacity of human researchers alone.

But is AI the endgame for market research? Does it make human researchers obsolete?  

Image: geralt

 

Cascade Strategies and AI

Cascade Strategies conducted a member perceptions study for a company looking to develop and implement a brand typology.  The overall goal of the study was to help the company better understand its different customer types’ motivations and aspirations so it could engage them more effectively.  As part of the study, we conducted an online survey with over 1,500 randomly selected members.  We then used an AI-assisted Self-Organizing Map (SOM) to run all the cases iteratively, sometimes millions of times, until it optimized the separations among the groups.  The SOM produced a six-group solution, with each group having a dominant passion that the company serves well or poorly, ranging from a proclivity for deals and new brands to a yearning for customization and connection with other users.
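For readers curious about the mechanics, here is a minimal, self-contained Python/NumPy sketch of how a self-organizing map can sort survey respondents into a small grid of groups.  The data is synthetic, and the grid size, iteration count, and learning rates are illustrative assumptions, not details of the actual study.

```python
import numpy as np

def train_som(data, grid_w=3, grid_h=2, n_iter=5000, lr0=0.5, sigma0=1.5, seed=0):
    """Train a small self-organizing map on respondent feature vectors."""
    rng = np.random.default_rng(seed)
    n_features = data.shape[1]
    # One weight vector per grid node (3x2 grid -> six candidate groups)
    weights = rng.random((grid_w * grid_h, n_features))
    coords = np.array([(i, j) for i in range(grid_w) for j in range(grid_h)], float)
    for t in range(n_iter):
        lr = lr0 * np.exp(-t / n_iter)        # learning rate decays over time
        sigma = sigma0 * np.exp(-t / n_iter)  # neighborhood radius shrinks too
        x = data[rng.integers(len(data))]     # pick one case at random
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best-matching unit
        # Nodes near the BMU on the grid move more toward the case
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        h = np.exp(-d2 / (2 * sigma ** 2))
        weights += lr * h[:, None] * (x - weights)
    return weights

def assign_groups(data, weights):
    """Assign each respondent to the nearest trained node (group)."""
    return np.argmin(((data[:, None, :] - weights[None]) ** 2).sum(-1), axis=1)

# Hypothetical survey data: 1,500 respondents, 8 attitude scores scaled to [0, 1]
rng = np.random.default_rng(42)
data = rng.random((1500, 8))
weights = train_som(data)
groups = assign_groups(data, weights)
```

Repeated passes like this are what gradually sharpen the separations among groups: cases pull nearby grid nodes toward themselves, so similar respondents end up mapped to the same node.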

The AI did the heavy lifting of scanning the entire dataset, surfacing themes, and summarizing the respondents.  It did enough to structure the story of each group, but not enough to tell it or paint the whole picture.

This is where the human researchers at Cascade Strategies step in.  We came up with names for each group that best described its dominant passion, names resonant enough that they not only convey an immediate idea of what each group is most passionate about but also make the groups fundamentally relatable, even to someone who doesn’t share the same propensities: Shopper, Seeker, Learner, Sharer, Individualizer, and Intellectual.

In isolation, each group achieves the study’s goal of guiding the company toward the most effective way to engage with it.  Taken together, however, the groups give the company an overview of how to improve and further develop its platform: by introducing new features that matter to one particular group but would ultimately benefit the membership base as a whole.  For example, the Sharer would appreciate more opportunities to connect and interact with other experts and enthusiasts who share the same interests, which the platform could provide by making it easier to write reviews and share content.

AI surfaced all those patterns and signals from the survey data, but it lacked the judgment and context to elevate them into a meaningful, coherent narrative.  Human researchers, on the other hand, saw what story could be told from those themes and, by layering in human understanding, tied them to actionable business decisions.

Image: Christina Morillo

 

Leveraging AI In Market Research

So would AI replace human researchers?  We’d like to frame our response to this question with the words of Joseph Weizenbaum, one of AI’s early researchers:  “We can count, but we are rapidly forgetting how to say what is worth counting and why.”  

Yes, AI is powerful enough to handle large amounts of data to identify patterns, cluster themes, and summarize respondents, but it generates outputs rather than insights.  Outputs foster decisions rooted in logic and reasoning, but insights spring from judgment and context.  Outputs can provide directions and surface themes from which stories can be framed, but insights take it one step further by asking what matters and why it matters, adding depth and resonance to the story.  

In addition, Weizenbaum posits that a computer program can make decisions, but it cannot ultimately choose.  Like insight, choice requires judgment, which takes in emotions, values, and experience.

We at Cascade Strategies are among a growing number of proponents who believe AI works best as a tool and an extension of human intelligence and talent.  AI strips the friction from manual, repetitive work without compromising methodological rigor or accuracy.  Rather than adopting it for automation’s sake, we see it as a freeing and empowering agent, one that lets researchers focus on interpreting data in the context of human understanding and values, and on translating insights into sensible, confident business decisions.  Just as quantitative and qualitative research can coexist in the same study, we choose to live in a world where AI and human researchers work together toward the same goal: finding and crafting meaningful, relevant stories worth telling.

Image: Pavel Danilyuk

 

Featured Image: Ron Lach

Top Image: kc0uvb

 


A Human Center Makes Market Research All The More Powerful

jerry9789
artificial intelligence, Brand Surveys and Testing, Brandview World


The Future Of Research Is Here

You’ve seen it, and there’s no denying it: industries have been reshaped by the increasing use of Artificial Intelligence in just the last few years.  Promising and delivering speed and optimization at a fraction of the cost and resources, it’s powerful, revolutionary, and exciting.  And as with any emerging technology, it comes with its own set of anxieties.

In line with its growing popularity and adoption, people across industries have expressed nervousness about being replaced in their jobs by AI.  Repetitive, data-driven tasks are at the greatest risk of being supplanted.  However, AI also opens up opportunities to shift focus and upskill toward the more complex, creativity-driven facets of work, creating new jobs or augmenting existing ones.

The research industry is just as affected by AI’s progressive application.  It’s naive to assume that researchers will be replaced wholesale by AI; there’s more to delivering research results than gathering and crunching data.

Image: Circe Deyer

Our take on the integration of AI into research

We’ve always maintained that AI is a good advisor but a poor decision-maker.  We’d like to amend that by adding that it’s an even worse storyteller, if it can be called one at all.

Cascade Strategies has been in the market research industry for over three decades, serving some of the biggest local and international companies.  You could say we’ve seen it all in this industry, but we’re just as fascinated as everyone else by AI’s mainstream popularity over the past few years.  We’ve applied it in our methodologies and been impressed by its operational benefits and the way it has changed industries, but in the end we know that it is not the be-all and end-all of research work.  We believe AI serves us better as a powerful extension of human judgment, creativity, and insight.

AI can be fed large datasets to approximate human thinking, but we believe it can never replicate human perspicacity, the kind of intelligence honed and guided by human values and experience.  Take a look at our Expedia Group case study, where we used AI to generate multiple revenue-generating scenarios, then tempered the decision-making process with high-level human thinking to craft messaging that resonates with the end user.

AI-driven research can produce results based on what has come before, but it can never uncover the truly novel, meaningful, and resonant insights that high-level human thinking unlocks.  These are the insights that empower big, sweeping decisions.  Data-based results from AI can seem lifeless and unrelatable, but when they are imbued with human interpretation, the output is elevated into a masterful narrative that sparks imagination, questions boundaries, and transforms perspectives.

Image: geralt

Featured Image: mohamed mahmoud hassan

Top Image: geralt


AI’s Impact On Critical Thinking and Learning – What Studies Are Saying So Far

jerry9789
artificial intelligence, Burning Questions


Generative AI and Critical Thinking

In our last blog post, we touched on two studies suggesting that Generative AI is making us dumber.  One of those studies, published in the journal Societies, looked deeper into GenAI’s impact on critical thinking by surveying and interviewing over 600 UK participants of varying ages and academic backgrounds.  The study found “a significant negative correlation between frequent AI tool usage and critical thinking abilities, mediated by increased cognitive offloading.”

Cognitive offloading refers to the use of external tools and processes to simplify tasks or optimize productivity.  It has long raised concerns about the perceived decline of certain skills, in this instance the dulling of one’s critical thinking.  In fact, the study found that cognitive offloading was worse among younger participants, who showed greater reliance on AI tools and weaker command of their own critical thinking skills.

Conversely, participants with higher educational attainment showed better command of their critical thinking regardless of AI usage, putting more confidence in their own mental acuity than in AI-based outputs.  In line with our advocacy for the “appropriate use of AI,” the study emphasizes the importance of high-level human thinking over thoughtless, unmitigated adoption of AI technology.

Copyright: jambulboy

Generative AI and Learning

In truth, a number of earlier studies have shown that arbitrary adoption of AI tools can be detrimental to one’s ability to learn or develop new skills.  A 2024 Wharton study on the impact of OpenAI’s GPT-4 demonstrated that unmitigated deployment of GenAI fostered overreliance on the technology as a “crutch” and led to poor performance once such tools were taken away.  The field experiment involved 1,000 high school math students who, following a math lesson, were asked to solve a practice test.  They were divided into three groups: two had access to ChatGPT, while the third had only their class notes.  One group of students with ChatGPT performed 48 percent better than those without; however, on a follow-up exam without the aid of laptops or books, the same students scored 17 percent worse than their peers who had only their notes.

What about the second group, which had a GenAI tutor?  They not only performed 127 percent better than the group without ChatGPT access on the first exam, but they also scored close to that group on the follow-up exam.  The difference?  At some point in their interactions, the first ChatGPT group would prompt the AI to divulge the answers, growing reliant on GenAI to provide solutions instead of using their own problem-solving abilities.  The second group’s AI tutor, on the other hand, was customized to behave more like a highly effective real-world tutor: it would help by giving hints and providing feedback on the learner’s performance, but it would never directly give the answer.

Similar tests with a GenAI tutor in 2023 examined the same issues of AI dependence and the value of careful deployment of AI tools.  Khanmigo, a GenAI tutor developed by Khan Academy, was voluntarily tested by elementary school teachers in Newark, the largest public school system in New Jersey.  They came back with mixed results: some complained that the AI tutor gave away answers, occasionally incorrect ones, while others appreciated the bot’s usefulness as a “co-teacher.”

Other studies on the effectiveness of AI tutors have shown gains in learning and student engagement, and have found that GenAI can reduce the time it takes to get through learning materials compared with traditional methods.  One study extolling the benefits of GenAI tutors involved Harvard undergraduates learning physics in 2024; as with the restricted tutor in the Wharton research, the AI was prevented from directly providing answers.  It guided each student through the learning process one step at a time, providing incremental updates on the student’s progress, but never outright telling them the answer.  There is merit to the idea of Generative AI as a teaching assistant, but it serves students better when it is positioned to engage their attention and abilities rather than induce dependence on it for answers.

Copyright: Only-shot

 

Can We Use GenAI Without It Making Us Dumber?

These studies shed light on how we should approach AI solutions and development, whether the end product is deployed in learning, productivity, or other applications.  Beyond thoughtful planning of how AI tools are deployed, there should be a focus on engaging the human faculties involved, with safeguards that empower people throughout the process instead of letting the machine take it over wholesale.  AI technology is developing rapidly, but we can keep pace and remain reasonable as long as human engagement and empowerment are kept at the core of its adoption.

Amid contemporary fears that anyone could be replaced at any time by AI, these studies highlight how vital and interconnected the human factor is to the effective deployment and development of AI tools.  One could be content with the constant, consistent output AI tools generate, but progress is only possible when competent human minds shape the process and its direction.  Students can easily find answers with AI tools at their disposal, but why not advance their understanding of how solutions are formed through engaging, relatable AI-powered educational experiences?  High-level human thinking grounded in values and experience can’t be replicated by machines, and perhaps there’s no better time than now to place it at the heart of the AI revolution.

While AI development hopes that optimization and automation will free the human mind to pursue bigger and more creative endeavors, we at Cascade Strategies simply hope that humanity emerges from all these advancements more, and not less, than what it was when we entered the AI revolution.

 

 

Additional Reading:

Why AI is no substitute for human teachers – Megan Morrone, Axios

AI Tutors Can Work—With the Right Guardrails – Daniel Leonard, Edutopia  

 

 

Featured Image Copyright: jallen_RTR
Top Image Copyright: danymena88


Are We Getting Dumber Because of AI?

jerry9789
artificial intelligence, Burning Questions


Is Generative AI making us dumber?  Two recent studies suggest so.

A study published early this year, titled “AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking,” showed that growing dependence on AI could lead to a decline in critical thinking.  Conducted by Michael Gerlich of the SBS Swiss Business School, the study was based on surveys and interviews of 666 UK participants from different age groups and academic backgrounds.  The problem was more pronounced among younger participants, who demonstrated greater reliance on AI for routine tasks and scored lower on critical thinking than their older counterparts.

More recently, a study by Microsoft and Carnegie Mellon University reported similar findings: the more workers depended on AI for their work, the duller their critical thinking became.  The study surveyed 319 knowledge workers who used generative AI at least once a week and examined how and when they applied AI or their own critical skills when performing tasks.  The more faith a participant put in GenAI to produce an acceptable outcome, the less they used their critical thinking skills.  Participants with higher confidence in their own abilities than in the AI’s, on the other hand, exercised their critical thinking more, out of concern over unintended or overlooked machine output.

Copyright: Tara Winstead

What is Cognitive Offloading?

Both studies link overreliance on AI to cognitive offloading, which occurs when someone uses external tools or processes to complete tasks, reducing their engagement in deep, reflective thinking.  Yes, AI improves efficiency and saves time and money, but these studies suggest it could make humans less smart over time.

However, cognitive offloading isn’t new; it has existed in various forms throughout history, such as using a calculator instead of doing mental arithmetic, or making a grocery list instead of memorizing every item you need to buy.  It’s no surprise, then, that there are questions about the merits of the studies, such as self-reporting bias or how critical thinking was measured.  One Forbes piece suggests that AI isn’t making us dumb but lazy, while another emphasizes that for one’s critical thinking abilities to be harmed, one must have critical thinking to begin with.

Copyright: Pavel Danilyuk

Rethinking AI Development

Nevertheless, these studies contribute to the conversation about the direction of GenAI development, now with the added nuance of being mindful and respectful of human users’ intelligence and faculties.  Recommendations include rethinking AI designs and processes so that they incorporate and engage human critical thinking.  The studies are helping refocus attention on AI as a tool that augments, rather than overtakes, human capabilities.

For our part at Cascade Strategies, we’re glad that these studies have renewed awareness and appreciation of human intelligence and creativity.  Our world could easily have devolved into settling for more of the same output, so it pleases us to see more voices become advocates not only of the “appropriate use of AI” but also of high-level human thinking.

Featured Image Copyright: Alex Knight
Top Image Copyright: Tara Winstead


What It Means to Choose or Decide In The Age of AI

jerry9789
artificial intelligence, Burning Questions


Longstanding Concerns Over AI

From an open letter endorsed by tech leaders like Elon Musk and Steve Wozniak proposing a six-month pause on AI development, to Henry Kissinger co-writing a book on the pitfalls of unchecked, self-learning machines, it may come as no surprise that AI’s mainstream rise has brought its share of caution and warnings.  But these worries didn’t pop up with the sudden popularity of AI apps like ChatGPT; concerns over AI’s influence have existed for decades, expressed even by one of its early researchers, Joseph Weizenbaum.

 

ELIZA

In his book Computer Power and Human Reason: From Judgment to Calculation (1976), Weizenbaum recounted how he gradually moved from exalting the advancement of computer technology to a cautionary, philosophical outlook on machines imitating human behavior.  As encapsulated in a 1996 review of the book by Amy Stout, Weizenbaum created a natural-language processing system called ELIZA that was capable of conversing in a human-like fashion.  When psychiatrists began to consider ELIZA for human therapy, and his own secretary interacted with it too personally for his comfort, Weizenbaum began to ponder what would be lost when aspects of humanity are compromised for production and efficiency.

Copyright chenspec (Pixabay)

 

The Importance of Human Intelligence

Weizenbaum posits that human intelligence can’t simply be measured, nor can it be reduced to rationality.  Human intelligence isn’t just scientific; it is also artistic and creative.  Remarking on what a monopoly of the scientific approach would mean, he wrote: “We can count, but we are rapidly forgetting how to say what is worth counting and why.”

Weizenbaum’s ambivalence toward computer technology is further supported by the distinction he drew between deciding and choosing: a computer can make decisions based on its calculations and programming, but it cannot ultimately choose, since choosing requires judgment capable of factoring in emotions, values, and experience.  Choice is fundamentally a human quality.  Thus, we shouldn’t leave the most important decisions to be made for us by machines; rather, we should resolve matters from a perspective of choice and human understanding.

 

AI and Human Intelligence in Market Research

In the field of market research, AI is used to analyze vast amounts of data and produce accurate, actionable results and insights.  One example is deep learning models, which, as Health IT Analytics explains, filter data through a cascade of multiple layers.  Each successive layer improves its result by using, or “learning” from, the output of the previous one.  This means the more data a deep learning model processes, the more accurate its results become, thanks to the continuing refinement of its ability to correlate and connect information.
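That layered “cascade” can be illustrated in a few lines of NumPy: each layer transforms the previous layer’s output before passing it on.  The layer sizes and random weights below are purely illustrative; a real model would learn its weights from training data.

```python
import numpy as np

def relu(x):
    """Simple nonlinearity applied after each layer."""
    return np.maximum(0.0, x)

def forward(x, layers):
    """Pass data through the cascade: each layer consumes the previous output."""
    for w, b in layers:
        x = relu(x @ w + b)
    return x

# Hypothetical three-layer cascade: 10 input features -> 16 -> 8 -> 4 outputs
rng = np.random.default_rng(0)
sizes = [10, 16, 8, 4]
layers = [(rng.standard_normal((a, b)) * 0.1, np.zeros(b))
          for a, b in zip(sizes, sizes[1:])]

batch = rng.standard_normal((32, 10))   # 32 synthetic data records
out = forward(batch, layers)            # final-layer representation of each record
```

Training would adjust the weights in `layers` so that each stage extracts progressively more useful correlations, which is the refinement process described above.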

 

While you can depend on the accuracy of AI-generated results, Cascade Strategies takes it one step further by applying a high level of human thinking.  This allows us to interpret and unravel insights a machine would otherwise miss, because it can only decide, not choose.

Take a look at the market research project we performed for HP to help create a new marketing campaign.  As part of our efforts, we employed very perceptive researchers to spend time with HP engineers worldwide, as well as engineers from other companies.

 

Our researchers discovered that HP engineers showed greater qualities of “mentorship” than other engineers.  Yes, conducting their own technical work was important, but just as significant was the opportunity to impart to others, especially younger people, what they were doing and why it mattered.  This deeper understanding paved the way for a different approach to expressing the meaning of the HP brand, and ultimately resulted in the award-winning and profitable “Mentor” campaign.

 

If you’re tired of the hype about AI-generated market research results and would like more thoughtful, original solutions for your brand, choose the high level of intuitive, interpretive, synthesis-building thinking Cascade Strategies brings to the table.  Please visit https://cascadestrategies.com/ to learn more about Cascade Strategies and see more examples of our better thinking for clients.


Welcome
to Cascade Strategies

A highly innovative, award-winning market research and consulting firm with over 31 years’ experience in the field. Cascade provides consistent excellence in not only the traditional methodologies such as mobile surveys and focus groups, but also in cutting-edge disciplines like Predictive Analytics, Deep Learning, Neuroscience, Biometrics, Eye Tracking, Virtual Reality, and Gamification.