
Brand Health Tracking with LLM Equity (Part 1)

jerry9789
0 comments
artificial intelligence, Brand Surveys and Testing, Brandview World


AI Is Disrupting The Shopper’s Experience

There’s a paradigm shift underway in the shopping process, and AI is the driving force behind it.  Shoppers are no longer just searching online or scrolling through websites; they now use AI platforms to discover, compare, and even buy products on their behalf.

Generative engine optimization (GEO) firm The Rank Collective’s analysis of cross-platform AI visibility data revealed that 64% of consumers now use AI tools to discover and learn about new products, a share that rises to 66% among frequent online shoppers.  ChatGPT serves as a starting point for 34% of these high-intent users.

Another study, based on two multi-market surveys of 5,000 consumers aged 18-67 across the US, UK, Canada, and Australia, reported that 41% of consumers trust Gen AI search results more than paid search results.  That same study, the 2025 Consumer Adoption of AI Report, also found that only 15% trust AI less than search ads.

Additionally, Adyen’s Retail Report shared that 51% of shoppers are open to AI making purchases on their behalf.  It also noted that the share of US shoppers using AI assistants rose from 12% to 35%.  Encouraged by these figures, 88% of retailers are considering adopting AI to handle the entire shopping process on the shopper’s behalf, with 56% of them prioritizing the technology for 2026.

Image: Google DeepMind

LLM Equity and Brand Building

AI has opened up a new world of fast, frictionless shopping experiences.  While adoption is still in its early stages, companies have begun exploring this new space to understand the challenges they would need to address in order to compete and thrive.

Perhaps a good starting point is understanding Large Language Model (LLM) equity.  LLM equity generally refers to ensuring that AI models are fair, unbiased, and accessible across diverse populations, preventing the reinforcement of existing disparities.  It requires addressing algorithmic bias in training data, particularly around race, gender, and socioeconomic status, and especially in fields like healthcare.  It is also concerned with expanding access while maintaining performance in non-English languages and low-resource settings.

For brand building, LLM equity is more concerned with whether your brand shows up in Gen AI search results and how it is represented.  What theme or themes does your brand represent?  Are those themes coherently represented in your social media?  Is your current brand representation connecting and engaging with your audience?  Is that connection strong enough not only to move consumers to purchase your product but also to engage with your content?  Is your brand content strong enough to capture the interest of prospective consumers and be remembered by them?

In other words, understanding LLM equity in brand building is understanding and tracking your brand health.
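At its simplest, tracking whether and how your brand shows up in Gen AI answers can start with counting mentions and themes across a sample of responses.  The sketch below is purely illustrative: the brand names, themes, and answer texts are all hypothetical, and a real tracker would query live AI platforms rather than a fixed list of strings.

```python
from collections import Counter

def brand_visibility(responses, brand, themes):
    """Count how often a brand and its intended themes appear in AI answers."""
    counts = Counter()
    for text in responses:
        lowered = text.lower()
        if brand.lower() in lowered:
            counts["brand_mentions"] += 1
        for theme in themes:
            if theme.lower() in lowered:
                counts[theme] += 1
    return counts

# Hypothetical AI answers to "What running shoes should I buy?"
answers = [
    "Acme shoes are known for durability and comfort.",
    "For comfort on long runs, many pick Acme or RivalCo.",
    "RivalCo focuses on lightweight racing shoes.",
]
stats = brand_visibility(answers, "Acme", ["durability", "comfort"])
print(stats["brand_mentions"])  # 2
```

Tracked over time, even counts this simple become a crude brand-health signal: rising theme counts suggest your intended positioning is surfacing in AI answers, while flat brand mentions suggest it is not.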

Image: TyliJura

Featured Image: Shoper.pl

Top Image: Nataliya Vaitkevich


Can AI Replace Human Respondents In Qualitative Research?

jerry9789
0 comments
artificial intelligence, Brand Surveys and Testing, Brandview World, Burning Questions


Like most industries these days, market research is no stranger to AI and its broad applications, including the use of synthetic respondents: individual profiles constructed by Large Language Models (LLMs) from real or simulated data.  They offer fast, cheap, and scalable synthetic data that closely mimics how human participants would respond, a boon for quantitative research.  But can synthetic respondents be just as effective in qualitative research?  Can AI-powered profiles fully take over the role of human respondents in market research?

Image: Diana

Synthetic Respondents and Qualitative Research

L&E Research recently hosted a webinar sharing their findings from testing synthetic respondents across a variety of qualitative research tasks.  They shared that AI characteristically produces quick, structured, and consistent surface-level insights.  It does well at detecting macro trends in usage or preferences, screening concepts when you need to compare multiple ideas at scale, and spotting issues in survey testing.  It is also capable of gap-filling, simulating missing segments from known data, and bulk analysis for quickly summarizing large sets of open-ends.
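The bulk-analysis use case can be illustrated with a toy sketch.  Here simple keyword tallying stands in for the LLM summarization an actual tool would perform, and every response and theme below is hypothetical:

```python
from collections import Counter
import re

def theme_open_ends(responses, themes):
    """Tally how many open-ended answers touch each candidate theme.

    `themes` maps a theme name to a set of keywords that signal it.
    """
    tallies = Counter()
    for text in responses:
        words = set(re.findall(r"[a-z']+", text.lower()))
        for theme, keywords in themes.items():
            if words & keywords:  # any keyword present counts the answer once
                tallies[theme] += 1
    return tallies

# Hypothetical open-ended survey responses
responses = [
    "The price was too high for what you get.",
    "Loved the design, but it costs too much.",
    "Great design and easy to use.",
]
themes = {
    "cost": {"price", "costs", "expensive"},
    "design": {"design", "look", "style"},
}
tallies = theme_open_ends(responses, themes)
print(tallies)
```

An LLM-based analyzer would infer themes rather than match keywords, but the output shape, counts of responses per theme, is essentially what these bulk-analysis tools deliver.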

The key takeaway L&E found is that AI can describe what people do, but it falls short of explaining why they do it.  AI fundamentally excels at following patterns, but it struggles to uncover the emotional drivers and motivations behind certain responses.  AI can match logic, but it cannot fill in tone, nuance, or context the way human insight and experience can.

Most AI models are also built on public data and may have no way of knowing how real people would respond to certain questions.  When the engineers tried to nudge AI agents toward how real participants would respond, the agents rejected the suggestion and stood firmly by the perspective formed from the vastness of public data.

Additionally, AI can be absolutely and confidently wrong.  Synthetic data can look convincingly human, but since AI relies on patterns instead of experience, its air of confidence doesn’t guarantee accuracy.

Of course, the hosts added a disclaimer that this is where synthetic respondents stand right now; no one can say how different things may look in the years to come.  But the continued use of AI in market research, or any other industry for that matter, is inevitable thanks to the operational and executional efficiency it grants, and that is reason enough to continue studying and developing synthetic respondents.

Image: Ron Lach

Why The Human Factor Matters

In market research, emotions matter and context counts.  AI can be a powerful partner, but it is no replacement for lived insight or validation.  Human researchers will simply remain essential.

AI’s inherent structure and consistency reflect its pursuit of perfection; humans, however, are neither perfect nor simple.  Humans are emotional and often irrational.  An AI participant responds based on its best approximation of how a human being would, but the synthetic logic behind that response is narrower and more consistent, discounting the fact that humans are imperfect.

Humans also bring incredible complexity and a broader range of perception to the table.  We can contradict ourselves, and that is natural.  One human participant’s perceptions and experiences can make their responses differ from the next person’s, while synthetic data remains uniformly shaped by congruence and invariability, no matter how much effort goes into making AI mimic humanlike responses.

The complexity, variability, and randomness of human nature are desirable in qualitative research.  The engineers recognized this and cautioned against overly guiding or influencing randomness in AI, warning that doing so “will hard-code your picture of randomness to the point where it is no longer random.”

AI can quickly give you bulk analysis, but you might not want to rush it to your stakeholders, as they will question and challenge the quality and reliability of synthetic data.  Human insight remains vital and irreplaceable when it comes to trust, nuance, and real-world complexity in market research.

Image: Kathrine Birch

The Hybrid Approach

At the end of it all, the hosts made the point that the webinar wasn’t meant to scare people away from synthetic data but rather to start a valid conversation about when it makes sense to use AI-generated personas and when to steer clear of them.  In fact, they recommended a hybrid approach of employing virtual respondents alongside recruited human participants, striking a delicate balance between synthesis and empathy.

Synthetic data is great during the early exploratory stages of market research, when you want an initial pulse check that is quick and good enough before getting people involved.  But once you need to uncover the emotional drivers behind responses and decisions, understand or predict behaviors, or simply gain more confidence and trust in your findings, that’s when you bring in your human respondents.

This all aligns not only with the recent trend of companies coming back around from the AI hype of the last few years but also with our stance on the appropriate use of AI, where we advocate for the responsible and ethical use of artificial intelligence.  Instead of handing AI complete reins over all aspects of a business (or in this case, all stages of research work), we at Cascade Strategies encourage the thoughtful and practical application of artificial intelligence in combination with, or enhanced by, human experience, values, and discretion.

To find out how our brand of inspired and enlightened human thinking can help you with your market research needs, please contact us here.

Additional Reading:

Can Synthetic Respondents Take Over Surveys?

Featured Image: Darlene Anderson
Top Image: Michelangelo Buonarroti


Can Synthetic Respondents Take Over Surveys?

jerry9789
0 comments
artificial intelligence, Burning Questions


What Are Synthetic Respondents?

AI has increased operational efficiency by streamlining knowledge bases and shortcutting processes, so it’s no surprise that people and companies are looking for more ways to apply it.  For market research, one curious consideration is whether it could take over surveys, essentially replacing actual respondents with synthetic respondents.

Also known as virtual respondents, digital personas, and virtual audiences, synthetic respondents are individual profiles constructed by Large Language Models (LLMs) from real or simulated data.  Ideally, the data or descriptions used to generate these profiles come from previously conducted surveys, combined with individual-level demographics, attitudes, and behaviors.

Using synthetic respondents instead of real respondents could benefit your research with speed, accuracy, and cost savings, at least according to their advocates.  Essentially, you conduct one survey, and from the profile descriptions and data gathered from the actual respondents, you can generate results from the constructed individuals over and over for subsequent studies.
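In practice, constructing a synthetic respondent usually means turning an individual profile into a system prompt that an LLM then role-plays.  The sketch below shows one way that assembly might look; the field names, profile values, and prompt wording are all hypothetical, not any vendor’s actual format:

```python
def build_persona_prompt(profile):
    """Assemble a system prompt for an LLM to role-play a synthetic respondent.

    The profile fields are illustrative; in practice they would come from
    individual-level survey data (demographics, attitudes, behaviors).
    """
    return (
        f"You are a {profile['age']}-year-old {profile['occupation']} "
        f"living in {profile['location']}. "
        f"Attitudes: {'; '.join(profile['attitudes'])}. "
        f"Behaviors: {'; '.join(profile['behaviors'])}. "
        "Answer survey questions in character, consistently with this profile."
    )

# A hypothetical profile derived from a prior survey
profile = {
    "age": 34,
    "occupation": "teacher",
    "location": "Seattle",
    "attitudes": ["price-conscious", "skeptical of ads"],
    "behaviors": ["shops online weekly"],
}
prompt = build_persona_prompt(profile)
print(prompt)
```

Each persona prompt would then be paired with the survey questions and sent to an LLM, with the model’s in-character answers collected as the synthetic data.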

Testing Synthetic Respondents

While synthetic respondents can closely approximate real respondents, relying exclusively on the results from these AI-based individuals may not be entirely beneficial.  A webinar hosted by Radius Global took a closer look at the potential of AI-generated synthetic respondents through three case studies: quantitative concept testing, quantitative communications research, and qualitative communications research.

Aggregate results for the concept tests, which involved game controllers, indicated fairly strong similarities between the real and synthetic respondents.  The same held for the quantitative communications research on the believability of statements about the benefits of milk, although there were some differences.  The differences were much more pronounced, though, for surprise over the same statements, and the two groups diverged on how each statement might increase milk consumption.
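A comparison like this typically boils down to putting the real and synthetic aggregate scores side by side and measuring how closely they track.  The sketch below uses hypothetical scores (the numbers are illustrative, not from the webinar) and two common yardsticks, mean absolute difference and Pearson correlation:

```python
def compare_aggregates(real, synthetic):
    """Compare real vs. synthetic aggregate scores question by question.

    Returns the mean absolute difference and the Pearson correlation.
    Inputs are parallel lists of percentages (e.g., top-box agreement).
    """
    n = len(real)
    mad = sum(abs(r - s) for r, s in zip(real, synthetic)) / n
    mean_r = sum(real) / n
    mean_s = sum(synthetic) / n
    cov = sum((r - mean_r) * (s - mean_s) for r, s in zip(real, synthetic))
    sd_r = sum((r - mean_r) ** 2 for r in real) ** 0.5
    sd_s = sum((s - mean_s) ** 2 for s in synthetic) ** 0.5
    corr = cov / (sd_r * sd_s)
    return mad, corr

# Hypothetical believability scores for five statements
real = [72, 65, 58, 80, 47]
synthetic = [70, 68, 55, 78, 52]
mad, corr = compare_aggregates(real, synthetic)
```

A high correlation with a small mean absolute difference would correspond to the “somewhat strong similarities” described above, while divergence on particular questions (as with surprise) shows up as large per-question gaps even when the overall correlation stays high.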

The qualitative communications research sought in-depth insights into women’s needs, perceptions, and preferences around running a race or marathon, with the feedback intended to inform creative content.  Personas were constructed from the profiles of six women aged 18 to 64 who ran at least once in an average week.  An LLM assumed each persona, allowing a comparison between findings from real participants and synthetic respondents.

They found that while both real and synthetic respondents gave somewhat similar responses on functional aspects, such as general fitness goals for women, the AI responses lacked emotional expression.  The synthetic respondents’ answers also varied little across the different profiles, lacking even subtle distinctions.

As for the concerns of women who aspire to run marathons, the synthetic personas were uniform in their responses, while the real respondents provided more nuance, variety, and perspective.

Synthetic Respondents vs. Real Respondents

Synthetic respondents appear useful for evaluating existing ideas and concepts.  However, if you’re looking for “breakthroughs,” essentially new insights you would never have arrived at without performing the research, you need to engage real respondents, relying exclusively on their results or combining them with those of synthetic respondents.  Yes, there could be cases where synthetic respondents are appropriate, but the results must be extensively validated.  It would also require more efficient analysis of the data used to construct these individuals, as well as higher-quality profile data gathered through thorough screening, intelligent probing, and smart choice models.

There is a place for synthetic respondents in market research, but as another tool in a researcher’s toolbox.  They won’t be taking over surveys or replacing actual respondents wholesale anytime soon, it seems, as the elusive “Eureka” moment researchers seek is inherently tied to the nuances and perspectives of human emotion and experience, which simply can’t be constructed.

Photo courtesy of Pavel Danilyuk


“Distillation” Is Shaking Up The AI Industry

jerry9789
0 comments
artificial intelligence, Brandview World



Paradigm Shift

We’ve recently written about how AI advancements and popularity, particularly generative AI like ChatGPT, are driving renewed demand for data centers not seen in decades.  This surging demand pushed tech investors to put $39.6 billion into data center development in 2024, twelve times the amount invested in 2016.

A recent development, however, has stirred things up, especially around the assumption that billions of dollars must be spent for AI advancement.  DeepSeek, an open-source large language model developed by a Chinese AI research lab, was released and performed on par with OpenAI’s models, yet it reportedly operates for just a fraction of the cost of Western AI models.  OpenAI, however, is investigating whether DeepSeek used distillation of OpenAI’s models to develop its systems.

Copyright: cottonbro studio

What Is “Distillation?”

According to Labelbox, model distillation (or knowledge distillation) is a machine learning technique that transfers knowledge from a large model to a smaller one.  Distillation bridges the gap between computational demand and the cost of training large models while maintaining performance.  Basically, the large model learns from an enormous amount of raw data, typically over months of training at great expense, then passes that knowledge on to a smaller counterpart primed for real-world application and production at a fraction of the time and cost.
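The core mechanic behind the classic form of knowledge distillation is simple enough to sketch: instead of training the small model on hard labels, you train it to match the teacher’s temperature-softened output distribution, which carries more information about how the teacher relates the classes.  The minimal example below illustrates that softening step with made-up numbers; it is a sketch of the general technique, not of DeepSeek’s or OpenAI’s actual pipelines:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax: higher T yields softer distributions."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(p_teacher, p_student):
    """Loss the student minimizes to match the teacher's soft targets."""
    return -sum(t * math.log(s) for t, s in zip(p_teacher, p_student))

# Illustrative teacher logits for one training example
teacher_logits = [4.0, 1.0, 0.2]

hard = softmax(teacher_logits, temperature=1.0)  # near one-hot
soft = softmax(teacher_logits, temperature=4.0)  # softer targets for the student
```

At temperature 1 the teacher’s distribution is nearly one-hot; at a higher temperature the probability mass spreads across the other classes, and it is these softened targets the student is trained against.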

Distillation has been around for some time and has been used by other AI developers, but not to the same degree of success as DeepSeek.  The Chinese AI developer said that aside from its own models, it also distilled from open-source AIs released by Meta Platforms and Alibaba.

However, OpenAI’s terms of service prohibit the use of its models to develop competing applications.  OpenAI has banned accounts suspected of distillation during its investigation, and US President Donald Trump’s AI czar David Sacks has called out DeepSeek for distilling from OpenAI models.  Sacks added that US AI companies should take measures to protect their models or make them difficult to distill.

Copyright: Darlene Anderson

How Does Distillation Affect AI Investments?

On the back of DeepSeek’s success, distillation might give tech giants cause to reexamine their business models and investors reason to question the dollars they put into AI advancements.  Is it worth being a pioneer or industry leader when the same efforts can be replicated by smaller rivals at less cost?  Can an advantage still exist for tech companies that ask for huge investments to blaze a trail when others are quick to follow and build upon the leader’s achievements?

A recent Wall Street Journal article notes that tech executives expect distillation to produce more high-quality models.  The same article mentions Anthropic CEO Dario Amodei blogging that DeepSeek’s R1 model “is not a unique breakthrough or something that fundamentally changes the economics” of advanced AI systems.  This is an expected development as the costs for AI operations continue to fall and models move towards being more open-source.  

Perhaps that’s where the advantage for tech leaders and investors lies: the opportunity to break new ground, and the understanding that you’re seeking answers in unexplored spaces while the rest iterate within the same technological confines.  Established tech giants continue to enjoy the prestige of their AI models being more widely used in Silicon Valley, despite DeepSeek’s economic advantage, and the expectation of being first to bring new advancements and developments to the digital world.

And maybe, just maybe, in that space between the pursuit of new AI breakthroughs and lower-cost AI models lie solutions to help meet the increasing demand for data centers and computing power.   

Copyright: panumas nikhomkhai
Featured Image Copyright: Matheus Bertelli
Top Image Copyright: Airam Dato-on


Welcome
to Cascade Strategies

A highly innovative, award-winning market research and consulting firm with over 31 years’ experience in the field. Cascade provides consistent excellence in not only the traditional methodologies such as mobile surveys and focus groups, but also in cutting-edge disciplines like Predictive Analytics, Deep Learning, Neuroscience, Biometrics, Eye Tracking, Virtual Reality, and Gamification.