What Large Language Models could mean for market research

Siri on steroids or the future of the industry? We discuss the potential applications and limitations of Large Language Models.
31 March 2023
Jane Ostler, EVP Global Thought Leadership, Kantar

Ashok Kalidas, Global Head of Data Science and Innovation, Global Analytics


Unless, like Jared Leto, you were at a silent retreat and emerged not realising there was a global pandemic, you will be aware that ChatGPT, Bard and all their cousins are the centre of attention right now. Even Bill Gates has declared that ‘the age of AI has begun’.

But before diving into all the possible radical applications of the technology for the market research industry, let’s start at the beginning: What are Large Language Models? Simply put, Large Language Models (LLMs) are designed to predict the next word or phrase in a sequence. And with lots of exposure to large datasets, these models can learn statistical relationships between words through their co-occurrences. As an example:

What type of milk would you like in your coffee? We have oat, almond, soy and cow.
I just fancied a nice cup of milky coffee.

The words ‘milk’ and ‘coffee’ are deemed to be semantically related because they tend to be neighbours. This ‘understanding’ of relationships at a massive scale allows the models to solve tasks at a seemingly surreal level of proficiency – while dropping jaws and prompting non-stop hype about industry disruption on LinkedIn.
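To make that intuition tangible, here is a minimal Python sketch that counts which words sit near ‘coffee’ in a tiny invented corpus. It illustrates the co-occurrence principle only; real LLMs learn from billions of words with far richer statistics than a simple counting window.

```python
from collections import Counter

# A toy corpus echoing the example above (invented for illustration); a real
# model learns from billions of words, not a handful of sentences.
corpus = (
    "what type of milk would you like in your coffee "
    "we have oat almond soy and cow "
    "i just fancied a nice cup of milky coffee "
    "she takes her coffee with oat milk every morning"
).split()

WINDOW = 6  # words this close to each other count as 'neighbours'

# Count which words turn up near 'coffee' anywhere in the corpus.
neighbours = Counter()
for i, word in enumerate(corpus):
    if word == "coffee":
        for j in range(max(0, i - WINDOW), min(len(corpus), i + WINDOW + 1)):
            if j != i:
                neighbours[corpus[j]] += 1

for w in ("oat", "milk", "milky", "soy", "morning"):
    print(w, neighbours[w])  # oat 3, milk 2, milky 2, soy 1, morning 1
```

Even in this toy example, the milk words cluster around ‘coffee’; statistical association, not understanding, is doing the work.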

However fascinating, this is not really understanding; it’s statistical association. LLMs are not yet sentient beings, and we have not yet arrived at Artificial General Intelligence – the time when a machine will be able to understand or learn intellectual tasks as a human would.

So what are the potential use cases for the market research industry?

There are many exciting and progressive use cases for the market research industry, some of which we already use, and others that we are actively exploring at Kantar. Here are just a few examples:

  • Summarisation: market research collects a lot of data in the form of words – survey verbatims, qualitative interviews, and focus groups. LLMs could summarise, order and prioritise responses, expediting the researcher's work when creating a narrative for the client.
  • Automated reporting: market research also produces large volumes of quantitative data that need sorting, summarising, and presenting. LLMs could quickly organise this output and create draft headlines based on charts, tables and models, as well as executive summaries.
  • Topic/theme identification: using different attitudinal datasets or open APIs to digital platforms, LLMs could identify themes, assess sentiment, affinity and brand perceptions for the researcher to refine.
  • Prediction: LLMs could extract embeddings (mathematical representations) that other machine learning models can use to predict some outcome of interest; a minimal sketch of this pattern follows this list. For instance, does the dialogue in a TV ad help predict its performance? How can we relate people’s qualitative experience interacting with a service representative to their brand loyalty or churn?
  • Intelligent interviewing: already in use by the industry, conversational AI will come on in leaps and bounds, responding to previous answers and routing questions accordingly. And designing quant questionnaires will never be the same again: the machine can help automate and standardise the process!
  • Text data cleaning: cleaning is a large part of the operational process – LLMs could check for gibberish and spelling errors, much better than autocorrect ever did!
  • Creative writing: this could be anything from creating discussion guides, initial drafts of presentations, marketing copy, and concept statements to [insert your wild idea here].
  • Conversational search queries: think of an ‘intelligent agent’ that sits on top of data platforms, which you can query in natural human language. The agent then analyses potentially massive databases ‘underneath’ and fetches back the results in natural language. Siri on steroids!
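To make the prediction use case above concrete, here is a minimal sketch of the general pattern: text in, embeddings out, embeddings into an ordinary classifier. The package choices (sentence-transformers, scikit-learn) and the toy verbatims are our own illustrative assumptions, not a description of any production pipeline.

```python
# Illustrative only: toy data, and the package choices (sentence-transformers,
# scikit-learn) are assumptions, not a description of any production pipeline.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

# Hypothetical verbatims from service interactions, with churn labels (1 = churned).
verbatims = [
    "The agent sorted my billing issue in two minutes, brilliant service",
    "Third call this month and still no resolution, I'm done with this brand",
    "Friendly rep, explained the new tariff clearly",
    "Kept me on hold for an hour and then cut me off",
]
churned = [0, 1, 0, 1]

# Step 1: turn each verbatim into an embedding (a vector of numbers).
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(verbatims)

# Step 2: feed the embeddings to an ordinary classifier to predict churn.
clf = LogisticRegression(max_iter=1000).fit(embeddings, churned)

new_comment = ["Nobody ever answers the phone, I'm switching provider"]
print(clf.predict_proba(model.encode(new_comment)))  # probabilities for [stay, churn]
```

The same pattern applies whether the text is a tweet, an ad script or an interview transcript; the language model supplies the representation, and a conventional model does the predicting.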

So what are the risks of LLMs?

There are quite a few known risks that we see in the market research industry. One is that the model starts making things up or ‘hallucinating’. We’ve seen this with time series examples where the previous version of ChatGPT gave wrong answers because its training data only ran to 2021. LLMs also have no ‘knowledge of knowledge’, so there is no such thing as a confidence level. They have no real notion of time or temporality, nor of maths, which is rule-based, so their interpretation of data is currently limited to what they can discern through generic correlations or associations.

There are also obvious legal and ethical issues. Intellectual property, for example: is this a creative act by the LLM, or is it re-hashing someone else’s IP? Does sharing your own data on the open web mean you give permission for it to be used by LLMs? And finally, the quality of the datasets the models use could easily reinforce biases and stereotypes without the model ‘knowing’.

Our conclusion

Large Language Models offer immense potential to the market research industry. They could disrupt roles and responsibilities while speeding up some processes, enhancing others, and creating new opportunities. But market research and data organisations will need to be assured of their position on the risks before commissioning large-scale projects.

We predict there will be three types of use cases for LLMs in market research:

  1. To make some things more efficient: for example, no more manual coding of open ends.
  2. To do some things better: for example, the ability to process a million tweets, extract emotions and predict churn; a human can't do that.
  3. To create new opportunities: for example, ask a machine to create 10 versions of a concept, use another machine to evaluate each, and pick the best.

At Kantar, we have a rich history of using language models over the past 10 years across our entire business, and more generally using Machine Learning and AI to enhance many of our products and solutions, including our ad screening solution Link AI. Link AI has a solid foundation of training data of more than 250,000 ads tested with humans. We are also running workstream pilots across new use cases using the latest generative AI models and exploring scalable LLM opportunities with partners. Exciting times. Get in touch to discuss how LLMs could shape the future of the market research industry.

A brief history of language models (Sorry, Hawking!)

We know you know, but in case you don’t really know, parameters are the ‘moving parts’ inside a model – the more parameters a model has, the more complex it is and the more data it requires. But how much? That has changed exponentially in the last decade and will change again. You have our word!

Baby steps or phase 1 (circa 2013) – Word embeddings emerge. Machines could now, for the first time, represent each word in the English language as a collection of numbers! And these numbers appeared to capture ‘meaning’ – the machine would give similar embeddings for the words ‘king’ and ‘queen’, for example, that would be distinct from the embedding for the word ‘bank’. The early word embeddings relied on simple model architectures with a relatively modest number of ‘parameters’, in the region of 30 million to 100 million. While these required further modelling and tuning for use on specific tasks, they revolutionised the field of Text Analytics.
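As a small illustration of what those phase-1 ‘static’ embeddings give you, the sketch below uses the gensim package and a small pre-trained GloVe model, one classic embedding model of that era; the package and model choice are ours, purely for demonstration.

```python
# A minimal sketch of phase-1 'static' word embeddings, assuming the gensim
# package and a small pre-trained GloVe model (a classic model of that era).
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # downloads ~66MB on first use

# Each word is just a fixed list of 50 numbers...
print(vectors["king"][:5])

# ...yet the geometry captures a rough notion of 'meaning': 'king' sits much
# closer to 'queen' than it does to 'bank'.
print(vectors.similarity("king", "queen"))
print(vectors.similarity("king", "bank"))

# The catch phase 2 addressed: 'bank' gets a single vector whether it means a
# riverbank or a high-street bank, because context is ignored.
```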

It’s all about context or phase 2 (2014 – 2018) – A problem with early word embeddings was that they represent words without considering ‘context’ – the same word can have different meanings depending on the other words in the sentence. Contextual embeddings appeared on the scene circa 2018 and can process long strings of text as sequences of words. These models use a larger range of parameters, between 100 million and 300 million – and, you guessed it, these still require some fine-tuning before they can be used on specific tasks.

Ready to rock or phase 3 – Today, we see Large Foundation Models trained on huge datasets at scale – including GPT-4, ChatGPT, Bard and their future cousins – operating on over 175 billion parameters. And yes, this is the real deal! All these models are ‘plug and play’; they need little or no additional training to perform specific tasks.
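To illustrate what ‘plug and play’ means in practice, here is a minimal sketch of zero-shot theme tagging. It uses the Hugging Face transformers library and a much smaller open model as a stand-in for the hosted giants named above, and the verbatim and candidate themes are invented for the example.

```python
# A small 'plug and play' illustration: zero-shot classification with the
# Hugging Face transformers pipeline. The model is a stand-in for the larger
# hosted LLMs, and the verbatim and themes below are invented for the example.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

verbatim = "The checkout kept crashing, but the delivery arrived a day early."
themes = ["website experience", "delivery", "price", "customer service"]

# No task-specific training: the model scores each candidate theme directly.
result = classifier(verbatim, candidate_labels=themes, multi_label=True)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")
```

The contrast with the earlier phases is the point: no labelled training data, no fine-tuning, just a task described at run time.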
