Cambri's answers to ESOMAR's 22 questions to help buyers of AI based services for market research and insights

Feb 21, 2024


ESOMAR has developed a guide to help buyers of AI-based services for market research and insights. It lists 22 questions that you, as a buyer of these services, should ask and that your AI solution supplier should be able to answer. We see this as a very valuable and much-needed initiative from ESOMAR. Below you can find our answers to these 22 questions.

The credentials of the organisation and an overview of what’s on offer.

  1. What experience does your company have in providing AI based solutions for market research? Do you/your organisation and staff have training or qualifications in this field, or relevant adjacent fields?
    • AI has been at the core of Cambri's product strategy since the beginning. In 2021 we launched the first version of our proprietary NLP solution, designed for the context of consumer products and brands. Since then we have been expanding the AI feature set (called Launch AI) that aims to increase the innovation success of consumer brands.
    • Our product and data science teams include data scientists, machine learning experts, and consumer behavior experts with extensive practical experience and academic merits (PhDs, published scientific articles) in the domain.


  2. Where do you think your AI based services can have a positive impact on the research process? What benefits does it bring and what problems does it address?
    • Our mission is to help consumer brands to bring new products successfully to market.
    • Our AI feature set has had a huge impact on how powerfully we can execute this mission, i.e. help our clients eliminate waste in both the innovation process and its outcomes. We have evidence that our Launch AI feature set helps increase product launch success from the typical 5–25% up to 70%. We divide the Launch AI feature set into two parts: Predict and Advice.
    • Predict: Our Launch AI model (a machine learning model using both pre- and post-launch data) forecasts the launch success likelihood of Cambri-tested concepts. The output is a Launch AI Score, which gives innovation teams an easy and accurate answer: will the tested product concept be a launch success or not, and can we safely move forward?
    • Advice complements the Launch AI Score. AI summaries and Launch AI Score drivers give a clear direction and illuminate in depth what works and what requires improvement in the tested concept. No more manual labelling or analysis of open-ended answers. Additionally, new AI-generated value propositions feed the creative minds of product innovation teams with new, powerful innovation routes to explore further.


  3. What practical problems and issues have you encountered in the use and deployment of AI? What has worked well and how, and what has worked less well and why?
    • Developing AI solutions is a complex journey that requires a clear business problem to be solved, a relevant and coherent dataset, a multi-disciplinary team, and a modern technology infrastructure that allows rapid experimentation and efficient deployment.
    • It is important to have a bold vision but to start to experiment with smaller problems and hypotheses.
    • Models need to be reviewed and updated constantly. If a model is not updated, it will degrade over time and become stale. This further reinforces the need for good processes and infrastructure, so that retraining can be done efficiently and effectively.
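The model-freshness point above can be made concrete with a small monitoring sketch. This is an illustrative example only (the threshold, window size, and function name are our own assumptions, not Cambri's documented process): track a rolling accuracy metric on fresh labelled data and flag the model for retraining when it drops below an agreed floor.

```python
# Minimal sketch of model-freshness monitoring (hypothetical thresholds).
# We watch an accuracy metric over successive evaluation rounds and flag
# the model for retraining when its recent average falls below a floor.

def needs_retraining(recent_accuracies, floor=0.80, window=3):
    """Flag retraining when the mean accuracy over the last `window`
    evaluation rounds falls below `floor`."""
    if len(recent_accuracies) < window:
        return False  # not enough evidence yet
    recent = recent_accuracies[-window:]
    return sum(recent) / len(recent) < floor

# Example: accuracy degrading over successive evaluation rounds
history = [0.88, 0.86, 0.83, 0.79, 0.76]
print(needs_retraining(history))  # True: last three rounds average below 0.80
```

A real pipeline would hang an automated retraining job off this signal, which is why the surrounding process and infrastructure matter as much as the model itself.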


Is the AI Capability/Service Transparent, Fit for Purpose, Trustworthy and Ethical?


  4. Can you explain the role of AI in your service offer in a way that can be easily understood by researchers and stakeholders? What are the key functionalities?
    • Our Launch AI feature set helps consumer brands to increase their innovation success.
    • Launch AI Score provides a clear and accurate answer: will the tested concept be a success or not, and thus is it safe to move forward in the innovation process.
    • Launch AI Score drivers provide clear directions and in-depth guidance about what works in the concept and what doesn’t.
    • New AI-generated value propositions improve the tested value propositions based on consumer feedback. We have proof that they resonate better amongst the target audience than the original ones.
    • Our proprietary NLP provides an in-depth and comprehensive understanding of consumer sentiment at the question level, as well as a deep dive into content analysis topics. Our NLP is designed for the context of consumer products and brands, using learnings from academia on how consumers make choices and use products and brands to create value for themselves.
    • Even though we have been working with AI for years, we consider ourselves to be at the beginning of our AI journey. As we invest heavily in our AI capabilities, our AI feature set evolves continuously.

  5. What is the specific model used? Are your company’s AI solutions primarily developed internally or do they integrate an existing AI system and/or involve a third party; if so which?
    • We use different types of AI models in our Launch AI feature set, from classical machine learning models to generative AI.
    • We have both internally developed models and models that involve third-party partners.

  6. How do the algorithms deployed deliver the desired results? Can you summarise the underlying data used to power your AI service, whether client data is also used in the training data and if they can opt out of this?
    • Our Launch AI model uses both survey data as well as launch success data (in-market performance data) when predicting the launch success likelihood of Cambri-tested concepts.
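As a purely hypothetical illustration of combining survey data with in-market performance data (the feature names, sales figures, and success criterion below are invented for the sketch and are not Cambri's actual model inputs), a training row for such a model might pair pre-launch survey signals with a post-launch outcome label:

```python
# Illustrative sketch only: assembling training rows that pair pre-launch
# survey signals with post-launch outcome labels. All names and thresholds
# here are hypothetical.

def build_training_row(survey, in_market):
    features = {
        "purchase_intent": survey["purchase_intent"],  # pre-launch signal
        "uniqueness": survey["uniqueness"],            # pre-launch signal
    }
    # Label the launch a success if first-year sales met the target.
    label = 1 if in_market["year1_sales"] >= in_market["target"] else 0
    return features, label

survey = {"purchase_intent": 0.62, "uniqueness": 0.48}
in_market = {"year1_sales": 1_150_000, "target": 1_000_000}
print(build_training_row(survey, in_market))
```

A classifier trained on rows like these can then score new, not-yet-launched concepts from their survey signals alone.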

  7. How do you identify and address bias in the input data sets and the algorithm design and what steps are taken to maximise representative and unbiased insights?
    • Our Launch AI model acknowledges differences in the survey-answering behavior of respondents in different regions and categories.
    • We track metrics across subgroups of surveys, to ensure that our solution is accurate enough in all of them.
    • When we use Generative AI, we contextualize the output by feeding the system relevant contextualized input.
    • We have metrics that we use to verify the accuracy and generalizability of the machine learning and GenAI models that we use.
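Subgroup-level accuracy tracking of the kind described above can be sketched in a few lines (the region/category names below are hypothetical):

```python
# Sketch of per-subgroup accuracy tracking, used to check that a model
# is accurate enough across regions and categories, not just on average.
from collections import defaultdict

def accuracy_by_subgroup(records):
    """records: iterable of (subgroup, predicted, actual) tuples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, pred, actual in records:
        totals[group] += 1
        hits[group] += int(pred == actual)
    return {g: hits[g] / totals[g] for g in totals}

records = [
    ("EU/Food", 1, 1), ("EU/Food", 0, 0), ("EU/Food", 1, 0),
    ("NA/Beverages", 1, 1), ("NA/Beverages", 0, 0),
]
print(accuracy_by_subgroup(records))
```

Breaking the metric out per subgroup is what surfaces a model that looks fine overall but underperforms for one region or category.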

  8. What are the processes to verify and validate the output for accuracy and are they documented? How do you measure and assess validity? Is there a process to identify and handle cases where the system yields unreliable results?
    • Every AI solution that we release goes through a validation process. Our aim is to provide insights and predictions that can be trusted and have a positive impact on product innovation success.
    • The measures and metrics we apply depend on the AI model. For example, when evaluating the performance of our NLP we focus on three core metrics: precision, recall, and F1 score. When validating the performance of our Launch AI model we track accuracy and generalizability. As to GenAI summaries, we validate their accuracy through human verification.
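The three NLP metrics named above (precision, recall, F1) are standard and can be computed from scratch; the toy labels below are illustrative only:

```python
# Precision, recall, and F1 score for a binary classification task,
# computed from scratch on toy labels (1 = positive, 0 = negative).

def precision_recall_f1(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0  # of flagged, how many correct
    recall = tp / (tp + fn) if tp + fn else 0.0     # of actual, how many found
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)           # harmonic mean of the two
    return precision, recall, f1

y_true = [1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 0]
print(precision_recall_f1(y_true, y_pred))
```

F1 balances the two error types, which matters for NLP labelling tasks where classes are often imbalanced.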

  9. What are the limitations of your AI models and how do you mitigate them?
    • Our AI models are designed to serve our consumer brands in the context of product and brand innovation.
    • Most of our AI features, such as content analysis, sentiment analysis, and AI summaries, are available to our users across all tests on the Cambri platform.
    • The one exception is our Launch AI Score. Its scope (predicting the launch success likelihood of tested concepts) depends on the scope of its training data: the more regions and categories covered, the wider its applicability.
    • Currently the Launch AI Score is available for Food and Beverages in most of Europe and North America.
    • We will open new regions and categories as we get more training data from our clients and new clients.
    • When we start a new customer partnership, we begin by feeding the model that company's historical product launches, both to train the model and to play the results back to them so they can evaluate its accuracy.

  10. How have you ensured that your service has been designed with duty of care in mind?
    • The team working on the Launch AI feature set has roots in both practice and academia, so our standards are very high.
    • The examples described earlier illustrate how seriously we take delivering valid and reliable insights and predictions that help our clients increase their innovation success.


How do you provide Human-in-the-Loop Oversight of AI systems?


  11. Transparency, communication clarity and ethical use: How do you ensure that outputs are clearly and consistently identifiable when generated by AI?
    • AI-generated outputs are clearly identifiable in the Cambri platform: users see and know which outputs are generated by AI models, and which are generated by more traditional algorithms, such as an HB (hierarchical Bayes) model.

  12. How do you ensure that human-defined ethical principles are the governing force behind AI-driven solutions to provide a degree of trust, equity and reality? How do you combine human understanding with AI transparency to enable joint collaboration?
    • We co-create our solutions together with our users and clients, from defining the business aim to implementing the AI solution within our clients' processes.
    • We share AI-model metrics and scores with our clients, such as Launch AI model accuracy.
    • Overall, we support transparency because it increases understanding and builds mutual trust.

  13. How do you integrate human expertise with AI to recognise biases and inaccuracies, while enhancing transparency?
    • A good AI solution (one that serves the business purpose and provides reliable and valid outcomes) is the result of teamwork among our human experts. To generate the desired outcomes, we follow a rigorous process:
    • We first define a business problem to be solved and validate it with our clients.
    • We continue by designing a theoretical framework for a hypothetical AI solution.
    • We experiment with different modelling paths, evaluating feasibility and performance (such as model accuracy). Alongside this technical validation, we test and iterate with our clients and platform users on how they will use the AI solution and get the desired benefit from it.
    • We build the AI solution and deploy our AI models into production.
    • Finally, we educate our clients and users so that they learn how to benefit from insights and predictions in their product design and decision-making.

  14. Responsible Innovation: How does your AI solution integrate human oversight to ensure ethical compliance?
    • Our AI innovation process as described above has high standards in terms of business impact, validity and reliability.
    • Our AI models don’t use any respondent-level data that is subject to data privacy protection.


What are the Data Governance protocols?


  15. Data quality: How do you assess if the data used for AI models is accurate, complete, and relevant to the research objectives?
    • When planning our AI models, we define a theoretical framework that determines which datapoints we need to use in order to address a specific business problem and research question. For example, our proprietary NLP solution is built on a taxonomy of how consumers make decisions and create value for themselves with the help of products and brands, based on learnings from academia.
    • Our models use both survey data and in-market performance data. As to survey data, we have our internal data quality audit measures, in addition to those of our global panel provider partners. As to in-market performance data, expected data outputs are clearly defined.
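As one example of the kind of survey data quality audit measure mentioned above, a common industry check is flagging "straight-liners", respondents who give an identical answer to every item in a grid question. The sketch below is a generic illustration of such a check, not Cambri's documented procedure:

```python
# Generic survey quality check, sketched: flagging "straight-liners"
# (respondents who give the same answer to every item in a grid question).
# The minimum grid size is a hypothetical parameter.

def is_straight_liner(grid_answers, min_items=4):
    """Flag a respondent who gave an identical answer to every item
    in a grid of at least `min_items` questions."""
    return len(grid_answers) >= min_items and len(set(grid_answers)) == 1

print(is_straight_liner([3, 3, 3, 3, 3]))  # True: identical answers
print(is_straight_liner([3, 4, 2, 5, 3]))  # False: varied answers
```

Checks like this are typically combined with speeding and open-end gibberish checks before responses enter any training or reporting dataset.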

  16. Data lineage: Do you track the origin and processing of data throughout its lifecycle, from collection to analysis and reporting and are these sources made available?
    • Yes, we can track the data used all the way to the smallest unit of data source, such as respondent level data.

  17. Please provide the link to your privacy notice (sometimes referred to as a privacy policy) as well as a summary of the key concepts it addresses. If your company uses different privacy notices for different products or services, please provide an example relevant to the products or services covered in your response to this question.

  18. Which data protection laws apply and what are the steps that you take to comply with them and implement measures to protect the privacy of respondents? Have you evaluated any risks to the individual as required by privacy legislation and ensured you have obtained consents where necessary?
    • We at Cambri follow the EU GDPR.
    • Our AI models don’t use any respondent-level data that is subject to data privacy protection.

  19. What steps are taken to ensure AI models are resilient to adversarial attacks, noise, and other potential disruptions? Information security frameworks and standards include, but are not limited to COBIT, HITRUST, ISO 27001, the NIST Cybersecurity Framework and SOC 2.
    • All of our models rely on sanitized inputs and run in a separate protected environment controlled by us.

  20. Data ownership: Do you clearly define and communicate the ownership of data, including intellectual property rights and usage permissions?
    • Our international panel provider partners (all ESOMAR members) have their own standard protocols in use towards consumer panelists.
    • The ownership of data, intellectual property rights and usage permissions are covered in our client service agreements.

  21. Do you restrict what can be done with the data?
    • We don’t restrict how our clients and users use the outputs of our AI models. Our clients and users have access to the outputs of our AI solutions that they can further apply in the product development processes.
    • Our approach poses no business (legal) risk to our clients because the input datapoints don’t violate any regulation, e.g. related to the GDPR or intellectual property rights.

  22. Are you clear about who owns the output and inputs?
    • The ownership of data, intellectual property rights and usage permissions are covered in our client service agreements.


Get in touch to learn how you could benefit from AI and iterative testing