Saturday, November 28, 2015

Store Intercepts - Shopper Satisfaction Tracking Study

In-Store Shopper Satisfaction Tracking Study

An international chain of restaurants and retail stores wanted to:

  1. identify customer reactions to new concepts and services; and
  2. set up a system for tracking performance on key service metrics.

Gold Research devised an ongoing research effort that:

  • Leverages iGoldSMART™, our proprietary mobile survey solution, to intercept customers and gather their reactions as they experience new concepts and services.
  • Allows the client to access customer feedback in real time, i.e., as it is being provided.
  • Successfully gathers real-time data from all stores, 60% of which are in areas with no internet access.
  • Enables the client to view customer reactions and preferences across 100 locations in the USA and Canada.

Results from this effort are being leveraged by the client to:

  • improve customer retention by focusing on the "key drivers" identified through this research;
  • expand in markets where tested concepts are a "hit"; and
  • track and report on key metrics in real time.

Interested in learning more? Call us at 1-800-549-7170 or send us an email for a free 30-minute consultation on this topic.
Gold Research Inc. has extensive experience in deploying customer intercepts, in-store interviews, and mobile surveys for concept testing, market testing, satisfaction research, shopper insights, mystery shopping, and journey mapping or path-to-purchase research. We can also act as an extension of your research team in helping with data processing, analysis, report development, and survey programming.

Gold Research Inc. Saves Retail Chain $26,000 and 3 Months per Year on Store Intercepts and On-Site Surveys

The Challenge

A national retail chain with 1,000 stores nationwide conducts annual paper-based store intercepts at 100 of its locations to keep a pulse on the shopper experience. Prior to working with us, the chain spent 3 months conducting store intercepts, then 1 month compiling, cleaning, and analyzing the data, so results were not available until 4 months after fielding began. The client was also concerned that data was being "massaged" during cleaning.

The Solution

We implemented our real-time store intercept and on-site survey solution, placing a trained surveyor with a survey iPad in each store. Our surveyors intercepted customers after they had completed their shopping trips and administered the on-site surveys. Data collected from each location was available to the client immediately, in real time.

The Results

  • Within just 3 weeks we completed fielding for all 100 stores and submitted a detailed analysis report to the client before the end of the month, cutting the total project time from 4 months to 1, a 75% (3-month) reduction.
  • We also solved the data-quality concern by making the store intercept data available to the client in real time (as it was being entered), so the client could immediately download the raw on-site survey data files and begin preliminary analysis.
  • As an added bonus, our store intercept costs came in $26,000 lower than what the client had been paying.
  • Above all, the client was thrilled with our on-site survey reporting, which was highly visual and engaging and enabled them to disseminate insights to internal stakeholders quickly.


Wednesday, November 25, 2015

Common Pitfalls In Store Intercept Concept Testing

Concept testing is often a key step in new product development. If not done right, it can produce useless or (even worse) misleading results. Here are the most common pitfalls associated with this type of research and how to avoid them.

[1] Inadequate concept specification.

Problem:  Researchers often test ideas that are not fully specified. When this happens, respondents have trouble answering purchase-intent or other overall-interest questions. Alternatively, respondents may interpret what's meant to be the same concept in many different ways, so their answers aren't comparable.
Solution: Think like a respondent – what would they need to know about the concept in order to make a simulated purchase decision? If you (or your client) can’t provide this information, then the concept may not be ready for testing.

[2] Missing or bad pricing.

Problem:  A special case of inadequate specification is missing or bad pricing. Asking purchase intent when price is unknown isn’t really meaningful, while asking purchase intent when price is unrealistically high or low yields distorted results.
Solution: Conduct some form of pricing research if you (or your client) can’t provide reasonable prices. Alternatively, use the concept test for other purposes (prioritizing a list of options, assessing characteristics or perceptions, etc.).

[3] Surveying the wrong population.

Problem:  Most new products or services are intended for a certain target population. If your sample is too broadly defined, your results will be distorted by the inclusion of opinions from people who you’re not interested in. If your sample is too narrowly defined, your results will be distorted by the exclusion of opinions from people who you are (or should be) interested in.
Solution: Take the time up front to define your target audience in terms of demographics, category behavior, or other characteristics. This definition can then be implemented through sample specifications and/or screening questions. Quotas or weighting may also be needed.
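For illustration, if quotas fall short, post-stratification weighting can rebalance the sample after fielding: each segment's weight is its population share divided by its sample share. A minimal Python sketch (the age segments and shares are hypothetical):

```python
# Post-stratification weighting: weight = population share / sample share.
# Segment names and proportions below are hypothetical illustrations.
population_share = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}
sample_share     = {"18-34": 0.20, "35-54": 0.50, "55+": 0.30}

weights = {seg: population_share[seg] / sample_share[seg]
           for seg in population_share}

for seg, w in weights.items():
    print(f"{seg}: weight = {w:.2f}")
# Under-represented segments get weights > 1, over-represented segments < 1.
```

Each respondent then counts as their segment's weight in the analysis, so the weighted sample mirrors the target population.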

[4] Ignoring key drivers.

Problem:  The main objective of most concept tests is to assess purchase intent. To save money and minimize respondent burden, many researchers stop there. However, this information is of limited value if you don’t know why intent is high or low. In fact, understanding a concept’s strengths and weaknesses is sometimes more useful than estimating overall interest.
Solution: Include questions about concept characteristics. These may be broad or specific, and may be functional or emotional. Then use an appropriate analytical technique to determine the relative importance (impact on overall interest) of these characteristics.
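As one illustration of deriving importance, each attribute's ratings can be correlated with overall purchase intent; a stronger correlation suggests a stronger driver. A minimal Python sketch with hypothetical ratings (real studies often use regression or relative-weights analysis instead of simple correlations):

```python
# Derived-importance sketch: correlate each attribute rating with overall
# purchase intent. All ratings below are hypothetical 1-5 scale responses.
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length rating lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

intent = [5, 4, 2, 5, 3, 1, 4, 2]             # overall purchase intent
attributes = {                                # attribute ratings
    "ease of use":  [5, 4, 2, 4, 3, 2, 4, 1],
    "price/value":  [3, 3, 4, 2, 3, 3, 2, 4],
    "brand appeal": [4, 5, 1, 5, 2, 1, 5, 2],
}

importance = {name: pearson(ratings, intent)
              for name, ratings in attributes.items()}

for name, r in sorted(importance.items(), key=lambda kv: -kv[1]):
    print(f"{name}: r = {r:+.2f}")
```

Attributes with high derived importance but weak ratings are the concept's priority fix list.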

[5] Wrong methodology.

Problem:  Many researchers select their data collection method for concept tests based solely on cost and timing considerations, which usually favor the Internet. However, this may not be the right choice. For example, graphics may not show correctly or effectively, especially on mobile devices. In addition, some target audiences (very young, very old, low income or limited education, etc.) may be hard to reach online or uncomfortable with online surveys.
Solution: Try to choose the methodology which yields the most valid and reliable results, even if it isn’t the fastest or cheapest. Saving time and/or money doesn’t do you any good if it comes at the expense of data quality. Consider using mixed modes, or having paper backups if collecting data in person.

[6] Inadequate sample sizes.

Problem:  Cost considerations force many researchers to go with the smallest sample size that will provide a reasonable level of precision at the aggregate level. However, small samples often won’t support potentially important breakouts by key respondent characteristics. You may also want to limit some questions to certain subsamples, such as people with a minimum level of interest in the concept, and here again small base sizes can be problematic.
Solution: Focus on the smallest subsample for which you want to be able to make statistically meaningful inferences, and let that drive your total sample size. You can sometimes justify the cost of a larger sample by adding questions which address other related business issues, or alternatively by including your concept test as part of another survey.
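The standard margin-of-error formula makes this concrete: for a proportion, n = z^2 * p * (1 - p) / e^2. A minimal Python sketch (the 20% subsample share is a hypothetical example):

```python
# Sample-size sketch for estimating a proportion.
# z = 1.96 for 95% confidence; p = 0.5 is the most conservative assumption.
from math import ceil

def sample_size(margin_of_error, z=1.96, p=0.5):
    """Minimum n for estimating a proportion within +/- margin_of_error."""
    return ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

# Aggregate precision of +/-5 percentage points:
n_overall = sample_size(0.05)

# If the smallest key subsample (say, 20% of respondents - a hypothetical
# share) also needs +/-5 points, it needs that many completes on its own,
# so the total sample must be scaled up accordingly:
n_total = ceil(sample_size(0.05) / 0.20)

print(n_overall, n_total)
```

Here the subsample requirement, not the aggregate one, drives the budget: roughly five times the sample an aggregate-only analysis would need.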


www.goldresearchinc.com


Using Store Intercepts Or On-Site Surveys To Measure Customer Loyalty

Four Errors Even the Most Experienced Researchers Make and How to Avoid Them

Most businesses want to know how satisfied their customers are, and what can be done to make them even more satisfied. However, if not done right, store intercepts or on-site surveys used to measure customer satisfaction and loyalty can produce useless or (even worse) misleading results. Here are four errors even the most experienced researchers sometimes make and how to avoid them.

[1] No frame of reference.

Problem:  Too often store intercepts or on-site surveys don’t go beyond overall and detailed (attribute) ratings of the client’s business. This means that researchers have little context for interpreting the results and can’t answer key questions.
Solution: Design your store intercept or on-site survey to provide a frame of reference. Consider adding questions about your performance compared to expectations and/or competitors. Conducting a companion survey of employees (especially sales) with similar questions can also be useful.

[2] No tracking.

Problem:  Many businesses only perform a single intercept or on-site survey study to measure customer satisfaction. There can be logistical, financial or other barriers to repeating this research. In addition, businesses may feel that they’ve achieved their research objectives. However, it’s often valuable to know whether satisfaction is getting better or worse and it’s hard to learn this from a single study. Also, things going on in the marketplace can distort single-study results.
Solution: Track customer satisfaction over time by deploying store intercepts or on-site surveys regularly. The frequency with which you repeat this research depends upon the nature of your customer base (size, churn, purchase frequency, etc.). It may not be necessary to ask every question every time. For example, you could field a full store intercept or on-site survey once a year and a shortened quarterly version, possibly with a smaller sample size, in between.

[3] Poor sampling.

Problem:  Many researchers pay inadequate attention to sampling when measuring customer satisfaction via store intercepts or on-site surveys. Who you ask is at least as important as what you ask, and ending up with an unrepresentative or otherwise inadequate sample can reduce the validity and reliability of your results.
Solution: Planning and attention to detail are vital to good sampling in using store intercepts or on-site surveys for customer satisfaction research. We have several recommendations:
  • Do not use convenience samples unless absolutely necessary;
  • Try to have respondents from all relevant major customer segments (geography, industry, product/service type, tenure, etc.);
  • Make sure you obtain enough responses – both overall and within each segment – to support desired precision and breakouts;
  • Stay in the field long enough, and check your data while in the field, so you don't hear from only the most satisfied customers, only the least satisfied, or only the two extremes (not the middle);
  • In B2B research, pay attention to both the number of accounts selected and the number (and characteristics) of individuals selected within each account; and
  • Think about whether you also want input, at least on some questions, from former and/or prospective customers.

[4] Flawed attribute lists.

Problem: Many store intercepts or on-site surveys ask customers to rate companies, brands, or products on a list of attributes. The problem is that these attribute lists are often too long, incomplete, and/or imbalanced. This can reduce overall data quality (respondent fatigue leading to item non-response, lack of variation, etc.). It can also yield misleading results – for example, if your list is incomplete or skewed, what the data suggests is most important may not be what’s truly most important to your customers.
Solution: Take the time to carefully construct your attribute list. Keep it as short as possible (minimize overlap/redundancy), include a mix of functional and emotional attributes (seeking input from multiple sources), use your customers’ vocabulary (e.g., from open-ended responses, interviews, or focus groups), and be consistent in wording and scaling if making comparisons with other/previous research.


Store Intercept And On-Site Survey Research Mantra

Don’t Measure Performance In A Vacuum

Retail, Consumer Packaged Goods (CPG), Restaurant and other organizations of all types often conduct store intercepts or on-site surveys in which they ask their customers, their employees or the general public how well they’re doing. We believe that the value of such store intercepts or on-site surveys is limited unless they also address at least one of the following three topics:

[1] Importance.

While deploying store intercepts or on-site surveys, always remember that knowing how you’re performing on an attribute (quality, service, etc.) is more useful if you know how important that attribute is. If your performance is highest on the least important attribute(s), then priorities – and possibly resources – may need to be shifted.
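One common way to act on importance and performance together is a quadrant (importance-performance) analysis: split attributes at the median importance and median performance, and prioritize those that are important but under-performing. A minimal Python sketch with hypothetical scores:

```python
# Importance-performance sketch: classify attributes into action quadrants
# by splitting at the median importance and median performance scores.
# All scores below are hypothetical mean ratings on a 1-5 scale.
from statistics import median

scores = {  # attribute: (importance, performance)
    "food quality":      (4.8, 4.5),
    "speed":             (4.2, 3.0),
    "store cleanliness": (3.5, 4.6),
    "decor":             (2.8, 4.4),
}

imp_cut = median(v[0] for v in scores.values())
perf_cut = median(v[1] for v in scores.values())

actions = {}
for attr, (imp, perf) in scores.items():
    if imp >= imp_cut and perf < perf_cut:
        actions[attr] = "fix first"          # important, under-performing
    elif imp >= imp_cut:
        actions[attr] = "keep it up"
    elif perf >= perf_cut:
        actions[attr] = "possible overkill"  # strong on low-importance item
    else:
        actions[attr] = "low priority"

for attr, action in actions.items():
    print(f"{attr}: {action}")
```

In this sketch "speed" lands in the fix-first quadrant: resources spent polishing low-importance attributes might be better shifted there.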

[2] Expectations.

Another way of providing context for performance data when leveraging store intercepts or on-site surveys is to find out how well your audience expects you to perform. “Good” performance for a budget brand may actually reflect greater relative customer satisfaction than “very good” performance for a luxury brand.

[3] Comparisons.

An alternative to measuring expectations in store intercepts or on-site surveys is to ask how competitors perform on the same attributes. Something which appears to be a strength based on your own performance may in fact be a relative weakness if the competition is doing even better.

There are many ways to capture actionable information on these topics, and the best approach depends upon the circumstances. At Gold Research, we understand that store intercepts and on-site surveys are easy to do, but hard to do well.
