The Strategic and Tactical Roles of Customer Surveys

Customer experience (CX) improvement efforts rely heavily on the use of customer feedback. While there are many different methods of collecting this feedback, customer surveys remain a popular choice among CX professionals. In this post, I will discuss how senior executives can use relationship and transactional surveys to get the information they need to make better strategic and tactical decisions.

Relationship Surveys

Relationship surveys allow customers to indicate their satisfaction with their overall relationship with the company/brand. Relationship surveys are typically administered at regularly scheduled times (e.g., annually) and ask customers to indicate their loyalty toward and satisfaction with the company across several business areas (e.g., product, service) over a non-trivial time period (6-12 months).

Relationship survey questions fall into four categories:

  1. Customer Loyalty – survey questions reflect three general types of customer loyalty: advocacy, purchasing and retention
  2. Customer satisfaction with the customer experience – survey questions reflect broad touch points across the customer journey (e.g., product, technical support, communication)
  3. Relative Performance – survey questions reflect your ranking against the competition
  4. Open-ended – survey questions uncover reasons behind the ratings

Relationship Surveys Tell You Where to Improve

Relationship-level surveys focus on understanding the different CX touch points that drive customer loyalty. By design, the relationship survey assesses multiple CX touch points, and the resulting data allows executives to prioritize which touch points contribute most to customer loyalty. CX areas in which customers are dissatisfied and that are highly predictive of customer loyalty are referred to as key drivers. Improving customer satisfaction in these key drivers will lead to increases in customer loyalty.
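The key-driver logic above can be sketched in a few lines of code: correlate satisfaction ratings for each touch point with a loyalty measure, and flag touch points that are both predictive of loyalty and low on satisfaction. Everything below (the data, the touch-point names, the coefficients) is fabricated purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical respondents

# Simulated 7-point satisfaction ratings for three touch points.
product = rng.integers(1, 8, n).astype(float)
support = rng.integers(1, 8, n).astype(float)
comms = rng.integers(1, 8, n).astype(float)

# In this toy data, loyalty is driven mostly by support quality.
loyalty = 0.2 * product + 0.7 * support + 0.1 * comms + rng.normal(0, 1, n)

touch_points = {"product": product, "tech support": support, "communication": comms}

# Rank touch points by how strongly they correlate with loyalty,
# alongside how satisfied customers currently are with each.
for name, ratings in touch_points.items():
    r = np.corrcoef(ratings, loyalty)[0, 1]
    print(f"{name:14s} impact r={r:.2f}  mean satisfaction={ratings.mean():.1f}")
```

In practice you would substitute real survey responses; touch points with a high loyalty correlation and low mean satisfaction are your key-driver candidates.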

The customer relationship survey helps executives identify where in their business improvements are needed. The relationship survey, however, is less useful in helping executives understand what they need to do to make those improvements occur (e.g., I know I need to improve tech support quality, but what do I do to improve it?). As a start, you can cull insights from dissatisfied customers’ open-ended comments about a given touch point. This method provides a good starting point for generating a list of reasons why customers are unhappy; however, we still need to prioritize those reasons. We do this using the transactional survey.

Transactional Surveys

Transactional surveys let customers indicate their satisfaction with a specific event/transaction/interaction with the company, typically revolving around a specific customer touch point (e.g., sales process, product quality, support quality, communication). Unlike relationship surveys, transactional surveys are administered immediately or soon after the customer had a specific interaction with the company (e.g., support, sales, product).

While a business uses one customer relationship survey, that same business could employ multiple transactional surveys, each addressing a specific interaction or touch point. Transactional survey questions will generally fall into four categories:

  1. Overall Satisfaction with the Event/Transaction/Interaction – One survey question reflects customers’ overall evaluation of their experience
  2. Customer satisfaction with the customer experience – survey questions reflect specific aspects of that particular experience. These questions can be guided by the review of open-ended comments from the customer relationship survey as well as the results of your journey-mapping exercise.
  3. Relative Performance – survey questions reflect your performance ranking against the competition
  4. Open-ended – survey questions uncover reasons behind the ratings

Transactional Surveys Tell You How to Improve

The results of the relationship survey (e.g., identifying where to make improvements) will dictate which transactional survey you need to conduct. I recommend conducting transactional surveys on processes/touch points that were identified as key drivers in your relationship survey results (e.g., CX areas that didn’t score high on customer satisfaction and are important to driving loyalty).

Unlike the relationship survey, where our focus was on understanding the comprehensive customer experience over time, transactional surveys focus on a specific interaction/touch point. The aim of a transactional survey is a deeper understanding of which aspects of the experience left customers dissatisfied. If the relationship survey identified “technical support” as a key driver, a transactional survey on technical support would help identify the specific ways you can improve technical support quality and, in turn, satisfaction with technical support.

The transactional survey helps executives identify how to make improvements happen.


Figure 1.  Using Relationship and Transactional Surveys in your CX Improvement Efforts

When you think about customer relationship and transactional surveys, it’s best to think of them as complementary efforts in your quest to improve how you do business. Your relationship survey helps you understand where you need to make improvements (e.g., product, service, marketing) while transactional surveys help you identify what needs to be done to improve those experiences. In other words, relationship surveys provide information to help with CX strategic decisions (e.g., what areas of the business you need to improve); transactional surveys provide information to help with tactical decisions (e.g., how are you going to make CX improvements happen). Figure 1 summarizes how relationship surveys and transactional surveys fit into CX strategic and tactical decision-making.

Conduct the relationship survey first and let those results guide your decision about which transactional survey you need to undertake. Using both survey methods provides necessary insights to help you make better strategic and tactical decisions to enhance the customer experience, improve customer loyalty and drive business success.

A version of this article first appeared on CustomerThink.


Investing in Big Data by Bill Pieroni

The average adult makes 70 conscious decisions a day, or more than 25,000 a year. Many of those decisions are inconsequential to organizations, but a few of them can create substantial opportunities or problems. While organizations cannot prevent bad decisions from being made, firms can minimize the risk by investing in data and analytics capabilities. Data and analytics is not a new discipline; it has taken shape over the last century with the aid of two key macroeconomic trends. The first was the migration of the workforce from labor-intensive to knowledge-intensive roles and industries. The second was the introduction of decision-support systems into organizations in the 1960s. As a growing number of knowledge workers began to interact with more powerful technologies and the accompanying data stores, analytics took on a more critical role within organizational decision-making and execution.

However, firms initially had difficulty incorporating data and analytics into their operations. They gathered a limited number of variables and stored them in multiple data stores with different formats and structures. Additionally, filtering the data to separate what is relevant and impactful (the signal) from the noise became difficult as the amount of data grew exponentially. According to a study conducted by IDC, an IT consultancy, the amount of data available globally grew 27-fold from 2005 to 2012, to approximately 2.8 trillion gigabytes. The study also noted that roughly 25% of this data is useful, but only 3% of it has been tagged for use and only 0.5% of it is currently analyzed.
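As a quick sanity check on the IDC figure, a 27-fold increase over the seven years from 2005 to 2012 corresponds to a compound annual growth rate of roughly 60%:

```python
# A 27-fold increase over 7 years (2005 to 2012) implies a compound
# annual growth rate of roughly 60% per year.
growth_factor = 27
years = 2012 - 2005
cagr = growth_factor ** (1 / years) - 1
print(f"Implied annual growth rate: {cagr:.0%}")  # roughly 60%
```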

Contributed to The Big Analytics: Leader’s Collaborative Book Project. Download your free copy at TheBigAnalytics.

Source by thebiganalytics

Tips to Help Get the Most Value from Data Analytics and Database Support

Given the huge amount of data being generated today from multiple sources, businesses must learn how to properly analyze this data, or it will be useless. However, extracting value from a variety of data types requires adopting diverse sets of data analytics techniques. Good analytics can help improve the performance of a business. But how exactly do you get started with big data analytics? This post looks at some of the best practices that will help you achieve success with big data analytics.

Have a business problem in mind

Exploring enormous amounts of data using advanced analytics tools can be fun, but it can also be a waste of time and resources for your team if the end results don’t translate into something that helps your company solve a problem. This is why, before you get started with big data analytics, the first thing you should do is identify the problem your business needs to solve.

You have to know the problems that big data analytics can help you solve. This means that even before you start considering data analytics, you must make sure you acquire the right data. For example, the most important source of data for most businesses is consumer transactions, which give you structured data. Speeches and videos will give you unstructured data, which might not even be relevant to your organization.

The rule of thumb here is: before you start with data analytics, find out what kind of business challenge or problem you can address with the data you have. You also need to make sure that the data you end up analyzing is not only accurate but also current and able to offer real insight. Reliable database support helps ensure you are working with quality data at all times.

Focus on deployment

To achieve real value, you have to operationalize the results of big data analytics. The last thing you want is to abandon a project midway; the cost will be immense. The right selection of data is essential: some data may not be available, while other sources may be too expensive to use. There are also industry regulations to keep up with in data gathering. The analytics development team has to consider how the models they choose will be published and used by the customer service, marketing, operations, and product development teams. A streamlined analytical method will save you time and money and make analytics easier.

Leverage innovation in analytics

Keeping up with the trends in big data analytics is a must. It is important that you invest in the right analytics tools and make sure that you are up-to-date with the data processing techniques being used by other analysts. The right tools and infrastructure will make your work easier and the analytics results more valuable.

There is much more that you need to do, including leveraging cloud services, balancing automation with expertise, and embracing analytic diversity. When it comes to database management and data analytics, you have to keep learning.

Author Bio:

Sujain Thomas is a data IT professional who works closely with DBA experts to provide her clients with fantastic solutions to their data problems. If you need data IT solutions, she is the person for the job.

Source: Tips to Help Get the Most Value from Data Analytics and Database Support

10 Things to Know about the Technology Acceptance Model

A usable product is a better product.

But even the most usable product isn’t adequate if it doesn’t do what it needs to.

Products, software, websites, and apps need to be both usable and useful for people to “accept” them, both in their personal and professional lives.

That’s the idea behind the influential Technology Acceptance Model (TAM). Here are 10 things to know about the TAM.

1. If you build it, will they come? Fred Davis developed the first incarnation of the Technology Acceptance Model over three decades ago at around the time of the SUS. It was originally part of an MIT dissertation in 1985. The A for “Acceptance” is indicative of why it was developed. Companies wanted to know whether all the investment in new computing technology would be worth it. (This was before the Internet as we know it and before Windows 3.1.) Usage would be a necessary ingredient to assess productivity. Having a reliable and valid measure that could explain and predict usage would be valuable for both software vendors and IT managers.

2. Perceived usefulness and perceived ease of use drive usage. What are the major factors that lead to adoption and usage? There are many variables but two of the biggest factors that emerged from earlier studies were the perception that the technology does something useful (perceived usefulness; U) and that it’s easy to use (perceived ease of use; E). Davis then started with these two constructs as part of the TAM.

Figure 1: Technology Acceptance Model (TAM) from Davis, 1989.

3. Psychometric validation from two studies. To generate items for the TAM, Davis followed the Classical Test Theory (CTT) process of questionnaire construction (similar to our SUPR-Q). He reviewed the literature on technology adoption (from 37 papers) and generated 14 candidate items each for usefulness and ease of use. He tested them in two studies. The first study was a survey of 120 IBM participants on their usage of an email program, which revealed six items for each factor and ruled out negatively worded items that reduced reliability (similar to our findings). The second was a lab-based study with 40 grad students using two IBM graphics programs. This provided 12 items (six for usefulness and six for ease).

       Usefulness Items

1. Using [this product] in my job would enable me to accomplish tasks more quickly.
2. Using [this product] would improve my job performance.*
3. Using [this product] in my job would increase my productivity.*
4. Using [this product] would enhance my effectiveness on the job.*
5. Using [this product] would make it easier to do my job.
6. I would find [this product] useful in my job.*

       Ease of Use Items

7. Learning to operate [this product] would be easy for me.
8. I would find it easy to get [this product] to do what I want it to do.*
9. My interaction with [this product] would be clear and understandable.*
10. I would find [this product] to be flexible to interact with.
11. It would be easy for me to become skillful at using [this product].
12. I would find [this product] easy to use.*

* indicates items that are used in later TAM extensions
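The item-winnowing process Davis followed rests on classical reliability analysis. A common CTT statistic for this is Cronbach’s alpha, sketched below on fabricated 7-point ratings (the sample size mirrors the 120-person IBM study; everything else is made up for illustration):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of ratings."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
n = 120  # same size as Davis's first IBM sample
latent = rng.normal(0, 1, n)  # a respondent's true "usefulness" attitude

# Six hypothetical 7-point usefulness items loading on one construct.
items = np.clip(
    np.round(4 + 1.2 * latent[:, None] + rng.normal(0, 0.8, (n, 6))), 1, 7
)
print(f"alpha = {cronbach_alpha(items):.2f}")
```

Items that lower alpha when included (such as the negatively worded candidates Davis ruled out) are the ones an analysis like this would flag for removal.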

4. Response scales can be changed. The first study described by Davis used a 7-point Likert agree/disagree scale, similar to the PSSUQ. For the second study, the scale was changed to a 7-point likelihood scale (from extremely likely to extremely unlikely) with all scale points labeled.

Figure 2: Example of the TAM response scale from Davis, 1989.

Jim Lewis recently tested (in press) four scale variations with 512 IBM users of Notes (yes, TAM and IBM have a long and continued history!). He modified the TAM items to measure actual rather than anticipated experience (see Figure 3 below) and compared different scaling versions. He found no statistical differences in means between the four versions and all predicted likelihood to use equally. But he did find significantly more response errors when the “extremely agree” and “extremely likely” labels were placed on the left. Jim recommended the more familiar agreement scale (with extremely disagree on the left and extremely agree on the right) as shown in Figure 3.

Figure 3: Proposed response scale change by Lewis (in press).

5. It’s an evolving model and not a static questionnaire. The M is for “Model” because the idea is that multiple variables will affect technology adoption, and each is measured using different sets of questions. Academics love models because science relies heavily on them to both explain and predict complex outcomes, from the probability of rolling a 6 to gravity to human attitudes. In fact, there are multiple TAMs: the original TAM by Davis, a TAM 2 that includes more constructs put forth by Venkatesh (2000) [pdf], and a TAM 3 (2008) that accounts for even more variables (e.g., subjective norm, job relevance, output quality, and results demonstrability). These extensions to the original TAM show the increasing desire to explain the adoption (or lack thereof) of technology and to define and measure the many external variables. One finding that has emerged across multiple TAM studies is that usefulness dominates, and ease of use functions largely through usefulness. Or as Davis said, “users are often willing to cope with some difficulty of use in a system that provides critically needed functionality.” This can be seen in the original TAM in Figure 1, where ease of use operates through usefulness in addition to usage attitudes.
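The claim that ease of use “operates through usefulness” is a mediation hypothesis: most of ease’s effect on usage intention is indirect, carried by usefulness. A minimal illustration with fabricated data: regress intention on both constructs and compare the total effect of ease with its direct effect once usefulness is held constant. The coefficients below are invented, not estimates from any TAM study.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500  # hypothetical respondents

ease = rng.normal(0, 1, n)
# Usefulness is partly a consequence of ease of use in this toy data.
useful = 0.6 * ease + rng.normal(0, 1, n)
# Intention is driven mostly by usefulness, only weakly by ease directly.
intention = 0.8 * useful + 0.1 * ease + rng.normal(0, 1, n)

def slope(x, y):
    """Simple-regression slope of y on x."""
    return np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)

# Total effect of ease on intention vs. its direct effect once usefulness
# is held constant (ordinary least squares with both predictors).
X = np.column_stack([np.ones(n), ease, useful])
coef, *_ = np.linalg.lstsq(X, intention, rcond=None)
total_effect = slope(ease, intention)
direct_effect = coef[1]
print(f"total effect of ease:  {total_effect:.2f}")
print(f"direct effect of ease: {direct_effect:.2f}")
```

The total effect is substantial while the direct effect is small, which is the signature of mediation the TAM diagram encodes.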

6. Items and scales have changed. In the development of the TAM, Davis winnowed the items from 14 to 6 for the ease and usefulness constructs. The TAM 2 and TAM 3 use only four items per construct (the ones with asterisks above and a new “mental effort” item). In fact, another paper by Davis et al. (1989) also used only four. There’s a need to reduce the number of items because as more variables get added, you have to add more items to measure these constructs and having an 80-item questionnaire gets impractical and painful. This again emphasizes the TAM as more of a model and less of a standardized questionnaire.

7. It predicts usage (predictive validity). The foundational paper (Davis, 1989) showed a correlation between the TAM and higher self-reported current usage (r = .56 for usefulness and r = .32 for ease of use), which is a form of concurrent validity. Participants were also asked to predict their future usage, and this prediction had a strong correlation with ease and usefulness in the two pilot studies (r = .85 for usefulness and r = .59 for ease). But these correlations were derived from the same participants at the same time (there was no longitudinal component), which has the effect of inflating the correlation. (People say they will use things more when they rate them higher.) Another study by Davis et al. (1989), however, did have a longitudinal component. It used 107 MBA students who were introduced to a word processor and answered four usefulness and four ease of use items; 14 weeks later the same students answered the TAM again along with self-reported usage questions. Davis reported a modest correlation between behavioral intention and actual self-reported usage (r = .35). The TAM variables explained 45% of the variance in behavioral intention, which established some level of predictive validity. Later studies by Venkatesh et al. (1999) also found a correlation of around r = .5 between behavioral intention and both actual usage and self-reported usage.
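For readers connecting the r values to the “45% of behavioral intention” figure: variance explained is the square of the correlation, so a multiple correlation of about .67 gives 45% (.67² ≈ .45). A Pearson correlation and its r² can be computed directly; the eight intention/usage pairs below are invented purely for illustration.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical intention ratings (1-7) and self-reported weekly usage.
intention = [2, 3, 3, 4, 5, 5, 6, 7]
usage = [1, 2, 4, 3, 5, 4, 8, 7]
r = pearson_r(intention, usage)
print(f"r = {r:.2f}, variance explained = {r * r:.0%}")
```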

8. It extends other models of behavioral prediction. The TAM was an extension of the popular Theory of Reasoned Action (TRA) by Ajzen and Fishbein but applied to the specific domain of computer usage. The TRA is a model that suggests that voluntary behavior is a function of what we think (beliefs), what we feel (attitudes), our intentions, and subjective norms (what others think is acceptable to do). The TAM posits that our beliefs about ease and usefulness affect our attitude toward using, which in turn affects our intention and actual use. You can see the similarity in the TRA model in Figure 4 below compared to TAM in Figure 1 above.

Figure 4: The Theory of Reasoned Action (TRA), proposed by Ajzen and Fishbein, of which the TAM is a specific application for technology use.

9. There are no benchmarks. Despite its wide usage, there are no published benchmarks available on TAM total scores nor for the usefulness and ease of use constructs. Without a benchmark it becomes difficult to know whether a product (or technology) is scoring at a sufficient threshold to know whether potential or current users find it useful (and will adopt it or continue to use it).

10. The UMUX-Lite is an adaptation of the TAM. We discussed the UMUX-Lite in an earlier article. It has only two items, whose wording is similar to that of the original TAM items: [This system’s] capabilities meet my requirements (which maps to the usefulness component) and [This system] is easy to use (which maps to the ease component). Our earlier research has found that even single items are often sufficient to measure a construct (like ease of use). We expect the UMUX-Lite to increase in usage in the UX industry and help generate benchmarks (which we’ll help with too!).
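As a sketch of how UMUX-Lite responses are typically scored, assuming the commonly published scheme (two 7-point agreement items rescaled to 0-100, with a regression adjustment reported by Lewis and colleagues to approximate SUS scores; treat the exact constants as assumptions, not a definitive specification):

```python
def umux_lite(capabilities: int, ease: int) -> float:
    """Raw UMUX-Lite score on a 0-100 scale from two 7-point agreement items."""
    assert 1 <= capabilities <= 7 and 1 <= ease <= 7
    return (capabilities + ease - 2) * (100 / 12)

def umux_lite_sus_adjusted(capabilities: int, ease: int) -> float:
    """Regression adjustment reported to bring UMUX-Lite closer to SUS scores."""
    return 0.65 * umux_lite(capabilities, ease) + 22.9

print(f"{umux_lite(6, 7):.1f}")  # prints 91.7
```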

Thanks to Jim Lewis for providing a draft of his paper and commenting on an earlier draft of this article.



Source: 10 Things to Know about the Technology Acceptance Model by analyticsweek

IBM Invests to Help Open-Source Big Data Software — and Itself

The IBM “endorsement effect” has often shaped the computer industry over the years. In 1981, when IBM entered the personal computer business, the company decisively pushed an upstart technology into the mainstream.

In 2000, the open-source operating system Linux was viewed askance in many corporations as an oddball creation and even legally risky to use, since the open-source ethos prefers sharing ideas rather than owning them. But IBM endorsed Linux and poured money and people into accelerating the adoption of the open-source operating system.

On Monday, IBM is to announce a broadly similar move in big data software. The company is placing a large investment — contributing software developers, technology and education programs — behind an open-source project for real-time data analysis, called Apache Spark.

The commitment, according to Robert Picciano, senior vice president for IBM’s data analytics business, will amount to “hundreds of millions of dollars” a year.

Photo courtesy of Pingdom via Flickr

In the big data software market, much of the attention and investment so far has been focused on Apache Hadoop and the companies distributing that open-source software, including Cloudera, Hortonworks and MapR. Hadoop, put simply, is the software that makes it possible to handle and analyze vast volumes of all kinds of data. The technology came out of pure Internet companies like Google and Yahoo, and is increasingly being used by mainstream companies that want to do similar big data analysis in their businesses.

But if Hadoop opens the door to probing vast volumes of data, Spark promises speed. Real-time processing is essential for many applications, from analyzing sensor data streaming from machines to sales transactions on online marketplaces. The Spark technology was developed at the Algorithms, Machines and People Lab at the University of California, Berkeley. A group from the Berkeley lab founded a company two years ago, Databricks, which offers Spark software as a cloud service.
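The batch-versus-real-time distinction the article draws can be illustrated without Spark itself. The plain-Python sketch below (not Spark code) contrasts a batch job that recomputes from all records with a streaming aggregate that updates incrementally as each record arrives, which is the style of processing Spark accelerates:

```python
def batch_mean(records):
    """Batch style: recompute the aggregate from every record."""
    return sum(records) / len(records)

class StreamingMean:
    """Streaming style: O(1) work per arriving record."""
    def __init__(self):
        self.count = 0
        self.total = 0.0
    def update(self, value):
        self.count += 1
        self.total += value
        return self.total / self.count

# Hypothetical sensor readings arriving one at a time.
sensor_readings = [21.0, 22.5, 19.8, 23.1]
stream = StreamingMean()
for reading in sensor_readings:
    latest = stream.update(reading)
print(latest == batch_mean(sensor_readings))  # prints True
```

Both approaches reach the same answer; the streaming version simply has it available the moment each record arrives rather than after a full batch pass.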

Spark, Mr. Picciano said, is crucial technology that will make it possible to “really deliver on the promise of big data.” That promise, he said, is to quickly gain insights from data to save time and costs, and to spot opportunities in fields like sales and new product development.

IBM said it will put more than 3,500 of its developers and researchers to work on Spark-related projects. It will contribute machine-learning technology to the open-source project, and embed Spark in IBM’s data analysis and commerce software. IBM will also offer Spark as a service on its programming platform for cloud software development, Bluemix. The company will open a Spark technology center in San Francisco to pursue Spark-based innovations.

And IBM plans to partner with academic and private education organizations including UC Berkeley’s AMPLab, DataCamp, Galvanize and Big Data University to teach Spark to as many as 1 million data engineers and data scientists.

Ion Stoica, the chief executive of Databricks, who is a Berkeley computer scientist on leave from the university, called the IBM move “a great validation for Spark.” He had talked to IBM people in recent months and knew they planned to back Spark, but, he added, “the magnitude is impressive.”

With its Spark initiative, analysts said, IBM wants to lend a hand to an open-source project, woo developers and strengthen its position in the fast-evolving market for big data software.

By aligning itself with a popular open-source project, IBM, they said, hopes to attract more software engineers to use its big data software tools, too. “It’s first and foremost a play for the minds — and hearts — of developers,” said Dan Vesset, an analyst at IDC.

IBM is investing in its own future as much as it is contributing to Spark. IBM needs a technology ecosystem, where it is a player and has influence, even if it does not immediately profit from it. IBM mainly makes its living selling applications, often tailored to individual companies, which address challenges in their business like marketing, customer service, supply-chain management and developing new products and services.

“IBM makes its money higher up, building solutions for customers,” said Mike Gualtieri, an analyst for Forrester Research. “That’s ultimately why this makes sense for IBM.”

To read the original article on The New York Times, click here.

Source by analyticsweekpick

The Difference Between Big Data and Smart Data in Healthcare

“Physicians are baffled by what feels like the ‘physician data paradox,’” Slavitt said earlier this spring.

“They are overloaded on data entry and yet rampantly under-informed. And physicians don’t understand why their computer at work doesn’t allow them to track what happens when they refer a patient to a specialist when their computer at home connects them everywhere.”

Spotty health information exchange and insufficient workflow integration are two of the major concerns when it comes to accessing the right data at the right time within the EHR.

A new survey from Quest Diagnostics and Inovalon found that 65 percent of providers do not have the ability to view and utilize all the patient data they need during an encounter, and only 36 percent are satisfied with the limited abilities they have to integrate big data from external sources into their daily routines.

On the surface, it appears that more data sharing should be the solution. If every organization across the care continuum allows all of its partners to view all of its data, shouldn’t providers feel better equipped to make informed decisions about the next steps for their patients?

Yes and no.  As the vast majority of providers have already learned to their cost, more data isn’t always better data – and big data isn’t always smart data.  Even when providers have access to health information exchange, the data that comes through the pipes isn’t always very organized, or may not be in a format they can easily use.

“We’re going through very profound business model changes in healthcare right now, and providers are targeting  processes that will help them with the transition from volume to value.”

Scanning through endless PDFs recounting ten-year-old blood tests and x-rays for long-healed fractures won’t necessarily help a primary care provider diagnose a patient’s stomach ailment or figure out why they are reacting negatively to a certain medication.

Actionable insights are the key to using big data analytics effectively, yet they are as rare and elusive as a patient who always takes all her medications on time and never misses a physical.

Source by analyticsweek