Oracle introduces big data-infused marketing services

Oracle has unveiled new services around customer service and prospect marketing.

Oracle this week is launching three new services, applying cloud technology to help organizations use big data to market their products and services more effectively.

One new service provides a wealth of information that can be used to generate sales leads. Another service focuses on getting more insight from customer feedback. The third service helps educational institutions get in better touch with their students.

The Oracle Data as a Service for Marketing offers a list of 300 million profiles of business users and companies, which could be used by B2B (business to business) firms to hunt for new customers as well as to develop a better understanding of their potential customer base.

Customers can learn about the types of employees each company on the list has, as well as the sales volume of the company, the age of the company and other pertinent factors.

Oracle developed the profile list in collaboration with a number of large business customer data providers, including Dun & Bradstreet and Madison Logic.

Oracle customers can access the list through the Oracle Data Management Platform, which is part of the Oracle Marketing Cloud.

Oracle also has a new service for getting more information from customer feedback. The service, called the Oracle Data as a Service for Customer Intelligence, is designed to give executives a better picture of what customers think of their products and services. It uses information collected by companies using the service as well as public information from 700 million social networking messages that Oracle collects each day. The service analyzes content across 20 different languages.

The Customer Intelligence service is also designed to provide executives with early insight into emerging trends or growing concerns among their customer base.

The Oracle Marketing Cloud for Student Engagement features a set of templates that can help a university or other learning institution run a marketing campaign to attract more students, as well as improve the retention rate of those already enrolled. One early customer of the program has been Canada’s University of New Brunswick.

The Oracle Marketing Cloud for Student Engagement combines into a single, focused package a number of existing Oracle cloud services, including Oracle’s Content Marketing, Social Relationship Management, Customer Relationship Management and AppCloud.

Oracle Marketing Cloud for Student Engagement is one of a number of Oracle cloud marketing services developed for specific industries. Other packages focus on sports, entertainment, manufacturing, insurance, asset management, nonprofits, life sciences and wealth management.

The new services were announced in conjunction with the Alliance Conference for Oracle application users, held this week in Nashville.

Joab Jackson covers enterprise software and general technology breaking news for The IDG News Service. Follow Joab on Twitter at @Joab_Jackson. Joab’s e-mail address is Joab_Jackson@idg.com.

Originally posted via “Oracle introduces big data-infused marketing services”

Rise of Data Capital by Paul Sonderegger

Data capital will replace big data as the big topic of boardroom conversation. This change will have a big effect on competitive strategy, and CEOs need to make decisions and prepare to talk to their boards about their plans. To succeed, CEOs need to embrace the idea that data is now a kind of capital as vital as financial capital to the development of new products, services, and business processes. The implications are far greater than the spread of fact-based decision-making through better analytics. In some cases, data capital substitutes for traditional capital. In fact, the McKinsey Global Institute says that data capital explains most of the valuation premium enjoyed by digitized companies.

But, we’re getting ahead of ourselves. First, we need to acknowledge a few basics.

Because every activity in commercial, public, and private lives uses and produces information, no organization is insulated from the effects of digitization and datafication. Every company is thus subject to three laws of data capital.

  1. Data comes from activity.

Data is a record of what happened. But if you’re not party to the activity when it happens, your opportunity to capture that data is lost. Forever. So, digitize and “datafy” key activities your firm already conducts with customers, suppliers, and partners—before rivals edge you out. At the same time, look up and down your industry’s value chain for activities you’re not part of yet. Invent ways to insert yourself in a digital capacity, thereby increasing your share of data that the industry generates.

Contributed to The Big Analytics: Leader’s Collaborative Book Project. Download your FREE copy at TheBigAnalytics.
 

Source: Rise of Data Capital by Paul Sonderegger

Customer Churn or Retention? A Must Watch Customer Experience Tutorial

Do you care about churn or retention? Here is a brilliant watch for you.

Customer retention and reduced churn rank high on the priority list for most businesses. So how can companies work through their customer experience to achieve them? In the video below, TCELab touches on several points that can help any company shape its strategy for building a Voice of the Customer program.

The video is taken from one of our affiliate calls, and it received a lot of positive response, so we decided to use it for educational purposes. If you don’t have an hour to spend, here is a timeline of what is covered and when.

Happy scrolling. Don’t forget to share it with your network so they can get things right as well.

0:00:07 What is Customer Experience Management (CEM)?
0:02:04 Why do CEOs care?
0:04:15 Why should CEM vendors be excited?
0:07:15 What does a CEM program look like?
0:07:45 Design of a CEM Program: CEM program components
0:11:20 Design of a CEM Program: Disparate sources of business data
0:14:23 Design of a CEM Program: Data linkage (connecting data to answer different questions)
0:17:17 Design of a CEM Program: Integrating your business data (mapping organizational silos to survey type)
0:20:58 Design of a CEM Program: Three ways to grow a business, and why NPS alone is not enough
0:25:40 TCELab product plug, with some CEM gold mixed in
0:33:10 TCELab CLAAP platform, with some CEM gold mixed in
0:39:00 TCELab product execution process, timelines and other relevant details (relevant to affiliate networks)
0:43:30 TCELab product lists (relevant to affiliate networks)
0:52:40 TCELab case study: Kashoo, plus useful guidance for SaaS companies’ CEM programs

Source by v1shal

Remembering Steve Jobs

Remembering Steve Jobs.

Everybody has an opinion about Steve Jobs. Please tell me how he has impacted your life in this brief survey.

I have read more about Steve Jobs after his passing than before. The outpouring of emotion and words of remembrance for him on the Web reflects the impact that he had on people who knew him and people who just used his products. I am part of the latter group.

Writing and Creating

I purchased my first computer, the Macintosh Plus, while I was in graduate school. I was amazed at the things I could do with this machine. I could write, play games (okay, mostly solitaire) and make art.  I wrote my first book, Measuring Customer Satisfaction and Loyalty, on that little magical box. My Mac allowed me to create everything for that book, from text and tables to fancy figures, helping me to describe complex ideas like sampling error. Sixteen years later, those exact figures still appear in the third edition of my book.

That book has greatly impacted my life and career. The process of writing the book helped me through a personal breakup. It helped me learn about the topic on which I was writing. It made me a better writer. The book itself even led me into a career in helping companies improve the quality of the relationship they have with their customers. Without the computer that Steve Jobs created, I know my life would have been different from what it is today.

Defining Words

Writing and creating art are a big part of my life. To some degree, I have Steve Jobs to thank for that. I created the word cloud you see in this post, combining the words used to describe him after his passing with the image of him on the Apple.com site.  The words are based on many articles/quotes I found online today. Some words represented in this picture are from quotes from President Obama, Mark Zuckerberg, Guy Kawasaki, and Bill Gates, to name a few. The larger the font size, the more frequently that word was used to describe him. This picture represents how people define him, remember him.
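For illustration, the frequency weighting behind such a word cloud can be sketched in a few lines; the quotes below are invented stand-ins for the articles actually used, and the stopword list is a minimal placeholder:

```python
from collections import Counter

def word_weights(texts, stopwords=frozenset({"the", "a", "and", "of", "to"})):
    """Tally how often each word appears across remembrance quotes.

    Font size in a word cloud is typically proportional to frequency,
    so the returned counts can be mapped directly to point sizes.
    """
    counts = Counter()
    for text in texts:
        for word in text.lower().split():
            word = word.strip(".,!?\"'")  # drop trailing punctuation
            if word and word not in stopwords:
                counts[word] += 1
    return counts

# Invented placeholder quotes, not the actual remembrances used.
quotes = [
    "A visionary and a genius.",
    "Visionary, creative, brilliant.",
    "A creative genius who changed the world.",
]
weights = word_weights(quotes)
# "visionary", "genius", and "creative" each appear twice, so they render largest.
```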

I will leave you with words from Steve Jobs. I recently watched a recording of his 2005 commencement address to the graduating class of Stanford. While I enjoyed his entire address, one particular passage resonated with me.

“Remembering that I’ll be dead soon is the most important tool I’ve ever encountered to help me make the big choices in life. Because almost everything — all external expectations, all pride, all fear of embarrassment or failure – these things just fall away in the face of death, leaving only what is truly important. Remembering that you are going to die is the best way I know to avoid the trap of thinking you have something to lose. You are already naked. There is no reason not to follow your heart.”

Steve Jobs

Thanks for following your heart, Steve.

Source by bobehayes

Development of the Customer Sentiment Index: Lexical Differences

This is Part 2 of a series on the Development of the Customer Sentiment Index (see introduction, and Part 1). The CSI assesses the extent to which customers describe your company/brand with words that reflect positive or negative sentiment. This post covers the development of a judgment-based sentiment lexicon and compares it to empirically-based sentiment lexicons.

Last week, I created four sentiment lexicons for use in a new customer experience (CX) metric, the Customer Sentiment Index (CSI). The four sentiment lexicons were empirically derived using data from a variety of online review sites: IMDB, Goodreads, OpenTable and Amazon/Tripadvisor. This week, I develop a sentiment lexicon using a non-empirical approach.

Human Judgment Approach to Sentiment Classification

The judgment-based approach does not rely on data to derive the sentiment values; rather, this method requires the use of subject matter experts to classify words into sentiment categories. This approach is time-consuming, requiring the subject matter experts to manually classify each of the thousands of words in our empirically-derived lexicons. To minimize the work required of the subject matter experts, an initial set of opinion words was generated using two studies.

In the first study, as part of an annual customer survey, a B2B technology company included an open-ended survey question, “Using one word, please describe COMPANY’S products/services.” From 1619 completed surveys, 894 customers provided an answer to the question. Many respondents used multiple words or the company’s name as their response, reducing the number of useful responses to 689. These responses yielded a total of 251 usable unique words.
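The cleaning step described above (dropping multi-word answers and company-name answers, then de-duplicating) can be sketched as follows; the answers and company name are invented for illustration:

```python
def usable_unique_words(responses, company_name):
    """Keep only single-word answers that aren't the company's name,
    then return both the usable responses and the distinct words."""
    usable = []
    for resp in responses:
        tokens = resp.strip().split()
        if len(tokens) != 1:
            continue  # multi-word answers are discarded
        word = tokens[0].lower().strip(".,!?")
        if word and word != company_name.lower():
            usable.append(word)
    return usable, set(usable)

# Hypothetical survey answers; "Acme" stands in for the company name.
answers = ["reliable", "Acme", "great value", "Reliable", "innovative"]
usable, unique = usable_unique_words(answers, "Acme")
# usable -> ['reliable', 'reliable', 'innovative']; unique -> {'reliable', 'innovative'}
```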

Also, the customer survey included questions that required customers to provide ratings on measures of customer loyalty (e.g., overall satisfaction, likelihood to recommend, likelihood to buy different products, likelihood to renew) and satisfaction with the customer experience (e.g., product quality, sales process, ease of doing business, technical support).

In the second study, as part of a customer relationship survey, I solicited responses from customers of wireless service providers (B2C sample). The sample was obtained using Mechanical Turk by recruiting English-speaking participants to complete a short customer survey about their experience with their wireless service provider. In addition to the standard rated questions in the customer survey (e.g., customer loyalty, CX ratings), the following question was used to generate the one-word opinion: “What one word best describes COMPANY? Please answer this question using one word.”

From 469 completed surveys, 429 customers provided an answer to the question. Many respondents used multiple words or the company’s name as their response, reducing the number of useful responses to 319. These responses yielded a total of 85 usable unique words.

Sentiment Rating of Opinion Words

The list of customer-generated words for each sample was independently rated by the two experts. I was one of those experts. My good friend and colleague was the other expert. We both hold a PhD in industrial-organizational psychology and specialize in test development (him) and survey development (me). We have extensive graduate-level training on the topics of statistics and psychological measurement principles. Also, we have applied experience, helping companies gain value from psychological measurements. We each have over 20 years of experience in developing/validating tests and surveys.

For each list of words (N = 251 and N = 85), each expert was given the list of words and was instructed to “rate each word on a scale from 0 to 10; where 0 is most negative sentiment/opinion and 10 is most positive sentiment/opinion; and 5 is the midpoint.” After providing their first rating of each word, each of the two raters was then given the opportunity to adjust their initial ratings. For this process, each rater was given the list of words with their initial ratings and was asked to make any adjustments to those ratings.
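Inter-rater agreement of this kind is the Pearson correlation between the two raters’ sentiment values. A self-contained sketch with hypothetical 0-10 ratings for five words:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two raters' sentiment values."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical 0-10 sentiment ratings from two raters for five words.
rater1 = [9, 8, 2, 5, 10]
rater2 = [8, 9, 1, 4, 10]
r = pearson_r(rater1, rater2)  # high agreement -> r close to 1
```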

Results of Human Judgment Approach to Sentiment Classification

Table 1. Descriptive Statistics and Correlations of Sentiment Values across Two Expert Raters

Descriptive statistics of and correlations among the expert-derived sentiment values of customer-generated words appear in Table 1. As you can see, the two raters assigned very similar sentiment ratings to words in both sets, and average ratings were similar. The inter-rater agreement between the two raters was r = .87 for the 251 words and r = .88 for the 85 words.

After slight adjustments, the inter-rater agreement between the two raters improved to r = .90 for the list of 251 words and .92 for the list of 85 words. This high inter-rater agreement indicated that the raters were consistent in their interpretation of the two lists of words with respect to sentiment.

Figure 1. Distribution of Sentiment Values of Customer-Generated Words using Subject Matter Experts’ Sentiment Lexicon

Because of the high agreement between the raters and comparable means between raters, an overall sentiment score for each word was calculated as the average of the raters’ second/adjusted rating (See Table 1 or Figure 2 for descriptive statistics for this metric).

Comparing Empirically-Derived and Expert-Derived Sentiment

In all, I have created five lexicons; four lexicons are derived empirically from four data sources (i.e., OpenTable, Amazon/Tripadvisor, Goodreads and IMDB) and one lexicon is derived using subject matter experts’ sentiment classification.

Table 2. Descriptive Statistics and Correlations among Sentiment Values of Customer-Generated Words across Five Sentiment Lexicons (N = 251)

I compared these five lexicons to better understand their similarities and differences. I applied the four empirically-derived lexicons to each list of customer-generated words. So, in all, for each list of words, I have five sentiment scores.

The descriptive statistics of and correlations among the five sentiment scores for the 251 customer-generated words appear in Table 2. Table 3 houses the same information for the 85 customer-generated words.

Table 3. Descriptive Statistics and Correlations among Sentiment Values of Customer-Generated Words across Five Sentiment Lexicons (N = 85)

As you can see, there is high agreement among the empirically-derived lexicons (average correlation = .65 for the list of 251 words and .79 for the list of 85 words).

There are statistically significant mean differences across the empirically-derived lexicons; Amazon/Tripadvisor has the highest average sentiment value and Goodreads has the lowest. Lexicons from IMDB and OpenTable provide similar means. The expert judgment lexicon provides the lowest average sentiment ratings for each list of customer-generated words. The absolute sentiment value of a word is dependent on the sentiment lexicon you use. So, pick a lexicon and use it consistently; changing your lexicon could change your metric.

Looking at the correlations of the expert-derived sentiments with each of the empirically-derived sentiments, we see that the OpenTable lexicon had a higher correlation with the experts than the Goodreads lexicon did. This pattern of results makes sense. The OpenTable sample is much more similar to the sample on which the experts provided their sentiment ratings: OpenTable represents a customer/supplier relationship regarding a service, while the Goodreads sample represents a different type of relationship (customer/book quality).

Summary and Conclusions

These two studies demonstrated that subject matter experts are able to scale words along a sentiment scale. There was high agreement among the experts in their classification.

Additionally, these judgment-derived lexicons were very similar to four empirically derived lexicons. Lexicons based on subject matter experts’ sentiment classification/scaling of words are highly correlated to empirically-derived lexicons. It appears that each of the five sentiment lexicons tells you roughly the same thing as the other lexicons.

The empirically-derived lexicons are less comprehensive than the subject matter experts’ lexicon regarding customer-generated words. By design, the subject matter experts classified all words that were generated by customers; some of the words used by customers do not appear in the empirically-derived lexicons. For example, the OpenTable lexicon represents only 65% (164/251) of the customer-generated words for Study 1 and 71% (60/85) of the customer-generated words for Study 2. For the purpose of calculating the Customer Sentiment Index, empirically-derived lexicons could therefore be augmented with lexicons based on subject matter experts’ classification/scaling of words.
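Coverage figures like the 65% and 71% cited above come from checking each customer-generated word against the lexicon. A minimal sketch with invented words and sentiment values:

```python
def lexicon_coverage(customer_words, lexicon):
    """Fraction of customer-generated words that appear in a sentiment lexicon."""
    covered = [w for w in customer_words if w in lexicon]
    return len(covered) / len(customer_words)

# Hypothetical 6-word customer list against a small invented lexicon
# (word -> sentiment value on a 0-10 scale).
words = ["reliable", "slow", "innovative", "buggy", "awesome", "meh"]
lexicon = {"reliable": 8.5, "slow": 3.0, "awesome": 9.5, "buggy": 2.0}
coverage = lexicon_coverage(words, lexicon)  # 4 of 6 words covered
```

Words that fall outside the lexicon would simply contribute no sentiment score, which is why low coverage argues for augmenting with an expert-classified lexicon.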

In the next post, I will continue presenting information about validating the Customer Sentiment Index (CSI). So far, the analysis shows that the sentiment scores of the CSI are reliable (we get similar results using different lexicons). We now need to understand what the CSI is measuring. I will show this by examining the correlation of the CSI with other commonly used customer metrics, including likelihood to recommend (e.g., NPS), overall satisfaction and CX ratings of important customer touch points (e.g., product quality, customer service). Examining correlations of this nature will also shed light on the usefulness of the CSI in a business setting.

Source: Development of the Customer Sentiment Index: Lexical Differences

Aligning Sales Talent to Drive YOUR Business Goals

5steps_analytics
A confluence of new capabilities is creating an innovative, more precise approach to performance improvement. New approaches include advanced analytics, refined sales competency and behavioral models, adaptive learning, and multiple forms of technology enablement. In a prior post (The Myth of the Ideal Sales Profile) we explored an emerging new paradigm that is disrupting traditional thinking with respect to best practices: the world according to YOU.

However, with only 17% of sales organizations leveraging sales talent analytics (TDWI Research), it seems that most CSOs and their HR business partners are gambling, using intuition as the basis for making substantial investments in sales development initiatives. If the gamble doesn’t pay off, the investment is wasted.

Is your sales talent aligned to your company’s strategy of increasing revenue? According to the Conference Board, 73% of CEOs say no. This lack of alignment is the main reason why 86% of CSOs expect to miss their 2015 revenue targets (CSO Insights). The ability to properly align your sales talent to your company’s business goals is the difference between being in the 86% and the 14%.

What Happens When You Assume?

Historically, sales and Human Resource leaders based sales talent alignment decisions — both development of the existing team and acquisition of future talent — on assumptions and somewhat subjective data.

Common practices include:

  • Polling the field to determine the focus for sales training
  • Hiring sales talent based largely on the subjective opinion of interviewers
  • Defining your “ideal seller profile” based on the guidance of industry pundits
  • Making a hiring decision based on the fact that the candidate made Achiever’s Club 3 of the last 5 years at their previous company
  • Deploying a sales training program based on what a colleague did at their last company

Aligning sales talent based on any of the above is likely to land your company in the 86% because these approaches fail far more times than they succeed. They fail to consider the many cause-and-effect elements that impact success in your company, in your markets, for your products, and for your customers. As proof of their low success rate, a groundbreaking study by ES Research found that 90% of sales training [development initiatives] had no lasting impact after 120 days. And the news isn’t any better when it comes to sales talent acquisition; Accenture reports that the average ramp-up time for new reps is 7-12 months.

Defining YOUR Ideal Seller Profile(s)

So how does your organization begin to apply the “new way” (see illustration below) to optimize sales performance? It begins with zeroing in on the capabilities of your salespeople that align most closely to the specific goals of your business. In essence, it means understanding what YOUR ideal seller profiles are.

Applying the new way begins with the specific business goals of your company. What if market share growth were the preeminent strategic goal for your organization? Would it not be extremely valuable to understand which sales competencies were most likely to impact that aspect of your corporate strategy? The obvious answer is yes; and the obvious question is: how do you align and optimize sales to drive increased market share?

How does a CSO identify where to target development in order to have the biggest impact on business results?

By using facts as the basis for these substantial investments. Obtaining facts requires several essential ingredients. The first is a rigorous, comprehensive model of sales competencies; that is, a well-defined model of “what good looks like” for a broad range of sales competencies. This model can be adapted for a specific selling organization and provides the baseline for sales-specific assessments (personality, knowledge, cognitive ability, behavior, etc.).

Then, by applying advanced analytics, including Structural Equation Modeling (SEM), we can begin to identify cause-and-effect relationships between specific competencies and the metrics and goals of YOUR organization. With SEM, CSOs can statistically identify the knowledge and behaviors that set top performers apart from the rest of their team. With this valuable insight, the organization can align both talent development and acquisition to the company’s most important business goals.
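SEM itself requires specialized tooling (e.g., semopy or lavaan) and is beyond a short example, but the underlying idea of separating top performers from the rest on each competency can be illustrated with a much simpler mean-difference sketch. All data below are hypothetical, and this is explicitly not SEM, just a first-pass diagnostic:

```python
def competency_gaps(reps, top_cutoff):
    """For each competency, compare the average score of top performers
    (quota attainment at or above the cutoff) with everyone else.
    Large gaps flag competencies worth targeting for development."""
    top = [r for r in reps if r["attainment"] >= top_cutoff]
    rest = [r for r in reps if r["attainment"] < top_cutoff]
    competencies = [k for k in reps[0] if k != "attainment"]
    gaps = {}
    for c in competencies:
        mean_top = sum(r[c] for r in top) / len(top)
        mean_rest = sum(r[c] for r in rest) / len(rest)
        gaps[c] = mean_top - mean_rest
    return gaps

# Hypothetical assessment data: competency scores (1-5) and quota attainment (%).
reps = [
    {"prospecting": 4, "negotiation": 5, "attainment": 120},
    {"prospecting": 4, "negotiation": 4, "attainment": 110},
    {"prospecting": 4, "negotiation": 2, "attainment": 80},
    {"prospecting": 3, "negotiation": 2, "attainment": 70},
]
gaps = competency_gaps(reps, top_cutoff=100)
# negotiation shows the larger top-vs-rest gap, so it is the better development target
```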

Sales Talent Analytics Provide Proof

Times have changed. The days of aligning sales talent based on gut feel, assumptions or generally accepted best practices are over. By leveraging sales talent analytics, today’s sales leader can apply a proven three-step approach to stop gambling and get the facts: statistically pinpoint where to focus development of the sales team, quantifiably measure the business impact/ROI of that development, and improve the quality of new hires. But buyer beware: not all analytical approaches are equal. The vast majority leverage correlation-based analytics, which can lead to erroneous conclusions.

By the way, we’re not eschewing well-designed research that provides insights into the broader application of best practices. Aberdeen Group found that best-in-class sales teams that leverage data and analytics increased team quota attainment 12.3% year over year (vs. 1% for an average company) and increased average deal size 8% year over year (vs. 0.8%).

It’s time to define the ideal seller profile for YOUR company. In our next post in this series, we answer the question – how do we capitalize on that understanding to drive the highest impact on our business goals?

Source: Aligning Sales Talent to Drive YOUR Business Goals by analyticsweekpick

Predicting the Future and Shaping Strategy with HR Analytics

Technological innovation has given Human Resources the ability to predict the future — and has moved HR into the boardroom. But it’s up to data-savvy HR professionals to make that move permanent.

“Why should I, as a managing director or a CEO, give as much credibility to HR as I do to finance, operations, procurement, sales and marketing?” Personnel Today asked last week. “[Because] those functions are data-led; they can provide me with numeric business cases, forecasts and scenarios; [and] I know where I stand with them.”

Like other departments, HR uses data analytics to closely examine what’s really happening within the organization. It is humanizing big data with “people analytics,” and last month Bank of America tasked its HR lead with the institution’s critical post-financial-crisis stress testing.

HR’s Seat at the Table

“This is an absolutely exciting time to be in human resources,” SAP’s David Swanson said Monday in Las Vegas, ahead of SuccessConnect 2015. “I’ve been in HR for the better part of 20 years, and I really feel that for the first time HR is front and center at the executive table.”

CEOs want to learn from past hiring successes and failures, a job that’s perfect for analytics-enabled HR departments, according to Swanson. The end goal is to use analytics for predicting the future, knowing whom to hire — and which new hires will most quickly become productive.

“We have the opportunity in HR to lead the conversation around strategy — and development of strategy,” Swanson said. “Unfortunately, many of us in HR, because we’re not comfortable using data, can only talk from a position of intuition or gut feel.”

Data-Driven Decisions

Swanson leads a global team of product evangelists. He and his teammates explain to customers and prospects how his company runs cloud-based human capital management software.

“One of the beauties of my role is that I get to go out and talk to companies about how we use the SuccessFactors solutions … to really take a look at what’s making a difference in the workplace,” Swanson said. “We can use it to predict future success — versus just saying, ‘Well, I think this might happen.’”

That data-driven confidence will help HR professionals identify behaviors and interview styles that attract better employees, as well as qualities that make effective workers — and lead to faster promotions.

HR’s Turning Point

The promise of big data and analytics brought HR to the table, but that promise alone won’t keep them there. HR professionals must learn to embrace the technology — and wield it effectively.

“We have an opportunity to help make the strategy, not just execute the strategy,” Swanson said. “The way that we can do that is using data and analytics to be able to predict success.”

This story originally appeared on SAP Business Trends. Follow Derek on Twitter: @DKlobucher

 

Source: Predicting the Future and Shaping Strategy with HR Analytics

The Best Likelihood to Recommend Metric: Mean Score or Net Promoter Score?

A successful customer experience management (CEM) program requires the collection, synthesis, analysis and dissemination of customer metrics.  Customer metrics are numerical scores or indices that summarize customer feedback results for a given customer group or segment. Customer metrics are typically calculated using customer ratings of survey questions. I recently wrote about how you can evaluate the quality of your customer metrics and listed four questions you need to ask, including how the customer metric is calculated.  There needs to be a clear, logical method of how the metric is calculated, including all items (if there are multiple items) and how they are combined.

Calculating Likelihood to Recommend Customer Metric

Let’s say that we conducted a survey asking customers the following question: “How likely are you to recommend COMPANY ABC to your friends/colleagues?” Using a rating scale from 0 (not at all likely) to 10 (extremely likely), customers are asked to provide their loyalty rating. How should you calculate a metric to summarize the responses? What approach gives you the most information about the responses?

There are different ways to summarize these responses to arrive at a customer metric. Four common ways to calculate a metric are:

  1. Mean Score:  This is the arithmetic average of the set of responses. The mean is calculated by summing all responses and dividing by the number of responses. Possible scores can range from 0 to 10.
  2. Top Box Score: The top box score represents the percentage of respondents who gave the best responses (a 9 or 10 on a 0-10 scale). Possible percentage scores can range from 0 to 100.
  3. Bottom Box Score: The bottom box score represents the percentage of respondents who gave the worst responses (0 through 6 on a 0-10 scale). Possible percentage scores can range from 0 to 100.
  4. Net Score: The net score represents the difference between the Top Box Score and the Bottom Box Score. Net scores can range from -100 to 100. While the net score was made popular by the Net Promoter Score camp, others have used a net score to calculate a metric (see Net Value Score). While the details might differ, net scores take the same general approach in their calculations (percent of good responses minus percent of bad responses). For the remainder, I will focus on the Net Promoter Score methodology.
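The four summary metrics above can all be computed directly from a vector of 0-10 ratings. A minimal sketch (the ten sample ratings are invented for illustration):

```python
def ltr_metrics(ratings):
    """Summarize 0-10 likelihood-to-recommend ratings four ways:
    mean, top box (% giving 9-10), bottom box (% giving 0-6),
    and net score (top box minus bottom box)."""
    n = len(ratings)
    mean = sum(ratings) / n
    top_box = 100 * sum(1 for r in ratings if r >= 9) / n
    bottom_box = 100 * sum(1 for r in ratings if r <= 6) / n
    return {
        "mean": mean,
        "top_box": top_box,
        "bottom_box": bottom_box,
        "net": top_box - bottom_box,
    }

# Hypothetical sample of ten ratings.
ratings = [10, 9, 9, 8, 8, 7, 6, 5, 3, 10]
m = ltr_metrics(ratings)
# mean 7.5, top box 40%, bottom box 30%, net score 10
```

Note how the net score discards the 7s and 8s entirely, which is why the mean and the net score can move differently on the same data.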

Comparing the Customer Metrics

To study these four different ways to summarize the “Likelihood to recommend” question, I wanted to examine how these metrics varied over different companies/brands. Toward that end, I re-used some prior research data by combining responses across three data sets. Each data set is from an independent study about consumer attitudes toward either their PC Manufacturer or Wireless Service Provider. Here are the specifics for each study:

  1. PC manufacturer: Survey of 1058 general US consumers in Aug 2007 about their PC manufacturer. All respondents for this study were interviewed to ensure they met the correct profiling criteria, and were rewarded with an incentive for filling out the survey. Respondents were ages 18 and older. GMI (Global Market Insite, Inc., www.gmi-mr.com) provided the respondent panels and the online data collection methodology.
  2. Wireless service provider: Survey of 994 US general consumers in June 2007 about their wireless provider. All respondents were from a panel of General Consumers in the United States ages 18 and older. The potential respondents were selected from a general panel which is recruited in a double opt-in process; all respondents were interviewed to ensure they meet correct profiling criteria. Respondents were given an incentive on a per-survey basis. GMI (Global Market Insite, Inc., www.gmi-mr.com) provided the respondent panels and the online data collection methodology.
  3. Wireless service providers: Survey of 5686 worldwide consumers from Spring 2010 about their wireless provider. All respondents for this study were rewarded with an incentive for filling out the survey. Respondents were ages 18 or older. Mob4Hire (www.mob4hire.com) provided the respondent panels and the online data collection methodology.
Table 1. Correlations among different summary metrics of the same question (likelihood to recommend).

From these three studies, spanning nearly 8,000 respondents, I was able to calculate the four customer metrics for 48 different brands/companies. Only companies with 30 or more responses were used in the analyses. Of the 48 brands, most (N = 41) were from the wireless service provider industry; the remaining seven were from the PC industry. Each of the 48 brands had all four metrics calculated on the “Recommend” question. The descriptive statistics of the four metrics and the correlations across the 48 brands appear in Table 1.

Figure 1. Scatterplot of two ways to summarize the “Likelihood to Recommend” question: Mean Score and Net Score (NPS).

As you can see in Table 1, the four customer metrics are highly related to each other. The correlations among the metrics range from .85 to .97 (the negative correlations with the Bottom Box Score reflect the fact that it is a measure of badness; higher bottom box scores indicate more negative customer responses).

These extremely high correlations indicate that the four metrics say roughly the same thing about the 48 brands. That is, brands with high Mean Scores also have high Net Scores, high Top Box Scores and low Bottom Box Scores. The metrics are largely redundant.

When you plot the Mean Scores against the Net Scores, the close relationship between the two metrics is clear (see Figure 1). In fact, the relationship is so strong that you can, with great accuracy, predict your NPS (y) from your Mean Score (x) using the regression equation in Figure 1.
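This redundancy is easy to reproduce in simulation. The sketch below uses made-up brand data, not the survey data analyzed above, so the exact numbers are illustrative: it generates 48 simulated brands, computes each brand's Mean Score and NPS, and checks how strongly the two correlate.

```python
import random

def nps(responses):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    n = len(responses)
    return 100 * (sum(r >= 9 for r in responses) - sum(r <= 6 for r in responses)) / n

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(1)
means, net_scores = [], []
for _ in range(48):  # 48 simulated brands
    center = random.uniform(4, 9)  # each brand has its own loyalty level
    # 200 responses per brand, clamped to the 0-10 scale
    brand = [min(10, max(0, round(random.gauss(center, 1.5)))) for _ in range(200)]
    means.append(sum(brand) / len(brand))
    net_scores.append(nps(brand))

r = pearson(means, net_scores)
print(f"correlation between Mean Score and NPS: {r:.2f}")
```

As in the real data, the simulated correlation comes out very high: brands with high mean scores are, mechanically, the brands with high net scores.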

Mean Score vs Net Promoter Score vs Top/Bottom Box

The “Likelihood to Recommend” question is a commonly used question in customer surveys. I use it as part of a larger set of customer loyalty questions. What is the most efficient way to summarize the results? Based on the analyses, here are some conclusions regarding the different methods.

Figure 2. Scatterplot of two ways to summarize the “Likelihood to Recommend” question: Net Score (NPS) and Mean Score.

1. NPS does not provide any additional insight beyond what we know from the Mean Score. Recall that the correlation between the Mean Score and the NPS across the 48 brands was .97! Both metrics tell you the same thing about how the brands are ranked relative to each other. The Mean Score uses all the data to calculate the metric, while the NPS ignores specific customer segments. So, what is the value of the NPS?

2. The NPS is ambiguous and difficult to interpret. A given NPS value can be derived from very different combinations of promoters and detractors. For example, one company could arrive at an NPS of 15 with 40% promoters and 25% detractors, while another could arrive at the same NPS of 15 with 20% promoters and 5% detractors. Are these two companies with the same NPS really the same?

Also, and more importantly, the ambiguity of the NPS stems from its lack of a scale of measurement. While the calculation of the NPS is straightforward (take the difference of two percentages), the resulting score is hard to interpret because the difference transformation creates an entirely new scale that ranges from -100 to 100. So, what does a score of zero (0) indicate? Is that a bad score? Does it mean a majority of your customers would not recommend you?

Understanding what an NPS of zero (0) indicates requires mapping the NPS value back to the original scale of measurement (the 0-10 likelihood scale). A scatterplot (and corresponding regression equation) of NPS and Mean Score is presented in Figure 2. If we plug zero (0) into the equation, the expected Mean Score is 7.1, indicating that a majority of your customers would recommend you (the mean score is above the midpoint of the rating scale). If you know your NPS, you can estimate your mean score using this formula. Even though it is based on a narrowly defined sample, I think the regression model reflects the constraints of the calculations more than the characteristics of the sample, so it should provide a good approximation. If you try it, let me know how accurate it is.

3. Top/Bottom Box scores provide information about clearly defined customer segments. Segmenting customers based on their survey responses makes good measurement and business sense. Top box and bottom box methods help you create customer segments (e.g., disloyal, loyal, very loyal) that show meaningful differences in driving business growth. So, rather than collapsing the customer segments into a net score (see number 2), you are better off simply reporting the absolute percentages of each segment.

Summary

Figure 3. RAPID Loyalty results: reporting loyalty results using mean scores and top/middle/bottom box scores (customer segments).

Communicating survey results requires summary metrics, which are used to track progress and benchmark against loyalty leaders. There are several ways to calculate a summary metric (e.g., mean score, top box, bottom box, net score), yet my analyses show that these metrics tell you essentially the same thing: all of them were highly correlated with each other.

There are clear limitations to the NPS metric. The NPS does not provide any additional insight about customer loyalty beyond what the mean score tells us. The NPS is also ambiguous and difficult to interpret: without a clear unit of measurement for the difference score, the meaning of any given NPS (say, 24) is unclear. The components of the NPS, however, are useful to know.

I typically report survey results using mean scores and top/middle/bottom box results. I find that combining these methods helps paint a comprehensive picture of customer loyalty. Figure 3 includes a graph that summarizes responses across three different types of customer loyalty. I never report net scores, as they provide no additional insight beyond the mean score or the customer segment scores.


Energy companies have more data than they know what to do with

Energy enterprises, specifically oil and natural gas companies, are witnessing a monumental shift in the global economy. North America is ramping up production, which is raising a number of health, safety and environmental concerns among U.S. and Canadian citizens alike.

It’s easy to view big data analytics as a cure-all for the challenges the energy industry faces, but using the technology doesn’t automatically solve those problems. As I’ve repeatedly said, data visualization merely provides finished intelligence to its users; people are responsible for figuring out how to apply this newfound knowledge to their operations.

“The ultimate goal of the modern energy company is to optimize production efficiency.”

What’s the end goal? Affordability
If energy companies can find efficient methods of extracting and refining larger amounts of fossil fuels without increasing the resources they use, economics would suggest the price of oil and natural gas would decrease. Ultimately, affordability is dictated by supply and demand, but I digress.

From the perspectives of McKinsey & Company’s Stefano Martinotti, Jim Nolten, and Jens Arne Steinsbø, the ultimate goal of the modern energy company is to optimize production efficiency without sacrificing residential health, worker safety and the environment. Based on McKinsey’s research, which specifically scrutinized oil drilling operations in the North Sea (the water body located between Great Britain, Scandinavia and the Netherlands), the authors discovered that oil companies with high production efficiencies did not incur high costs. Instead, these enterprises made systematic changes to existing operations by:

  • Eliminating equipment malfunctions
  • Choosing assets based on quality and historic performance data
  • Aligning personnel and properties with the market to plan and implement shutdowns

Analytics as an enabler of automation
The McKinsey authors maintained that automating operations was a key component to further improving existing oil drilling operations. This is where you get into the analytics applications and use cases associated with network-connected devices. Many of the North Sea’s offshore oil extraction facilities are equipped with comprehensive data infrastructures composed of network assets, sensors and software.

Data flow is a huge part of the automation process.

The authors noted such platforms can possess as many as 40,000 data tags, not all of which are connected or used. The argument stands that if unused sensors and other technologies were integrated into central operations to create a smart drilling facility, such a property could save between $220 million and $260 million annually. The possibilities and benefits go beyond the bottom line:

  • Automation could extend the lifecycle of equipment that is slowly becoming antiquated
  • New uses for under-allocated assets could be recognized
  • Equipment assessments could be conducted by applications receiving data from radio-frequency identification tags, enabling predictive maintenance
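To make the predictive-maintenance idea in the last bullet concrete, here is a minimal sketch. The sensor values and the z-score rule are hypothetical; production systems would use far richer models, but the core idea of flagging a stream when recent readings drift away from their historical baseline is the same.

```python
from statistics import mean, stdev

def flag_for_maintenance(readings, window=5, z_threshold=3.0):
    """Flag a sensor stream when the mean of the most recent `window`
    readings drifts more than `z_threshold` standard deviations away
    from the historical baseline."""
    baseline, recent = readings[:-window], readings[-window:]
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:           # baseline never varied; nothing to compare against
        return False
    return abs(mean(recent) - mu) / sigma > z_threshold

# Hypothetical vibration readings from one pump sensor tag:
# a stable baseline, then a steady upward drift
history = [1.0, 1.1, 0.9, 1.0, 1.1, 1.0, 0.9, 1.1, 1.0, 1.0,
           1.6, 1.8, 2.0, 2.1, 2.3]
print(flag_for_maintenance(history))  # → True
```

An application receiving tag data could run a check like this per asset and schedule an inspection before the equipment actually fails.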

“A smart drilling facility could save between $220 million and $260 million annually.”

Resolving industry challenges
From a holistic standpoint, the oil and natural gas sector will use data analytics to effectively handle a number of industry challenges, some of them posed by internal or external forces.

One of the obvious challenges is the low tolerance people have for health, safety and environmental accidents. Think of how the BP oil spill of 2010 affected consumer sentiment toward the energy industry. Technologies and processes associated with data analytics can help address this issue by monitoring asset integrity, accurately anticipating when failures are about to occur and regularly scrutinizing how operations affect certain areas.

Generally, use cases expand as data scientists, operators and other professionals flex their creative muscles. There’s no telling how analytics will be applied in the near future.

Originally posted via “Energy companies have more data than they know what to do with”
