Why Using the ‘Cloud’ Can Undermine Data Protections

By Jack Nicas

While the increasing use of encryption helps smartphone users protect their data, another technology that is sometimes related to it, cloud computing, can undermine those protections.

The reason: encryption can keep certain smartphone data outside the reach of law enforcement. But once the data is uploaded to companies’ computers connected to the Internet–referred to as “the cloud”–it may be available to authorities with court orders.

“The safest place to keep your data is on a device that you have next to you,” said Marc Rotenberg, head of the Electronic Privacy Information Center. “You take a bit of a risk when you back up your device. Once you do that it’s on another server.”

Encryption and cloud computing “are two competing trends,” Mr. Rotenberg said. “The movement to the cloud has created new privacy risks for users and businesses. Encryption does offer the possibility of restoring those safeguards, but it has to be very strong and it has to be under the control of the user.”

Apple is fighting a government request that it help the Federal Bureau of Investigation unlock the iPhone of Syed Rizwan Farook, the shooter in the December terrorist attack in San Bernardino, Calif.

The FBI believes the phone could contain photos, videos and records of text messages that Mr. Farook generated in the final weeks of his life.

The data produced before then? Apple already provided it to investigators, under a court search warrant. Mr. Farook last backed up his phone to Apple’s cloud service, iCloud, on Oct. 19.

Encryption scrambles data to make it unreadable until accessed with the help of a unique key. The most recent iPhones and Android phones come encrypted by default, with a user’s passcode activating the unique encryption key stored on the device itself. That means a user’s contacts, photos, videos, calendars, notes and, in some cases, text messages are protected from anyone who doesn’t have the phone’s passcode. The list includes hackers, law enforcement and even the companies that make the phones’ software: Apple and Google.
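For readers who want to see the idea in miniature, here is a rough Python sketch using the cryptography library: a key is derived from a passcode and used to scramble and unscramble a message. The passcode, salt and message are made up, and this only illustrates the principle, not how Apple or Google actually implement device encryption.

```python
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

passcode = b"123456"      # hypothetical user passcode
salt = os.urandom(16)     # random salt, stored alongside the ciphertext

# Derive a unique encryption key from the passcode.
kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=390_000)
key = base64.urlsafe_b64encode(kdf.derive(passcode))

# Data encrypted with the key is unreadable without it.
ciphertext = Fernet(key).encrypt(b"contacts, photos, notes ...")
plaintext = Fernet(key).decrypt(ciphertext)  # only works with the same key
```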

However, Apple and Google software prompt users to back up their devices on the cloud. Doing so puts that data on the companies’ servers, where it is more accessible to law enforcement with court orders.

Apple says it encrypts data stored on its servers, though it holds the encryption key. The exception is so-called iCloud Keychain data that stores users’ passwords and credit-card information; Apple says it can’t access or read that data.

Officials appear to be asking for user data more often. Google said that it received nearly 35,000 government requests for data in 2014 and that it complies with the requests in about 65% of cases. Apple’s data doesn’t allow for a similar comparison since the company reported the number of requests from U.S. authorities in ranges in 2013.

Whether or not they back up their smartphones to the cloud, most users generate an enormous amount of data that is stored outside their devices and is thus more accessible to law enforcement.

“Your phone is an incredibly intricate surveillance device. It knows everyone you talk to, where you are, where you live and where you work,” said Bruce Schneier, chief technology officer at cybersecurity firm Resilient Systems Inc. “If you were required to carry one by law, you would rebel.”

Google, Yahoo Inc. and others store users’ emails on their servers. Telecom companies keep records of calls and some standard text messages. Facebook Inc. and Twitter Inc. store users’ posts, tweets and connections.

Even Snapchat Inc., the messaging service known for photo and video messages that quickly disappear, stores some messages. The company says in its privacy policy that “in many cases” it automatically deletes messages after they are viewed or expire. But it also says that “we may also retain certain information in backup for a limited period or as required by law” and that law enforcement sometimes requires it “to suspend our ordinary server-deletion practices for specific information.”

Snapchat didn’t respond to a request for comment.

Write to Jack Nicas at jack.nicas@wsj.com
Copyright (c) 2016 Dow Jones & Company, Inc.

Originally Posted at: Why Using the ‘Cloud’ Can Undermine Data Protections

Jan 31, 19: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[ AnalyticsWeek BYTES]

>> Startup Movement Vs Momentum, a Classic Dilemma by v1shal

>> Accelerating Discovery with a Unified Analytics Platform for Genomics by analyticsweek

>> Black Friday and Cyber Monday – Analyzing Your Holiday Numbers by analyticsweek

[ NEWS BYTES]

>> Building a data security strategy – why the industry needs to work together – SC Magazine Under Data Security

>> Global Prescriptive Analytics Market 2018 Growth, Production, Suppliers, Consumption, Region Forecast to 2023 – The West Chronicle (press release) (blog) Under Prescriptive Analytics

>> New Master’s in Data Science Prepares Students for Fastest Growing Field in US – University of New Haven News Under Data Science

[ FEATURED COURSE]

Artificial Intelligence

This course includes interactive demonstrations which are intended to stimulate interest and to help students gain intuition about how artificial intelligence methods work under a variety of circumstances…. more

[ FEATURED READ]

The Black Swan: The Impact of the Highly Improbable

A black swan is an event, positive or negative, that is deemed improbable yet causes massive consequences. In this groundbreaking and prophetic book, Taleb shows in a playful way that Black Swan events explain almost eve… more

[ TIPS & TRICKS OF THE WEEK]

Data aids, not replaces, judgement
Data is a tool and a means to help build consensus and facilitate human decision-making, not replace it. Analysis converts data into information; information, via context, leads to insight. Insights lead to decisions, which ultimately lead to outcomes that bring value. So data is just the start; context and intuition also play a role.

[ DATA SCIENCE Q&A]

Q:Explain selection bias (with regard to a dataset, not variable selection). Why is it important? How can data management procedures such as missing data handling make it worse?
A: * Selection of individuals, groups or data for analysis in such a way that proper randomization is not achieved
Types:
– Sampling bias: systematic error due to a non-random sample of a population causing some members to be less likely to be included than others
– Time interval: a trial may be terminated early at an extreme value (for ethical reasons), but the extreme value is likely to be reached by the variable with the largest variance, even if all the variables have similar means
– Data: “cherry picking”, when specific subsets of the data are chosen to support a conclusion (e.g., citing examples of plane crashes as evidence of airline flight being unsafe, while ignoring the far more common flights that complete safely)
– Studies: performing experiments and reporting only the most favorable results
– Can lead to inaccurate or even erroneous conclusions
– Statistical methods can generally not overcome it

Why can data handling make it worse?
– Example: individuals who know or suspect that they are HIV positive are less likely to participate in HIV surveys
– Missing data handling will increase this effect, as imputation will be based mostly on HIV-negative respondents
– Prevalence estimates will therefore be inaccurate (a small simulation follows)
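As a small illustration of the HIV-survey example above, the following Python simulation uses made-up prevalence and participation rates; the exact numbers are arbitrary and only show the direction of the bias.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
hiv_positive = rng.random(n) < 0.05                 # assumed true prevalence: 5%

# Assume HIV-positive individuals are only half as likely to take part in the survey.
participation_prob = np.where(hiv_positive, 0.3, 0.6)
participates = rng.random(n) < participation_prob

print("True prevalence:    ", hiv_positive.mean())
print("Observed prevalence:", hiv_positive[participates].mean())  # biased downward
```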

[ VIDEO OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData #Podcast with Dr. Nipa Basu, @DnBUS

[ QUOTE OF THE WEEK]

Data are becoming the new raw material of business. – Craig Mundie

[ PODCAST OF THE WEEK]

@DrewConway on fabric of an IOT Startup #FutureOfData #Podcast

[ FACT OF THE WEEK]

Data volumes are exploding: more data has been created in the past two years than in the entire previous history of the human race.

Sourced from: Analytics.CLUB #WEB Newsletter

What is Customer Loyalty? Part 2: A Customer Loyalty Measurement Framework


Last week, I reviewed several definitions of customer loyalty (see What is Customer Loyalty? Part 1) that are being used in business today. It appears that definitions fall into two broad categories of loyalty: emotional and behavioral. Emotional loyalty is about how customers generally feel about a company/brand (e.g., when somebody loves, trusts, or is willing to forgive the company/brand). Behavioral loyalty, on the other hand, is about the actions customers engage in when dealing with the brand (e.g., when somebody recommends, continues to buy, or buys different products from the company/brand). Generally speaking, then, we might think of customer loyalty in the following way:

Customer loyalty is the degree to which customers experience positive feelings for and engage in positive behaviors toward a company/brand.

This week, I will propose a customer loyalty measurement framework to help you understand how to conceptualize and measure customer loyalty. After all, to be of practical value to business, customer loyalty needs to be operationalized (i.e., the concept of loyalty needs to be brought into the measurable world). Once created, these metrics can be used by businesses in a variety of ways to improve marketing, sales, human resources, and service and support processes, to name a few. First, I will present two approaches to measuring customer loyalty.

Measurement Approaches

There are two general approaches to measuring customer loyalty: 1) objective approach and 2) subjective (self-reported) approach.

  1. The objective measurement approach includes system-captured metrics that involve hard numbers regarding customer behaviors that are beneficial to the company. Data can be obtained from historical records and other objective sources, including purchase records (captured in a CRM system) and other online behavior. Examples of objective loyalty data include computer-generated records of “time spent on the Web site,” “number of products/services purchased” and “whether a customer renewed their service contract.”
  2. The subjective measurement approach involves “soft” numbers regarding customer loyalty. Subjective loyalty metrics include customers’ self-reports of their feelings about the company and their behavior toward the company. Examples of subjective loyalty data include customers’ ratings on standardized survey questions like, “How likely are you to recommend <Company> to your friends/colleagues?”, “How likely are you to continue using <Company>?” and “Overall, how satisfied are you with <Company>?”
Figure 1. Companies with higher levels of customer loyalty experience accelerated business growth.

While I present two distinct customer loyalty measurement approaches, there are likely gradients of the subjective measurement approach. On one end of the subjective continuum, ratings are more perceptually based (what is typically used today); on the other end, ratings are more behaviorally based and more closely approximate the objective measurement approach. The objective/subjective dichotomy, however, provides a good framework for discussing measurement approaches.

Before continuing on the measurement of customer loyalty, it is useful to first put customer loyalty in context of how it impacts your business. Generally speaking, companies who have higher levels of customer loyalty also experience faster business growth (See Figure 1).  While I argue elsewhere that the customer loyalty metrics you use depend on your business needs and the types of behaviors in which you want your customers to engage, understanding how customer loyalty impacts business growth will help you determine the types of loyalty metrics you need.

Three Ways to Grow a Business: Retention, Advocacy, Purchasing

Figure 2. Business models illustrate that there are three ways to grow your business. Top model is from Reichheld, 1996; bottom model is from Gupta, et al., 2006.

Let us take a look at two business models that incorporate customer loyalty as a key element of business growth and company value (See Figure 2). The top graph is from Fred Reichheld and illustrates the components that drive company profit. Of the components that contribute to company profits, three of them reflect customer loyalty: retention (measured in years), advocacy (measured as referrals) and expanding purchasing (measured through increased purchases).

Similarly to Reichheld’s model, Gupta’s Customer Lifetime Value model focuses on customer loyalty as a mediator between what a company does (e.g., business programs) and the company value (see graph on the bottom of Figure 2). Again, customer loyalty plays a central role in understanding how to increase firm value. Improving 1) retention behaviors, 2) advocacy behaviors and 3) purchasing behaviors will increase company value.

Customer Loyalty Measurement Framework: Operationalizing Customer Loyalty

Our loyalty metrics need to reflect those attitudes and behaviors that will have a positive impact on company profit/value. Knowing that customer loyalty impacts company profits/value in three different ways, we can now begin to operationalize our customer loyalty measurement strategy. Whether we use an objective measurement approach or a subjective measurement approach, our customer loyalty metrics need to reflect retention loyalty, advocacy loyalty and purchasing loyalty.  Here are a few objective customer loyalty metrics businesses can use:

  • Churn rates (see the toy computation after this list)
  • Service contract renewal rates
  • Number/Percent of new customers
  • Usage metrics – frequency of use/visits, page views
  • Sales records – number of products purchased
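To make the churn-rate bullet concrete, here is a toy pandas computation; the table, column names and figures are hypothetical and not drawn from any real CRM system.

```python
import pandas as pd

# Hypothetical subscription snapshot: who was active at the start and end of a quarter.
customers = pd.DataFrame({
    "customer_id":  [1, 2, 3, 4, 5],
    "active_start": [True, True, True, True, True],
    "active_end":   [True, False, True, False, True],
})

churned = customers["active_start"] & ~customers["active_end"]
churn_rate = churned.sum() / customers["active_start"].sum()
print(f"Quarterly churn rate: {churn_rate:.0%}")   # 40% in this toy example
```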

Here are a few subjective customer loyalty metrics businesses can use:

  • likelihood to renew service
  • likelihood to leave
  • overall satisfaction
  • likelihood to recommend
  • likelihood to buy different/additional products
  • likelihood to expand usage
Figure 3. Customer Loyalty Measurement Framework: You can measure emotional (e.g., advocacy) and behavioral loyalty (e.g., retention and purchasing) using different measurement approaches (e.g., subjective and objective).

Figure 3 illustrates how these loyalty metrics fit into the larger customer loyalty measurement framework of loyalty types and measurement approaches. Each of the customer loyalty metrics above falls into one of the four quadrants of Figure 3.

It is important to point out that the subjective measurement approach is not synonymous with emotional loyalty. Survey questions can be used to measure both emotional loyalty (e.g., overall satisfaction) as well as behavioral loyalty (e.g., likelihood to leave, likelihood to buy different products). In my prior research on measuring customer loyalty, I found that you can reliably and validly measure the different types of loyalty using survey questions.

Looking at the lower left quadrant of Figure 3, you see that there are different ways to measure advocacy loyalty. While you might question why “likelihood to recommend” and “likelihood to buy same product” are measuring advocacy loyalty, research shows that they are more closely associated with emotional rather than behavioral loyalty. Specifically, these questions are highly related to “overall satisfaction.” Also, factor analysis of several loyalty questions shows that these three subjective metrics (sat, recommend, buy) load on the same factor. This pattern of results suggests that these questions really are simply measures of the customers’ emotional attachment to the company/brand.
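As a rough sketch of the kind of factor analysis described above, the snippet below fits a one-factor model to simulated ratings. The data are synthetic stand-ins for the satisfaction, recommend and buy-again items, not the author's study data.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
emotional = rng.normal(7, 2, size=200)              # latent "emotional loyalty" score

# Three survey items that all reflect the same latent factor, plus a little noise.
ratings = np.column_stack([
    emotional + rng.normal(0, 0.5, 200),            # overall satisfaction
    emotional + rng.normal(0, 0.5, 200),            # likelihood to recommend
    emotional + rng.normal(0, 0.5, 200),            # likelihood to buy again
])

fa = FactorAnalysis(n_components=1).fit(ratings)
print(fa.components_)   # similar loadings across items => one underlying factor
```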

I have included the metrics of “level of trust,” “willingness to consider” and “willingness to forgive” as emotional loyalty metrics due to their strong emotional nature. Based on what I know about how customers rate survey questions, I suspect these questions would essentially provide the same information as the other questions in the quadrant. That, however, is an empirical question that needs to be tested.

Subjective vs. Objective Measurement Approach

While companies have both objective and subjective measurement approaches at their disposal, surveys remain a popular approach to measuring customer loyalty. In fact, surveys remain the cornerstone of most customer experience management programs.

Companies use customer surveys to measure customer loyalty rather than solely relying on objective metrics of customer loyalty because: 1) Customer surveys allow companies to quickly and easily gauge levels of customer loyalty, 2) Customer surveys can provide rich information about the customer experience that can be used to more easily change organizational business process and 3) Customer surveys provide a forward look into customer loyalty.

RAPID Loyalty Approach

I have conducted research on the subjective approach to measuring customer loyalty over the past few years. Based on the results of this research, I developed the RAPID Loyalty approach, which supports the three ways businesses can grow: Retention, Advocacy and Purchasing loyalty. The RAPID loyalty approach includes three metrics, each assessing one of the three components of customer loyalty (a minimal scoring sketch follows the list):

  • Retention Loyalty Index (RLI): Degree to which customers will remain as customers or not leave to competitors; contains loyalty questions like: renew service contract, leave to competitor (reverse coded).
  • Advocacy Loyalty Index (ALI): Degree to which customers feel positively toward/will advocate your product/service/brand; contains loyalty questions like: overall satisfaction, recommend, buy again.
  • Purchasing Loyalty Index (PLI): Degree to which customers will increase their purchasing behavior; contains loyalty questions like: buy additional products, expand use of product throughout company.
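As noted above, here is a minimal scoring sketch: each index is computed as the mean of its (reverse-coded where needed) survey items. The item names, 0-10 scale and responses are hypothetical; the actual RAPID instruments may differ.

```python
import pandas as pd

# Hypothetical 0-10 survey responses; column names are illustrative only.
survey = pd.DataFrame({
    "renew_contract":   [9, 7, 10, 4],
    "leave_competitor": [1, 4, 0, 8],    # reverse-coded below
    "overall_sat":      [9, 6, 10, 3],
    "recommend":        [10, 6, 9, 2],
    "buy_again":        [9, 7, 10, 4],
    "buy_additional":   [8, 5, 9, 1],
    "expand_use":       [7, 5, 9, 2],
})

survey["leave_competitor"] = 10 - survey["leave_competitor"]   # reverse code

rli = survey[["renew_contract", "leave_competitor"]].mean(axis=1)      # Retention
ali = survey[["overall_sat", "recommend", "buy_again"]].mean(axis=1)   # Advocacy
pli = survey[["buy_additional", "expand_use"]].mean(axis=1)            # Purchasing
print(pd.DataFrame({"RLI": rli, "ALI": ali, "PLI": pli}))
```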

Each of the RAPID loyalty indices has excellent measurement properties; that is, each index is a reliable, valid and useful indicator of customer loyalty and is predictive of future business growth. Specifically, in a nationwide study asking over 1000 customers (See Figure 4) about their current network operator, each loyalty index was predictive of different business growth metrics across several US network operators (Alltel, AT&T, Sprint/Nextel, T-Mobile, and Verizon):

Figure 4. The RAPID Loyalty indices (ALI, PLI and RLI) each predict different types of business growth.
  • RLI was the best predictor of future churn rate
  • ALI was a good predictor of new customer growth
  • PLI was the best predictor of Average Revenue per User (ARPU) growth

The bottom line is that there are three general ways to grow your business: keeping customers coming back (retention), having them recommend you to their friends/family (advocacy) and expanding their relationship with you by buying different products/services (purchasing). To increase company profits/firm value, it is imperative that you measure and optimize each type of customer loyalty. Falling short on one type of customer loyalty will have a deleterious effect on company profit/firm value.

State of Customer Loyalty Measurement

In an informal online poll taken during a talk, Asking the Right CX Questions (part of CustomerThink’s Customer Experience Summit 2011), I asked participants about their CEM program loyalty metrics. While a little over 75% of the respondents said their company uses advocacy loyalty measures, only a third of the respondents indicated that their company uses purchasing loyalty measures (33%) and retention loyalty measures (30%).

Benefits of Measuring Different Types of Customer Loyalty

It appears that most companies’ customer loyalty measurement approach is insufficient. Companies who measure and understand different types of customer loyalty and how they are impacted by the customer experience have several advantages over companies who narrowly measure customer loyalty:

  • Target solutions to optimize different types of customer loyalty. For example, including retention loyalty questions (e.g., “likelihood to quit”) and purchasing loyalty questions (e.g., “likelihood to buy different”) can help companies understand why customers are leaving and identify ways to increase customers’ purchasing behavior, respectively.
  • Identify key performance indicators (KPIs) for each type of customer loyalty. Identification of different KPIs (key drivers of customer loyalty) helps companies ensure they are monitoring all important customer experience areas. Identifying and monitoring all KPIs helps ensure the entire company is focused on matters that are important to the customer and his/her loyalty.
  • Obtain more accurate estimates of the Return on Investment (ROI) of improvement initiatives. Because ROI is the ratio of additional revenue (from increased loyalty) to cost (of initiative), the ROI of a specific improvement opportunity will depend on how the company measures customer loyalty. If only advocacy loyalty is measured, the estimate of ROI is based on revenue from new customer growth. When companies measure advocacy, purchasing and retention loyalty, the estimate of ROI is based on revenue from new and existing customer growth.

The primary goal of CEM is to improve customer loyalty. Companies that define and measure customer loyalty narrowly are missing out on opportunities to fully understand the impact that their CEM program has on the company’s bottom line. Companies need to ensure they are comprehensively measuring all facets of customer loyalty. A poor customer loyalty measurement approach can lead to sub-optimal business decisions, missed opportunities for business growth and an incomplete picture of the health of the customer relationship.

Summary

Customer loyalty is a very fuzzy concept. With various definitions of customer loyalty floating around in the literature, it is difficult to know what one is talking about when one uses the term, “customer loyalty.” I tried to clarify the meaning of customer loyalty by consolidating different customer loyalty definitions into two general customer loyalty types: emotional loyalty and behavioral loyalty.

Additionally, I discussed two measurement approaches that companies can utilize to assess customer loyalty: objective measurement approach and subjective measurement approach.

Finally, I offered a customer loyalty measurement framework to help companies think about customer loyalty more broadly, identify customer loyalty metrics, and better measure and manage different types of business growth: acquiring new customers (Advocacy), retaining existing customers (Retention) and expanding the relationship with existing customers (Purchasing).

One of the biggest limitations in the field of customer experience management is the lack of a coherent set of clearly defined variables with instruments to effectively measure those variables. When we talk about customer loyalty, we talk past each other rather than to each other. To advance our field and our understanding of what procedures and methods work, we need precision in ideas and words. One way to start is to clearly define and measure constructs like customer loyalty. While customer loyalty is one such vaguely defined and measured variable, our field is full of others (e.g., customer engagement, employee engagement). I hope I was able to provide some clarification on the notion of customer loyalty, both in its meaning and its measurement.

Originally Posted at: What is Customer Loyalty? Part 2: A Customer Loyalty Measurement Framework

Jan 24, 19: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[ AnalyticsWeek BYTES]

>> 5 Natural Language Processing Techniques for Extracting Information by analyticsweek

>> Big Data: Are you ready for blast-off? by analyticsweekpick

>> The Evolution Of The Geek [Infographics] by v1shal

[ NEWS BYTES]

>> The Future of US Data Security in the Wake of GDPR – Innovation & Tech Today (satire) (press release) (blog) Under Data Security

>> HPE’s Composable Infrastructure No Longer Limited to Synergy Hardware – Data Center Knowledge Under Data Center

>> 93% of Healthcare Execs Seeking Improved Data Analytics, CDI – Health IT Analytics Under Health Analytics

[ FEATURED COURSE]

Machine Learning

6.867 is an introductory course on machine learning which gives an overview of many concepts, techniques, and algorithms in machine learning, beginning with topics such as classification and linear regression and ending … more

[ FEATURED READ]

Python for Data Analysis: Data Wrangling with Pandas, NumPy, and IPython

Python for Data Analysis is concerned with the nuts and bolts of manipulating, processing, cleaning, and crunching data in Python. It is also a practical, modern introduction to scientific computing in Python, tailored f… more

[ TIPS & TRICKS OF THE WEEK]

Winter is coming, warm your Analytics Club
Yes and yes! As we head into winter, what better time to talk about our increasing dependence on data analytics to help with our decision making. Data- and analytics-driven decision making is rapidly sneaking its way into our core corporate DNA, and we are not churning out practice grounds to test those models fast enough. Such snug-looking models have hidden nails which could cause uncharted pain if they go unchecked. This is the right time to start thinking about putting an Analytics Club [Data Analytics CoE] in your workplace to help lab out the best practices and provide a test environment for those models.

[ DATA SCIENCE Q&A]

Q:How do you handle missing data? What imputation techniques do you recommend?
A: * If data is missing at random: deletion has no bias effect, but decreases the power of the analysis by decreasing the effective sample size
* Recommended: KNN imputation, Gaussian mixture imputation (see the sketch below)
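Here is a minimal scikit-learn sketch of KNN imputation on a toy matrix, with a mean-imputation baseline for comparison. The numbers are made up; note that scikit-learn has no built-in Gaussian mixture imputer, so that technique is not shown.

```python
import numpy as np
from sklearn.impute import KNNImputer, SimpleImputer

X = np.array([[1.0, 2.0, np.nan],
              [3.0, 4.0, 3.0],
              [np.nan, 6.0, 5.0],
              [8.0, 8.0, 7.0]])

knn_filled  = KNNImputer(n_neighbors=2).fit_transform(X)        # borrow values from similar rows
mean_filled = SimpleImputer(strategy="mean").fit_transform(X)   # simple baseline

print(knn_filled)
print(mean_filled)
```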

[ VIDEO OF THE WEEK]

Big Data Introduction to D3

[ QUOTE OF THE WEEK]

He uses statistics as a drunken man uses lamp posts—for support rather than for illumination. – Andrew Lang

[ PODCAST OF THE WEEK]

@EdwardBoudrot / @Optum on #DesignThinking & #DataDriven Products #FutureOfData #Podcast

[ FACT OF THE WEEK]

100 terabytes of data are uploaded to Facebook daily.

Sourced from: Analytics.CLUB #WEB Newsletter

The convoluted world of data scientist

Data scientists are not a dime a dozen, nor are they in abundance. Buzz around big data has produced a job category that is not only confusing but has been costing companies a lot as they comb through the talent pool to dig up a so-called data scientist. So, what exactly is the problem, and why are we suddenly seeing a lot of data scientists emerging from nowhere with very different skill sets? To understand this, we need to understand the big data phenomenon.

With the emergence of big data user companies like Google, Facebook and Yahoo, and their amazing contributions to open source, new platforms have been developed to process huge amounts of data using commodity hardware in fast and yet cost-efficient ways. With that phenomenon, every company wants to get savvier when it comes to managing data to gain insights and ultimately build a competitive edge over its competitors. But companies are used to understanding small pieces of data through their business analysts. Add more data and more tools, and who fits in? So, they went on the lookout for a special breed of professional with the capability to deal with big data and its hidden insights.

So, where is the problem here? The problem lies in the fact that only one job title emerged from this phenomenon: data scientist. Professionals already practicing some data science via business analysis, data warehousing or data design jumped on the bandwagon, grabbing the title of data scientist. What is interesting here is that the data scientist job, as explained above, does not fit a single job description and should be handled accordingly. It was never a magical job title holding all the answers for any data-curious organization wanting to understand, develop and manage a data project.

Before we go into what companies should do, let’s reiterate what a data scientist is. As the name suggests, it has something to do with data and with being a scientist. That means the job description should include data engineering, data automation and scientific computing, with a hint of business capability. If we extrapolate, we are looking for a professional with a computer science degree, a doctorate in statistical computing and an MBA. What would be the luck in finding that candidate? And, by the way, they should have some industry domain expertise as well. What is the likelihood that such a talent exists? Rare. But even if such people were in abundance, companies should tackle this problem at a more granular and sustainable scale. One more thing to note: no two data scientist job requirements are the same. Your data scientist requirement could be extremely different from what anyone else is looking for in a data scientist. So why should one title cater to such a diverse category?

So, what should companies do? First, it is important to understand that companies are building data science capabilities, not hiring a herd of data scientists. Companies and hiring managers should understand that they are not looking for a particular individual but for a team as a solution. It is important for businesses to clearly articulate the magic skillsets that their so-called data scientist should carry. Following this drill, companies should split the skillset into categories: data analysts, business analysts, data warehousing professionals, software developers and data engineers, to name a few. Finding a common island where business analysts, statistical computing modelers and data engineers work in harmony to build a system that handles big data is a great start. Think of it as putting together a central data office. Huh, another buzzword! Don’t worry; I will go into more detail in follow-up blogs. Think of it as a department where business, engineering and statistics work together on a common objective. Data science is nothing but the art of finding value in lots of data, and big data is about building the capability to parse and analyze lots of data. So, businesses should work through their laundry list of skillsets: first identify internal resources that could cover that list, and then form a hard matrix structure to prove the idea of a set of people working together as a data scientist. By the way, I am not saying you need one individual from each category, but together the team should have all the skills mentioned above.

One important takeaway for companies: the moment they come across a so-called data scientist, they should work out which side of data science that talent represents. Placing the talent in its respective silo will provide a clearer view of what the person brings and of the gaps that will remain if the other roles are not filled accordingly. Living in this convoluted world of the data scientist is hard and tricky, but with some chops in understanding data science as a talent category, companies can play the big data talent game to their advantage, lure some cutting-edge people and grow sustainably.

Why Is Big Data an Advantage for Your Business

Big Data is a large amount of raw data. Organizations collect, store and analyze data in numerous ways in order to increase their productivity and adeptness and to make better decisions. It can be classified into two major forms: structured and unstructured. Structured data is easy to manage; analysis and organization are quite simple. Unstructured data, on the other hand, is tougher to handle; analysis takes time because it comes in a variety of formats and cannot be easily interpreted using traditional data models and processes.

It’s not just the big brands that can use Big Data to make data-driven decisions for their organization; even small businesses can reap the benefits.

“Big data is defined as very large datasets that can be analyzed computationally to reveal patterns, trends, and associations – especially in connection with human behavior and interactions. A big data revolution has arrived with the growth of the Internet, wireless networks, smartphones, social media and other technology.”

Big Data Business Benefits

As data volumes continue to grow, so does their potential for business. The reason: as Big Data management solutions evolve, they allow businesses to convert raw data into relevant trends, predictions and projections with unprecedented accuracy. Businesses that use Big Data analytics solutions benefit greatly, gaining insights that drive smart decision-making.

Here are a few benefits of using Big Data:

  1. Boosts business efficiency – Using Big Data analytics will give a boost to your business efficiency by enhancing productivity and helping you make the right decisions. By using tools such as Google Earth, Google Maps, and social media, you can complete a number of tasks right at your desk without spending on travel. Moreover, these tools are relatively easy to use and do not require much technical knowledge.
  2. Cost savings – A recent article published by Tech Cocktail illustrated how Twiddy & Company Realtors cut their operating costs by 15 percent: “The company compared maintenance charges for contractors against the average of its other vendors. Through this process, the company identified and eliminated invoice-processing errors and automated service schedules.”
  3. Improves pricing – With the help of business intelligence tools that have been tried and tested, a business can evaluate its finances and get a clear picture of where it stands in the market. This will help a business strategize new policies and techniques to expand its operations and improve pricing.
  4. Reduces time – High-speed tools such as Hadoop and in-memory analytics help identify new sources of data. Being able to analyze data quickly saves time and lets you make quick decisions that are right for your business.
  5. Understand market conditions – Big Data helps you get a clearer picture of market conditions. For instance, by analyzing customers’ purchasing behavior you can see which products sell the most and which are best liked. This gives you an idea of which products meet customers’ needs and which need improvement, helping you improve your products and stay ahead of competitors.
  6. Hire the right employees – Using Big Data, recruitment companies can scan candidates’ resumes and LinkedIn profiles to help businesses find the right employees. The hiring process is no longer based solely on what a candidate looks like on paper.

Here are a few facts that will help you understand why you need Big Data:

  1. 94% of marketing professionals said personalization of the customer experience is very crucial.
  2. $30 million in annual savings by leveraging social media data in claims and fraud.
  3. By 2020, 66% of banks will have blockchain in commercial production and at scale.
  4. Organizations will rely on smart data more than big data.
  5. Machine-to-Human (M2H) enterprise interactions will be humanized by up to 85% by 2020.
  6. Businesses are investing 300% more in Artificial Intelligence (AI) in 2017 than they did in 2016.
  7. 25% growth rate in the emergence of speech as a relevant source of unstructured data.
  8. Right to be Forgotten (R2BF) will be in focus globally regardless of data source.
  9. The 43% of customer service teams that don’t have real-time analytics will continue to shrink.
  10. By 2020, the Augmented Reality (AR) market will reach $90 billion compared to Virtual Reality’s $30 billion.

Final Thoughts

The use of Big Data in business operations is vital: it not only helps a business grow but also allows companies to stand out from their competitors. Big Data provides an enormous amount of information about products, services, consumers, suppliers and more, which helps a business understand its processes and optimize its operations to accomplish business objectives.

Source: Why Is Big Data an Advantage for Your Business

Jan 17, 19: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[ AnalyticsWeek BYTES]

>> 10 Techniques to Boost Your Data Modeling by analyticsweek

>> BDAS Analytics Suite Blends Big Data With HR by analyticsweekpick

>> Video: R and Python in in Azure HDInsight by analyticsweekpick

[ NEWS BYTES]

>> Choosing the Right Data Catalog for Your Business – insideBIGDATA Under Business Analytics

>> AMA Updates Population Health Tool to Improve Patient Care Access – Health IT Analytics Under Health Analytics

>> Global Streaming Analytics Market 2018 Revenue, Potential Growth, Analysis, Price, Market Share, Growth Rate … – The West Chronicle (press release) (blog) Under Streaming Analytics

[ FEATURED COURSE]

Applied Data Science: An Introduction

As the world’s data grow exponentially, organizations across all sectors, including government and not-for-profit, need to understand, manage and use big, complex data sets—known as big data…. more

[ FEATURED READ]

Python for Data Analysis: Data Wrangling with Pandas, NumPy, and IPython

Python for Data Analysis is concerned with the nuts and bolts of manipulating, processing, cleaning, and crunching data in Python. It is also a practical, modern introduction to scientific computing in Python, tailored f… more

[ TIPS & TRICKS OF THE WEEK]

Analytics Strategy that is Startup Compliant
With the right tools, capturing data is easy, but not being able to handle that data can lead to chaos. One of the most reliable startup strategies for adopting data analytics is TUM, or The Ultimate Metric: the metric that matters the most to your startup. Some advantages of TUM: it answers the most important business question, it cleans up your goals, it inspires innovation and it helps you understand the entire quantified business.

[ DATA SCIENCE Q&A]

Q:What does NLP stand for?
A: * Interaction between computers and human (natural) languages
* Involves natural language understanding

Major tasks:
– Machine translation
– Question answering: “what’s the capital of Canada?”
– Sentiment analysis: extract subjective information from a set of documents, identify trends or public opinions in social media (see the sketch below)

– Information retrieval
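As referenced in the sentiment-analysis bullet, here is a tiny scikit-learn sketch of a bag-of-words sentiment classifier. The documents and labels are made up and far too few for a real model; the sketch only shows the shape of the task.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = ["great product, love it",
        "terrible support, very slow",
        "works as expected, happy with it",
        "awful experience, will not buy again"]
labels = [1, 0, 1, 0]   # 1 = positive, 0 = negative (toy labels)

# TF-IDF features feeding a simple linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(docs, labels)
print(model.predict(["slow and awful service"]))   # expected: [0]
```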

[ VIDEO OF THE WEEK]

Advanced #Analytics in #Hadoop

[ QUOTE OF THE WEEK]

Torture the data, and it will confess to anything. – Ronald Coase

[ PODCAST OF THE WEEK]

#BigData #BigOpportunity in Big #HR by @MarcRind #JobsOfFuture #Podcast

[ FACT OF THE WEEK]

As recently as 2009 there were only a handful of big data projects and total industry revenues were under $100 million. By the end of 2012 more than 90 percent of the Fortune 500 will likely have at least some big data initiatives under way.

Sourced from: Analytics.CLUB #WEB Newsletter

Best Practices For Building Talent In Analytics

Companies across all industries depend more and more on analytics and insights to run their businesses profitably. But, attracting, managing and retaining talented personnel to execute on those strategies remains a challenge. This is not the case for consumer products heavyweight The Procter & Gamble Company (P&G), which has been at the top of its analytics game for 50 years now.

During the 2014 Retail/Consumer Goods Analytics Summit, Glenn Wegryn, retired associate director of analytics for P&G, shared best practices for building the talent capabilities required to ensure success. A leadership council is in charge of sharing analytics best practices across P&G — breaking down silos to make sure the very best talent is being leveraged to solve the company’s most pressing business issues.

So, what are the characteristics of a great data analyst and where can you find them?

“I always look for people with solid quantitative backgrounds because that is the hardest thing to learn on the job,” said Wegryn.

Combine that with mature communication skills and a talent for business acumen and you’ve got the perfect formula for a great data analyst.

When it comes to sourcing analytics, Wegryn says companies have an important strategic decision to make: Do you build it internally, leveraging resources like consultants and universities? Do you buy it from a growing community of technology solution providers? Or, do you adopt a hybrid model?

“Given the explosion of business analytics programs across the country, your organization should find ample opportunities to tap into those resources,” advised Wegryn.

To retain and nurture your organization’s business analysts, Wegryn recommended creating a career path that grows with them, and stressed the importance of developing talented personnel internally until they reach a trusted CEO-advisory role.

Wegryn also shared key questions an organization should ask to unleash the value of analytics, and suggested that analytics should always start and end with a decision.

“You make a decision in business that leads to action that gleans insights that leads to another decision,” he said. “While the business moves one way, the business analyst works backward in a focused, disciplined and controlled manner.”

Perhaps most importantly, the key to building the talent capability to ensure analytics success came from P&G’s retired chairman, president and CEO Bob McDonald: “… having motivation from the top helps.”

Wegryn agreed: “It really helps when the person at the top of the chain is driven on data.”

The inaugural Retail & Consumer Goods Analytics Summit event was held September 11-12, 2014 at the W Hotel in San Diego, California. The conference featured keynotes from retail and consumer goods leaders, peer-to-peer exchanges and relationship building.

Originally Posted at: Best Practices For Building Talent In Analytics by analyticsweekpick

Avoiding a Data Science Hype Bubble

In this post, Josh Poduska, Chief Data Scientist at Domino Data Lab, advocates for a common taxonomy of terms within the data science industry. The proposed definitions enable data science professionals to cut through the hype and increase the speed of data science innovation. 

Introduction

The noise around AI, data science, machine learning, and deep learning is reaching a fever pitch. As this noise has grown, our industry has experienced a divergence in what people mean when they say “AI”, “machine learning”, or “data science”. It can be argued that our industry lacks a common taxonomy. If there is a taxonomy, then we, as data science professionals, have not done a very good job of adhering to it. This has consequences. Two consequences include the creation of a hype-bubble that leads to unrealistic expectations and an increasing inability to communicate, especially with non-data science colleagues. In this post, I’ll cover concise definitions and then argue how it is vital to our industry that we be consistent with how we define terms like “AI”.

Concise Definitions

  • Data Science: A discipline that uses code and data to build models that are put into production to generate predictions and explanations.
  • Machine Learning: A class of algorithms or techniques for automatically capturing complex data patterns in the form of a model.
  • Deep Learning: A class of machine learning algorithms that uses neural networks with more than one hidden layer.
  • AI: A category of systems that operate in a way that is comparable to humans in the degree of autonomy and scope.

Hype

Our terms have a lot of star power. They inspire people to dream and imagine a better world which leads to their overuse. More buzz around our industry raises the tide that lifts all boats, right? Sure, we all hope the tide will continue to rise. But, we should work for a sustainable rise and avoid a hype bubble that will create widespread disillusionment if it bursts.

I recently attended Domino’s rev conference, a summit for data science leaders and practitioners. I heard multiple leaders seeking advice on how to help executives, mid-level managers, and even new data scientists have proper expectations of data science projects without sacrificing enthusiasm for data science. Unrealistic expectations slow down progress by deflating the enthusiasm when projects yield less than utopian results. They also make it harder than it should be to agree on project success metrics and ROI goals.

The frequent overuse of “AI” when referring to any solution that makes any kind of prediction has been a major cause of this hype. Because of frequent overuse, people instinctively associate data science projects with near perfect human-like autonomous solutions. Or, at a minimum, people perceive that data science can easily solve their specific predictive need, without any regard to whether their organizational data will support such a model.

Communication

Incorrect use of terms also gums up conversations. This can be especially damaging in the early planning phases of a data science project when a cross-functional team assembles to articulate goals and design the end solution. I know a data science manager that requires his team of data scientists to be literally locked in a room for an hour with business leaders before he will approve any new data science project. Okay, the door is not literally locked, but it is shut, and he does require them to discuss the project for a full hour. They’ve seen a reduction in project rework as they’ve focused on early alignment with business stakeholders. The challenge of explaining data science concepts is hard enough as it is. We only make this harder when we can’t define our own terms.

I’ve been practicing data science for a long time now. I’ve worked with hundreds of analytical leaders and practitioners from all over the world. Since AI and deep learning came on the scene, I’ve increasingly had to pause conversations and ask questions to discover what people really mean when they use certain terms. For example, how would you interpret these statements which are based on conversations I’ve had?

  • “Our goal is to make our solution AI-driven within 5 years.”
  • “We need to get better at machine learning before we invest in deep learning.”
  • “We use AI to predict fraud so our customers can spend with confidence.”
  • “Our study found that organizations investing in AI realize a 10% revenue boost.”

Confusing, right?

One has to ask a series of questions to be able to understand what is really going on.

The most common term-confusion I hear is when someone talks about AI solutions, or doing AI, when they really should be talking about building a deep learning or machine learning model. It seems that far too often the interchange of terms is on purpose, with the speaker hoping to get a hype-boost by saying “AI”. Let’s dive into each of the definitions and see if we can come to an agreement on a taxonomy.

Data Science

First of all, I view data science as a scientific discipline, like any other scientific discipline. Take biology, for example. Biology encompasses a set of ideas, theories, methods, and tools. Experimentation is common. The biological research community is continually adding to the discipline’s knowledge base. Data science is no different. Practitioners do data science. Researchers advance the field with new theory, concepts, and tools.

The practice of data science involves marrying code (usually some statistical programming language) with data to build models. This includes the important and dominant initial steps of data acquisition, cleansing, and preparation. Data science models usually make predictions (e.g., predict loan risk, predict disease diagnosis, predict how to respond to a chat, predict what objects are in an image). Data science models can also explain or describe the world for us (e.g., which combination of factors are most influential in making a disease diagnosis, which customers are most similar to each other and how). Finally, these models are put into production to make predictions and explanations when applied to new data. Data science is a discipline that uses code and data to build models that are put into production to generate predictions and explanations.

It can be difficult to craft a definition for data science while, at the same time, distinguishing it from statistical analysis. I came to the data science profession via educational training in math and statistics as well as professional experience as a statistician. Like many of you, I was doing data science before it was a thing.

Statistical analysis is based on samples, controlled experiments, probabilities, and distributions. It usually answers questions about likelihood of events or the validity of statements. It uses different algorithms like t-test, chi-square, ANOVA, DOE, response surface designs, etc. These algorithms sometimes build models too. For example, response surface designs are techniques to estimate the polynomial model of a physical system based on observed explanatory factors and how they relate to the response factor.

One key point in my definition is that data science models are applied to new data to make future predictions and descriptions, or “put into production”. While it is true that response surface models can be used on new data to predict a response, it is usually a hypothetical prediction about what might happen if the inputs were changed. The engineers then change the inputs and observe the responses that are generated by the physical system in its new state. The response surface model is not put into production. It does not take new input settings by the thousands, over time, in batches or streams, and predict responses.

My data science definition is by no means fool-proof, but I believe putting predictive and descriptive models into production starts to capture the essence of data science.

Machine Learning

Machine learning as a term goes back to the 1950s. Today, it is viewed by data scientists as a set of techniques that are used within data science. It is a toolset or a class of techniques for building the models mentioned above. Instead of a human explicitly articulating the logic for a model, machine learning enables computers to generate (or learn) models on their own. This is done by processing an initial set of data, discovering complex hidden patterns in that data, and capturing those patterns in a model so they can be applied later to new data in order to make predictions or explanations. The magic behind this process of automatically discovering patterns lies in the algorithms. Algorithms are the workhorses of machine learning. Common machine learning algorithms include the various neural network approaches, clustering techniques, gradient boosting machines, random forests, and many more. If data science is a discipline like biology, then machine learning is like microscopy or genetic engineering. It is a class of tools and techniques with which the discipline is practiced.
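To make the point about algorithms learning a model from data concrete, here is a minimal scikit-learn sketch on synthetic data; the dataset and the choice of a random forest are illustrative, not something prescribed by the post.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Nobody writes the decision logic by hand; the algorithm learns it from the data.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("Held-out accuracy:", model.score(X_test, y_test))
```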

Deep Learning

Deep learning is the easiest of these terms to define. Deep learning is a class of machine learning algorithms that uses neural networks with more than one hidden layer. Neural networks themselves date back to the 1950s. Deep learning algorithms first became popular in the 1980s, went through a lull in the 1990s and 2000s, and then saw a revival in our decade thanks to relatively small tweaks in the way deep networks were constructed that proved to have astonishing effects. Deep learning can be applied to a variety of use cases including image recognition, chat assistants, and recommender systems. For example, Google Speech, Google Photos, and Google Search are some of the original solutions built using deep learning.
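As a minimal illustration of "more than one hidden layer", here is a hedged Keras sketch assuming a hypothetical 20-feature binary classification task; it only shows the structure of a small deep network, not a trained solution.

```python
from tensorflow import keras

# Two hidden layers make this "deep" by the definition above.
model = keras.Sequential([
    keras.Input(shape=(20,)),                      # hypothetical 20 input features
    keras.layers.Dense(64, activation="relu"),     # hidden layer 1
    keras.layers.Dense(32, activation="relu"),     # hidden layer 2
    keras.layers.Dense(1, activation="sigmoid"),   # output layer
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```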

AI

AI has been around for a long time. Long before the recent hype storm that has co-opted it with buzzwords. How do we, as data scientists, define it? When and how should we use it? What is AI to us? Honestly, I’m not sure anyone really knows. This might be our “emperor has no clothes” moment. We have the ambiguity and the resulting hype that comes from the promise of something new and unknown. The CEO of a well known data science company was recently talking with our team at Domino when he mentioned “AI”. He immediately caught himself and said, “I know that doesn’t really mean anything. I just had to start using it because everyone is talking about it. I resisted for a long time but finally gave in.”

That said, I’ll take a stab at it: AI is a category of systems that people hope to create which have the defining characteristic that they will be comparable to humans in the degree of autonomy and scope of operation.

To extend our analogy, if data science is like biology and machine learning is like genetic engineering, then AI is like disease resistance. It’s the end result, a set of solutions or systems that we are striving to create through the application of machine learning (often deep learning) and other techniques.

Here’s the bottom line. I believe that we need to draw a distinction between techniques that are part of AI solutions, AI-like solutions, and true AI solutions. This includes AI building blocks, solutions with AI-ish qualities, and solutions that approach human autonomy and scope. These are three separate things. People just say “AI” for all three far too often.

For example,

  • Deep learning is not AI. It is a technique that can be used as part of an AI solution.
  • Most data science projects are not AI solutions. A customer churn model is not an AI solution, no matter if it used deep learning or logistic regression.
  • A self driving car is an AI solution. It is a solution that operates with complexity and autonomy that approaches what humans are capable of doing.

Remember those cryptic statements from above? In each case I asked questions to figure out exactly what was going on under the hood. Here is what I found.

  • An executive said: “Our goal is to make our solution AI-driven within 5 years.”
    The executive meant: “We want to have a couple machine learning models in production within 5 years.”
  • A manager said: “We need to get better at machine learning before we invest in deep learning.”
    The manager meant: “We need to train our analysts in basic data science principles before we are ready to try deep learning approaches.”
  • A marketer said: “We use AI to predict fraud so our customers can spend with confidence.”
    The marketer meant: “Our fraud score is based on a logistic regression model that has been working well for years.”
  • An industry analyst said: “Our study found that organizations investing in AI realize a 10% revenue boost.”
    The industry analyst meant: “Organizations that have any kind of predictive model in production realize a 10% revenue boost.”

The Ask

Whether you 100% agree with my definitions or not, I think we can all agree that there is too much hype in our industry today, especially around AI. Each of us has seen how this hype limits real progress. I argue that a lot of the hype is from misuse of the terms of data science. My ask is that, as data science professionals, we try harder to be conscious of how we use these key terms, and that we politely help others who work with us learn to use these terms in the right way. I believe that the quicker we can iterate to an agreed-upon taxonomy and insist on adherence to it, the quicker we can cut through hype and increase our speed of innovation as we build the solutions of today and tomorrow.

The post Avoiding a Data Science Hype Bubble appeared first on Data Science Blog by Domino.

Source: Avoiding a Data Science Hype Bubble by analyticsweek

Jan 10, 19: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[ AnalyticsWeek BYTES]

>> 6 Best Practices for Maximizing Big Data Value by analyticsweekpick

>> To Trust A Bot or Not? Ethical Issues in AI by tony

>> Inside CXM: New Global Thought Leader Hub for Customer Experience Professionals by bobehayes

[ NEWS BYTES]

>> Sales Performance Management (SPM) Market 2018-2025: CAGR, Top Manufacturers, Drivers, Trends, Challenges … – The Dosdigitos (press release) (blog) Under Sales Analytics

>> Teradata Vantage brings multiple data sources on one platform, hope to become core of large enterprises – The Indian Express Under Talent Analytics

>> Duke Engineering Establishes Big Data, Precision Medicine Center – Health IT Analytics Under Big Data Analytics

[ FEATURED COURSE]

Pattern Discovery in Data Mining

Learn the general concepts of data mining along with basic methodologies and applications. Then dive into one subfield in data mining: pattern discovery. Learn in-depth concepts, methods, and applications of pattern disc… more

[ FEATURED READ]

On Intelligence

Jeff Hawkins, the man who created the PalmPilot, Treo smart phone, and other handheld devices, has reshaped our relationship to computers. Now he stands ready to revolutionize both neuroscience and computing in one strok… more

[ TIPS & TRICKS OF THE WEEK]

Analytics Strategy that is Startup Compliant
With the right tools, capturing data is easy, but not being able to handle that data can lead to chaos. One of the most reliable startup strategies for adopting data analytics is TUM, or The Ultimate Metric: the metric that matters the most to your startup. Some advantages of TUM: it answers the most important business question, it cleans up your goals, it inspires innovation and it helps you understand the entire quantified business.

[ DATA SCIENCE Q&A]

Q:What is an outlier? Explain how you might screen for outliers and what would you do if you found them in your dataset. Also, explain what an inlier is and how you might screen for them and what would you do if you found them in your dataset
A: Outliers:
– An observation point that is distant from other observations
– Can occur by chance in any distribution
– Often, they indicate measurement error or a heavy-tailed distribution
– Measurement error: discard them or use robust statistics
– Heavy-tailed distribution: high skewness, can’t use tools assuming a normal distribution
– Three-sigma rules (normally distributed data): 1 in 22 observations will differ by twice the standard deviation or more from the mean
– Three-sigma rules: 1 in 370 observations will differ by three times the standard deviation or more from the mean

Three-sigma rules example: in a sample of 1000 observations, the presence of up to 5 observations deviating from the mean by more than three times the standard deviation is within the range of what can be expected, being less than twice the expected number and hence within 1 standard deviation of the expected number (Poisson distribution).

If the nature of the distribution is known a priori, it is possible to see if the number of outliers deviate significantly from what can be expected. For a given cutoff (samples fall beyond the cutoff with probability p), the number of outliers can be approximated with a Poisson distribution with lambda=pn. Example: if one takes a normal distribution with a cutoff 3 standard deviations from the mean, p=0.3% and thus we can approximate the number of samples whose deviation exceed 3 sigmas by a Poisson with lambda=3

Identifying outliers:
– No rigid mathematical method
– Subjective exercise: be careful
– Boxplots
– QQ plots (sample quantiles Vs theoretical quantiles)

Handling outliers:
– Depends on the cause
– Retention: when the underlying model is confidently known
– Regression problems: only exclude points which exhibit a large degree of influence on the estimated coefficients (Cook’s distance)

Inlier:
– Observation lying within the general distribution of other observed values
– Doesn’t perturb the results but is non-conforming and unusual
– Simple example: observation recorded in the wrong unit (°F instead of °C)

Identifying inliers:
– Mahalanobis distance (see the screening sketch below)
– Used to calculate the distance between two random vectors
– Difference with Euclidean distance: accounts for correlations
– Discard them
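A rough Python sketch of the two screens mentioned above: a univariate three-sigma check and a multivariate Mahalanobis-distance check on synthetic data. The data, thresholds and the two added points are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.multivariate_normal(mean=[0, 0], cov=[[1, 0.8], [0.8, 1]], size=500)
X = np.vstack([X, [[4.0, 4.0], [2.5, -2.5]]])   # a gross outlier and an "inlier-like" point

# Univariate screen: three-sigma rule applied to each column separately.
z = np.abs((X - X.mean(axis=0)) / X.std(axis=0))
univariate_flags = (z > 3).any(axis=1)

# Multivariate screen: Mahalanobis distance accounts for the correlation between columns,
# so the point (2.5, -2.5) can be flagged even though its per-column z-scores look ordinary.
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
centered = X - X.mean(axis=0)
mahal = np.sqrt(np.einsum("ij,jk,ik->i", centered, cov_inv, centered))
multivariate_flags = mahal > 3

print("Univariate flags:  ", univariate_flags.sum())
print("Multivariate flags:", multivariate_flags.sum())
```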

[ VIDEO OF THE WEEK]

#FutureOfData Podcast: Peter Morgan, CEO, Deep Learning Partnership

[ QUOTE OF THE WEEK]

Information is the oil of the 21st century, and analytics is the combustion engine. – Peter Sondergaard

[ PODCAST OF THE WEEK]

#FutureOfData with Rob(@telerob) / @ConnellyAgency on running innovation in agency

[ FACT OF THE WEEK]

And one of my favourite facts: at the moment less than 0.5% of all data is ever analysed and used; just imagine the potential here.

Sourced from: Analytics.CLUB #WEB Newsletter