Nov 14, 19: #AnalyticsClub #Newsletter (Events, Tips, News & more..)


[ COVER OF THE WEEK]

Insights (Source)

[ AnalyticsWeek BYTES]

>> Big Data Introduction to D3  by v1shal

>> Aug 17, 17: #AnalyticsClub #Newsletter (Events, Tips, News & more..) by admin

>> Exploring the Structure of High-Dimensional Data with HyperTools in Kaggle Kernels by analyticsweek

Wanna write? Click Here

[ FEATURED COURSE]

Lean Analytics Workshop – Alistair Croll and Ben Yoskovitz


Use data to build a better startup faster in partnership with Geckoboard… more

[ FEATURED READ]

Big Data: A Revolution That Will Transform How We Live, Work, and Think


“Illuminating and very timely . . . a fascinating — and sometimes alarming — survey of big data’s growing effect on just about everything: business, government, science and medicine, privacy, and even on the way we think… more

[ TIPS & TRICKS OF THE WEEK]

Grow at the speed of collaboration
Research by Cornerstone On Demand points to the need for better collaboration within the workforce, and the data analytics domain is no different. A rapidly changing and growing industry like data analytics is very difficult for an isolated workforce to keep up with. A good collaborative work environment facilitates a better flow of ideas, improved team dynamics, rapid learning, and a greater ability to cut through the noise. So, embrace collaborative team dynamics.

[ DATA SCIENCE Q&A]

Q:What is the maximal margin classifier? How can this margin be achieved?
A: * When the data can be perfectly separated using a hyperplane, there actually exist an infinite number of such hyperplanes
* Intuition: a hyperplane can usually be shifted a tiny bit up, or down, or rotated, without coming into contact with any of the observations
* Large margin classifier: choose the hyperplane that is farthest from the training observations
* This margin can be achieved using support vectors (see the sketch below)
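
A minimal sketch in Python with scikit-learn (the toy dataset is an assumption, not from the source): a linear SVC with a very large cost parameter C approximates the hard-margin, maximal margin classifier, and the fitted model exposes the support vectors that achieve the margin.

# Sketch: approximating the maximal margin classifier (toy data assumed).
import numpy as np
from sklearn.svm import SVC

X = np.array([[1, 2], [2, 3], [3, 3], [6, 5], [7, 8], [8, 8]])
y = np.array([0, 0, 0, 1, 1, 1])

clf = SVC(kernel="linear", C=1e6)  # very large C ~ hard (maximal) margin
clf.fit(X, y)

print(clf.support_vectors_)        # the observations that achieve the margin

w = clf.coef_[0]                   # for a linear SVM the margin width is 2 / ||w||
print("margin width:", 2 / np.linalg.norm(w))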

Source

[ VIDEO OF THE WEEK]

Understanding #Customer Buying Journey with #BigData


Subscribe to YouTube

[ QUOTE OF THE WEEK]

It’s easy to lie with statistics. It’s hard to tell the truth without statistics. – Andrejs Dunkels

[ PODCAST OF THE WEEK]

Understanding #BigData #BigOpportunity in Big HR by @MarcRind #FutureOfData #Podcast


Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

According to Twitter’s own research in early 2012, it sees roughly 175 million tweets every day, and has more than 465 million accounts.

Sourced from: Analytics.CLUB #WEB Newsletter

You’re Invited: Data Monetization Workshop 2018

Juice is proud to announce that it will host the third annual Data Monetization Workshop on Thursday, March 29, 2018 at the Nashville Technology Council’s Tech Hill Commons. Created by local data expert Lydia Jones in 2016, the Data Monetization Workshop brings together some of the top data and analytics practitioners in the country to discuss how to deploy data monetization in a business setting.

This year’s workshop will feature speakers and panelists from companies such as BuildingFootprintUSA, Crystal Project Inc., Dawex, Digital Reasoning, and Uber. The topics covered will be: 

  • The Now: What are the opportunities and business models you should consider to monetize your data? E.g. enhancing existing products, new data products, data marketplaces.

  • The Future: How will emergent technologies such as IoT and AI unlock new opportunities and challenges for data monetization?

Prior to the workshop, attendees will have the option to attend a data storytelling seminar led by Juice Analytics employees. The seminar will showcase Juice’s unique method for quickly and easily creating data stories from a given data set. The workshop will conclude with an open bar networking event.

Attendees in the past have come from Florida, Texas, New York, Georgia, California, Canada, and Australia, and from industries such as healthcare, finance, retail, technology, and government. They are typically members of the C-suite (including CEOs, CMOs, CFOs, CXOs, and CAOs) as well as data and analytics leaders, data scientists, investors, and data product development leaders, among others.

To learn more about this year’s Data Monetization Workshop, visit the link below. Please be sure to register in advance as seating is limited. We hope to see you there!

 

Learn More & register

Source by analyticsweek

Introduction to AzureKusto

By Hong Ooi and Alex Kyllo

This post is to announce the availability of AzureKusto, the R interface to Azure Data Explorer (internally codenamed “Kusto”), a fast, fully managed data analytics service from Microsoft. It is available from CRAN, or you can install the development version from GitHub via devtools::install_github("cloudyr/AzureKusto").

AzureKusto provides an interface (including DBI-compliant methods) for connecting to Kusto clusters and submitting Kusto Query Language (KQL) statements, as well as a dbplyr-style backend that translates dplyr queries into KQL statements. On the administrator side, it extends the AzureRMR framework to allow for creating clusters and managing database principals.

Connecting to a cluster

To connect to a Data Explorer cluster, call the kusto_database_endpoint() function. Once you are connected, call run_query() to execute queries and command statements.

library(AzureKusto)

## Connect to a Data Explorer cluster with (default) device code authentication
Samples <- kusto_database_endpoint(
    server="https://help.kusto.windows.net",
    database="Samples")

res <- run_query(Samples,
    "StormEvents | summarize EventCount = count() by State | order by State asc")

head(res)
##            State EventCount
## 1        ALABAMA       1315
## 2         ALASKA        257
## 3 AMERICAN SAMOA         16
## 4        ARIZONA        340
## 5       ARKANSAS       1028
## 6 ATLANTIC NORTH        188

# run_query can also handle command statements, which begin with a '.' character
res <- run_query(Samples, ".show tables | count")

res[[1]]
## Count
## 1     5

dplyr Interface

The package also implements a dplyr-style interface for building a query upon a tbl_kusto object and then running it on the remote Kusto database and returning the result as a regular tibble object with collect(). All the standard verbs are supported.

library(dplyr)
StormEvents <- tbl_kusto(Samples, "StormEvents")
q <- StormEvents %>%
    group_by(State) %>%
    summarize(EventCount=n()) %>%
    arrange(State)
show_query(q)
## database('Samples').['StormEvents']
## | summarize ['EventCount'] = count() by ['State']
## | order by ['State'] asc

collect(q)
## # A tibble: 67 x 2
##   State          EventCount
##
## 1 ALABAMA              1315
## 2 ALASKA                257
## 3 AMERICAN SAMOA         16
## ...

DBI interface

AzureKusto implements a subset of the DBI specification for interfacing with databases in R.

The following methods are supported:

  • Connections: dbConnect, dbDisconnect, dbCanConnect
  • Table management: dbExistsTable, dbCreateTable, dbRemoveTable, dbReadTable, dbWriteTable
  • Querying: dbGetQuery, dbSendQuery, dbFetch, dbSendStatement, dbExecute, dbListFields, dbColumnInfo

It should be noted, though, that Data Explorer is quite different from the SQL databases that DBI targets. This affects the behaviour of certain DBI methods and renders others moot.

library(DBI)

Samples <- dbConnect(AzureKusto(),
                     server="https://help.kusto.windows.net",
                     database="Samples")

dbListTables(Samples)
## [1] "StormEvents"       "demo_make_series1" "demo_series2"     
## [4] "demo_series3"      "demo_many_series1"

dbExistsTable(Samples, "StormEvents")
## [1] TRUE

dbGetQuery(Samples, "StormEvents | summarize ct = count()")
##      ct
## 1 59066

If you have any questions, comments or other feedback, please feel free to open an issue on the GitHub repo.

And one more thing…

As of Build 2019, Data Explorer can also run R (and Python) scripts in-database. For more information on this feature, currently in public preview, see the Azure blog and the documentation article.

 

 

Source

Nov 07, 19: #AnalyticsClub #Newsletter (Events, Tips, News & more..)


[ COVER OF THE WEEK]

Fake data (Source)

[ AnalyticsWeek BYTES]

>> Businesses must integrate Artificial Intelligence (AI) now or fall further behind by analyticsweekpick

>> Marketers must go big on data to compete on customer experience by analyticsweekpick

>> Tackling 4th Industrial Revolution with HR4.0 – Playcast – Data Analytics Leadership Playbook Podcast by v1shal

Wanna write? Click Here

[ FEATURED COURSE]

Artificial Intelligence


This course includes interactive demonstrations which are intended to stimulate interest and to help students gain intuition about how artificial intelligence methods work under a variety of circumstances…. more

[ FEATURED READ]

The Black Swan: The Impact of the Highly Improbable


A black swan is an event, positive or negative, that is deemed improbable yet causes massive consequences. In this groundbreaking and prophetic book, Taleb shows in a playful way that Black Swan events explain almost eve… more

[ TIPS & TRICKS OF THE WEEK]

Save yourself from a zombie apocalypse of unscalable models
One living, breathing zombie in today’s analytical models is the pulsating absence of error bars. Not every model is scalable or holds its ground as data grows. The error bars attached to almost every model should be duly calibrated. As business models rake in more data, error bars keep them sensible and in check. If error bars are not accounted for, our models become susceptible to failures, leading to a Halloween we never want to see.
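
As an illustrative sketch of the point (all numbers below are invented for the example): the standard error of a reported metric shrinks roughly as 1/sqrt(n), so recomputing error bars as data grows is what keeps a model's claims sensible and in check.

# Sketch: recompute an error bar as more data arrives (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
metric = rng.normal(loc=0.7, scale=0.2, size=1_000_000)  # fake metric samples

for n in (100, 10_000, 1_000_000):
    sample = metric[:n]
    stderr = sample.std(ddof=1) / np.sqrt(n)  # standard error ~ 1/sqrt(n)
    # Report the metric with a ~95% error bar, not as a bare point estimate.
    print(f"n={n:>9}: {sample.mean():.4f} +/- {1.96 * stderr:.4f}")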

[ DATA SCIENCE Q&A]

Q:Is it beneficial to perform dimensionality reduction before fitting an SVM? Why or why not?
A: * Yes, when the number of features is large compared to the number of observations (e.g. a document-term matrix)
* The SVM will typically perform better in this reduced space (see the sketch below)
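
A minimal sketch in Python with scikit-learn (the toy corpus and pipeline choices are assumptions): TruncatedSVD, a PCA-style reduction that works on sparse document-term matrices, shrinks the feature space before the SVM is fit.

# Sketch: dimensionality reduction before an SVM on a document-term matrix.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

docs = ["big data tools", "svm margins", "pca reduces dimensions",
        "kernels and margins", "data pipelines"]   # toy corpus (assumption)
labels = [0, 1, 1, 1, 0]

model = make_pipeline(
    TfidfVectorizer(),             # documents -> sparse document-term matrix
    TruncatedSVD(n_components=3),  # project into a low-dimensional space
    LinearSVC(),                   # SVM fit in the reduced space
)
model.fit(docs, labels)
print(model.predict(["pca before svm"]))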

Source

[ VIDEO OF THE WEEK]

Jeff Palmucci @TripAdvisor discusses managing a #MachineLearning #AI Team


Subscribe to YouTube

[ QUOTE OF THE WEEK]

It’s easy to lie with statistics. It’s hard to tell the truth without statistics. – Andrejs Dunkels

[ PODCAST OF THE WEEK]

Want to fix #DataScience ? fix #governance by @StephenGatchell @Dell #FutureOfData #Podcast


Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

And one of my favourite facts: at the moment less than 0.5% of all data is ever analysed and used. Just imagine the potential here.

Sourced from: Analytics.CLUB #WEB Newsletter

Big universe, big data, astronomical opportunity

30 Oct 2010: Open cluster Messier 39 in the constellation Cygnus. Image by © Alan Dyer/Stocktrek Images/Corbis

Astronomical data is and has always been “big data”. Once that was only true metaphorically, now it is true in all senses. We acquire it far more rapidly than the rate at which we can process, analyse and exploit it. This means we are creating a vast global repository that may already hold answers to some of the fundamental questions of the Universe we are seeking.

Does this mean we should cancel our upcoming missions and telescopes? After all, why continue to order food when the table is replete? Of course not. What it means is that, while we continue our inevitable yet budget-limited advancement into the future, we must also simultaneously do justice to the data we have already acquired.

In a small way we are already doing this. Consider citizen science, where public participation in the analysis of archived data increases the possibility of real scientific discovery. It’s a natural evolution, giving those with spare time on their hands the chance to advance scientific knowledge.

However, soon this will not be sufficient. What we need is a new breed of professional astronomy data-miners eager to get their hands dirty with “old” data, with the capacity to exploit more readily the results and findings.

Thus far, human ingenuity, and current technology have ensured that data storage capabilities have kept pace with the massive output of the electronic stargazers. The real struggle is now figuring out how to search and synthesize that output.

The greatest challenges for tackling large astronomical data sets are:

  • Visualisation of astronomical datasets
  • Creation and utilisation of efficient algorithms for processing large datasets
  • The efficient development of, and interaction with, large databases
  • The use of “machine learning” methodologies

The challenges unique to astronomical data are borne out of the characteristics of big data, the three Vs: volume (the amount of data), variety (the complexity of the data and the sources from which it is gathered) and velocity (the rate of data and information flow). It is a problem that is getting worse.

In 2004, the data I used for my Masters had been acquired in the mid-1990s by the United Kingdom Infra-Red Telescope (UKIRT), Hawaii. In total it amounted to a few tens of gigabytes.

Moving onward just a matter of months to my PhD, I was studying data taken from one of the most successful ground-based surveys in the history of astronomy, the Sloan Digital Sky Survey (SDSS). The volume of data I had to cope with was orders of magnitude greater.

SDSS entered routine operations in 2000. At the time of Data Release 12 (DR12) in July 2014 the total volume of that release was 116TB. Even this pales next to the Large Synoptic Survey Telescope (LSST). Planned to enter operation in 2022, it is aiming to gather 30TB a night.

To make progress with this massive data set, astronomy must embrace a new era of data-mining techniques and technologies. These include the application of artificial intelligence, machine learning, statistics, and database systems, to extract information from a data set and transform it into an understandable structure for further use.

Now while many scientists find themselves focused on solving these issues, let’s just pull back a moment and ask the tough questions. For what purpose are we gathering all this new data? What value do we gain from just collecting it? For that matter, have we learned all that we can from the data that we have?

It seems that the original science of data, astronomy, has a lot to learn from the new kid on the block, data science. Think about it. What if, as we strive to acquire and process more photons from across the farther reaches of the universe, from ever more exotic sources with ever more complex instrumentation, somewhere in a dusty server on Earth the answers are already here, if we would only pick up that dataset and look at it … possibly for the first time.

Dr Maya Dillon is the community manager for Pivigo. The company supports analytical PhDs making the transition into the world of Data Science and also runs S2DS: Europe’s largest data science boot-camp.

To read the original article on The Guardian, click here.

Originally Posted at: Big universe, big data, astronomical opportunity by analyticsweekpick

Oct 31, 19: #AnalyticsClub #Newsletter (Events, Tips, News & more..)


[ COVER OF THE WEEK]

Convincing (Source)

[ AnalyticsWeek BYTES]

>> The Quantum disruption in Global Business driven by The Big Analytics by analyticsweekpick

>> Hacking journalism: Data science in the newsroom by analyticsweekpick

>> 7 Things to Look before Picking Your Data Discovery Vendor by v1shal

Wanna write? Click Here

[ FEATURED COURSE]

Learning from data: Machine learning course


This is an introductory course in machine learning (ML) that covers the basic theory, algorithms, and applications. ML is a key technology in Big Data, and in many financial, medical, commercial, and scientific applicati… more

[ FEATURED READ]

Research Design: Qualitative, Quantitative, and Mixed Methods Approaches, 4th Edition


The eagerly anticipated Fourth Edition of the title that pioneered the comparison of qualitative, quantitative, and mixed methods research design is here! For all three approaches, Creswell includes a preliminary conside… more

[ TIPS & TRICKS OF THE WEEK]

Save yourself from a zombie apocalypse of unscalable models
One living, breathing zombie in today’s analytical models is the pulsating absence of error bars. Not every model is scalable or holds its ground as data grows. The error bars attached to almost every model should be duly calibrated. As business models rake in more data, error bars keep them sensible and in check. If error bars are not accounted for, our models become susceptible to failures, leading to a Halloween we never want to see.

[ DATA SCIENCE Q&A]

Q:How to efficiently scrape web data, or collect tons of tweets?
A: * Python example (a fetch-and-parse sketch follows below)
* Requesting and fetching the webpage into the code: httplib2 module
* Parsing the content and getting the necessary info: BeautifulSoup from the bs4 package
* Twitter API: the Python wrapper for performing API requests. It handles all the OAuth and API queries in a single Python interface
* MongoDB as the database
* PyMongo: the Python wrapper for interacting with the MongoDB database
* Cron jobs: a time-based scheduler for running scripts at specific intervals; allows you to avoid the “rate limit exceeded” error
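
A minimal sketch of the first two steps, using the httplib2 and bs4 modules named above (the URL is a placeholder; the Twitter API, MongoDB and cron pieces are omitted):

# Sketch: request the page, then parse out links (URL is a placeholder).
import httplib2
from bs4 import BeautifulSoup

http = httplib2.Http()
response, content = http.request("https://example.com", "GET")  # fetch the page

soup = BeautifulSoup(content, "html.parser")  # parse the HTML content
for link in soup.find_all("a"):               # extract the necessary info
    print(link.get("href"))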

Source

[ VIDEO OF THE WEEK]

Future of HR is more Relationship than Data - Scott Kramer @ValpoU #JobsOfFuture #Podcast


Subscribe to YouTube

[ QUOTE OF THE WEEK]

Hiding within those mounds of data is knowledge that could change the life of a patient, or change the world. – Atul Butte, Stanford

[ PODCAST OF THE WEEK]

Solving #FutureOfWork with #Detonate mindset (by @steven_goldbach & @geofftuff) #JobsOfFuture #Podcast


Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

40% projected growth in global data generated per year vs. 5% growth in global IT spending.

Sourced from: Analytics.CLUB #WEB Newsletter

@TimothyChou on World of #IOT & Its #Future Part 2 #FutureOfData #Podcast

[youtube https://www.youtube.com/watch?v=MYrijdCA0QY]

In this final part of a two-part podcast, @TimothyChou discusses the future of the Internet of Things landscape. He lays out how the internet has always been about the internet of things, not the internet of people. He sheds light on the internet of things as it spreads across the themes of things, connect, collect, learn and do workflows. He builds an interesting case for achieving precision before introducing optimality.

Timothy’s Recommended Read:
Life 3.0: Being Human in the Age of Artificial Intelligence by Max Tegmark http://amzn.to/2Cidyhy
Zone to Win: Organizing to Compete in an Age of Disruption Paperback by Geoffrey A. Moore http://amzn.to/2Hd5zpv

Podcast Link:
iTunes: http://math.im/itunes
GooglePlay: http://math.im/gplay

Timothy’s BIO:
Timothy Chou’s career has spanned academia, successful (and not so successful) startups and large corporations. He was one of only a few people to hold the President title at Oracle. As President of Oracle On Demand he grew the cloud business from its very beginning. Today that business is over $2B. He wrote about the move of applications to the cloud in 2004 in his first book, “The End of Software”. Today he serves on the board of Blackbaud, a nearly $700M vertical application cloud service company.

After earning his PhD in EE at the University of Illinois he went to work for Tandem Computers, one of the original Silicon Valley startups. Had he understood stock options he would have joined earlier. He’s invested in and been a contributor to a number of other startups, some you’ve heard of like Webex, and others you’ve never heard of but were sold to companies like Cisco and Oracle. Today he is focused on several new ventures in cloud computing, machine learning and the Internet of Things.

About #Podcast:
The #FutureOfData podcast is a conversation starter that brings leaders, influencers and leading practitioners onto the show to discuss their journeys in creating the data-driven future.

Wanna Join?
If you or anyone you know wants to join in,
Register your interest @ http://play.analyticsweek.com/guest/

Want to sponsor?
Email us @ info@analyticsweek.com

Keywords:
#FutureOfData #DataAnalytics #Leadership #Podcast #BigData #Strategy

Originally Posted at: @TimothyChou on World of #IOT & Its #Future Part 2 #FutureOfData #Podcast

Try these Customer Loyalty Questions for your Relationship Survey

Customer loyalty is the leading indicator of business growth. In fact, a main reason why companies implement voice of the customer (VoC) initiatives is to improve customer loyalty. In a 2010 study by Gleanster that asked 276 companies about their customer feedback management initiatives, a majority of the loyalty-leading companies said they implemented their program to increase customer loyalty, increase customer retention and increase customer satisfaction.

There are many different ways customers can engage in loyalty behaviors toward your company or brand. They can remain a customer for a long time. They can recommend you to their colleagues and friends. They can even show their loyalty by purchasing additional products/services from you. These loyalty behaviors, in turn, drive different types of business growth: overall customer growth, new customer growth, and average revenue per customer.

Customer relationship surveys, the foundation of many VoC programs, are used to measure customer loyalty, along with other important customer variables (e.g., satisfaction with their experience).  Including the right loyalty questions in your customer survey is essential to an effective VoC program. Companies use these surveys to understand and diagnose problem areas that, when fixed, will increase customer loyalty.

Not all Loyalty Questions are Created Equal

I have developed a set of customer loyalty questions that measure different types of customer loyalty. These loyalty questions have been shown to be predictive of different types of business growth and can be grouped into three sets of loyalty behaviors: retention, advocacy and purchasing. Each set of loyalty behaviors contains specific loyalty questions. Research shows that questions that fall into the same set are essentially interchangeable because they measure the same thing. Some of these customer loyalty questions appear below.

Retention Loyalty: the extent to which customers remain customers and/or do not use a competitor

  • How likely are you to switch to another provider? (0 – Not at all likely to 10 – Extremely likely)
  • How likely are you to renew your service contract? (0 – Not at all likely to 10 – Extremely likely)

Advocacy Loyalty: the extent to which customers advocate your product and/or brand

  • How likely are you to recommend us to your friends/colleagues? (0 – Not at all likely to 10 – Extremely likely)
  • Overall, how satisfied are you with our performance? (0 – Extremely dissatisfied to 10 – Extremely satisfied)

Purchasing Loyalty: the extent to which customers increase their purchasing behavior

  • How likely are you to purchase different solutions from us in the future? (0 – Not at all likely to 10 – Extremely likely)
  • How likely are you to expand the use of our products throughout your company? (0 – Not at all likely to 10 – Extremely likely)

Using Different Types of Loyalty Questions

Selecting the right customer loyalty questions for your survey requires careful thought about your customers and your business. Think about how your customers are able to show their loyalty toward your company and include loyalty questions that reflect those loyalty behaviors you want to manage and change. Additionally, consider your business growth strategy and current business environment. Think about current business challenges and select loyalty questions that will help you address those challenges. For example, if you have a high churn rate, you might consider using a retention loyalty question to more effectively identify solutions to increase customer retention. Additionally, if you are interested in increasing ARPU (average revenue per customer), you might consider including a purchasing loyalty question.

Using a comprehensive set of loyalty questions will help you target solutions to optimize different types of customer loyalty among existing customers and, consequently, improve business growth. Including a “likelihood to quit” question and a “likelihood to buy different” question can help you understand why customers are leaving and identify ways to increase customers’ purchasing behavior, respectively.

Customers can engage in a variety of loyalty behaviors. Companies need to think about customer loyalty more broadly and include different types of loyalty questions that meet their specific business needs and comprehensively capture important loyalty behaviors.

Source: Try these Customer Loyalty Questions for your Relationship Survey

Oct 24, 19: #AnalyticsClub #Newsletter (Events, Tips, News & more..)


[ COVER OF THE WEEK]

Data Mining (Source)

[ AnalyticsWeek BYTES]

>> Does Your Embedded Analytics “Play Nice” with DevOps? [Infographic] by analyticsweek

>> Validating a Lostness Measure by analyticsweek

>> Future of HR is more Relationship than Data – Scott Kramer #JobsOfFuture #Podcast by v1shal

Wanna write? Click Here

[ FEATURED COURSE]

Lean Analytics Workshop – Alistair Croll and Ben Yoskovitz


Use data to build a better startup faster in partnership with Geckoboard… more

[ FEATURED READ]

Python for Data Analysis: Data Wrangling with Pandas, NumPy, and IPython


Python for Data Analysis is concerned with the nuts and bolts of manipulating, processing, cleaning, and crunching data in Python. It is also a practical, modern introduction to scientific computing in Python, tailored f… more

[ TIPS & TRICKS OF THE WEEK]

Data Analytics Success Starts with Empowerment
Being data driven is not as much a tech challenge as it is an adoption challenge, and adoption has its roots in the cultural DNA of any organization. Great data-driven organizations weave the data-driven culture into their corporate DNA. A culture of connection, interaction, sharing and collaboration is what it takes to be data driven. It’s about being empowered more than it’s about being educated.

[ DATA SCIENCE Q&A]

Q:What is principal component analysis? Explain the sort of problems you would use PCA for. Also explain its limitations as a method?

A: A statistical method that uses an orthogonal transformation to convert a set of observations of correlated variables into a set of values of linearly uncorrelated variables called principal components.

Reduce the data from n to k dimensions: find the k vectors onto which to project the data so as to minimize the projection error.
Algorithm:
1) Preprocessing (standardization): PCA is sensitive to the relative scaling of the original variables
2) Compute the covariance matrix Σ
3) Compute the eigenvectors of Σ
4) Choose k principal components so as to retain x% of the variance (typically x = 99; a sketch follows this answer)

Applications:
1) Compression
– Reduce the disk/memory needed to store data
– Speed up the learning algorithm. Warning: the mapping should be defined only on the training set and then applied to the test set

2) Visualization: 2 or 3 principal components, so as to summarize the data

Limitations:
– PCA is not scale invariant
– The directions with largest variance are assumed to be of most interest
– Only considers orthogonal transformations (rotations) of the original variables
– PCA is based only on the mean vector and the covariance matrix. Some distributions (e.g. the multivariate normal) are fully characterized by these, but some are not
– If the variables are correlated, PCA can achieve dimension reduction. If not, PCA just orders them according to their variances
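
A minimal sketch of the algorithm above in Python with scikit-learn (the random data is a stand-in): standardization comes first, as in step 1, and passing a fraction as n_components picks k so that 99% of the variance is retained.

# Sketch: standardize, then keep enough principal components for 99% variance.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

X = np.random.default_rng(0).normal(size=(200, 20))   # stand-in data

X_std = StandardScaler().fit_transform(X)   # step 1: PCA is scale-sensitive
pca = PCA(n_components=0.99)                # choose k to retain 99% of variance
X_reduced = pca.fit_transform(X_std)

print("k =", pca.n_components_)
print("variance retained:", pca.explained_variance_ratio_.sum())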

Source

[ VIDEO OF THE WEEK]

@JohnTLangton from @Wolters_Kluwer discussed his #AI Lead Startup Journey #FutureOfData #Podcast


Subscribe to YouTube

[ QUOTE OF THE WEEK]

Hiding within those mounds of data is knowledge that could change the life of a patient, or change the world. – Atul Butte, Stanford

[ PODCAST OF THE WEEK]

Dave Ulrich (@dave_ulrich) talks about role / responsibility of HR in #FutureOfWork #JobsOfFuture #Podcast


Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

30 Billion pieces of content shared on Facebook every month.

Sourced from: Analytics.CLUB #WEB Newsletter

Embedded Analytics: The Build vs Buy Debate is Pointless

As embedded analytics become increasingly prominent in the business intelligence (BI) landscape, the question of whether companies should build or buy embedded BI applications seems more relevant than ever. The numerous attempts to answer this question ignore the basic fact that the question itself is misleading, since for most organizations there is no simple yes-or-no answer. Instead, the best practice for embedded analytics is neither “build” nor “buy” but, in fact, something more akin to partnership.

Understanding the Question

“Embedded analytics” is a blanket term that describes the integration of various features of business intelligence tools into other applications (often, but not exclusively, in SaaS). For example, a company that develops CRM software might want to provide more in-depth insights from the data it collects to either enhance the company’s general value proposition or to sell a premium service. Hence it may look to incorporate features such as data transformation, rapid big data querying or interactive visualizations to its own CRM software package.

Most professionals in the BI industry would agree that embedded reporting has become a major area of focus for both business and technology. Customers are demanding self-service, meaningful access to data, and competition is forcing companies to accommodate these demands, which in turn leads to more focus on building these types of capabilities.


In-House or Out-of-the-Box

The question of “to build or not to build” has become the subject of heated discussions when considering an embedded analytics project. Run a quick Google search for “build vs buy embedded analytics,” and you’ll be bombarded with page after page of articles asking and attempting to answer this exact question. I will briefly present the most common arguments for each side of the debate:

Developing BI features in-house gives companies more flexibility and control over the end product. The original application developer is the most intimately familiar with its product and customers, and so will be able to tailor a solution more precisely. Building BI features in-house, however, requires a significant investment and often yields sub-par results due to the level of investment required and the need for specialized skills.

Buying an “out-of-the-box” solution enables a company to leverage the massive investments already made by the BI provider and gives access to state-of-the-art BI capabilities.

In a majority of cases, companies that seek to provide meaningful data analysis capabilities to their customers would be better off embedding an existing product rather than starting from scratch. However, what I would like to stress is that the way this question is posed is in itself misleading: by far, the more common (and preferable) scenario is actually neither build nor buy, but a third solution that could more accurately be described as partnership.

Business Intelligence is Not a Commodity Product (Yet)

When people talk about “build vs buy,” one might get the impression that the option exists to go online and buy a turnkey embedded BI solution, which one can easily plug into an existing product and presto! Instant customer-facing analytics. Sadly, when it comes to more sophisticated needs and products, this is almost never the case.

I do not mean to imply that BI implementations need to be lengthy or difficult affairs, but merely that each implementation is different. A company that typically wants to present a hundred thousand rows of data to its customers does not need the same technological “muscle” as one that works with a hundred million rows; likewise, data that comes from dozens of structured and unstructured sources is quite different than neatly-organized tables in a SQL database. High-level data visualization is one thing (for example, an e-commerce app that displays traffic and sales to sellers), whereas advanced analytics, drill-downs, and customizable reports require entirely different capabilities.

When it comes to these types of more advanced use cases, the notion of a one-size-fits-all solution is unrealistic: the analytical features will need to be integrated into the existing application and customized to meet the exact needs of the specific product and customer base in terms of data modeling, security, management and reporting. Again, this is not to say that these integration efforts need to be overly complicated or require extensive development resources — however, they will require an understanding of the underlying data, and the ability to easily customize and communicate with the BI platform via API access.

Partnership, Not a One-Time Transaction

The decision to use an external provider for embedding analytics is more similar to a partnership than to a “get it and forget it” type of purchase. The developer and the BI provider work together to build the required data product, and continue to collaborate as products mature, new features are added and new needs arise.

Does this mean that the developer will have to rely on the BI provider for every change or customization? Absolutely not — developers should have complete independence and control over their own product. They should be the sole owner of the product, from end to end, and be able to develop it on their own, without having to rely on a vendor’s professional services or external consultants. In order to achieve such an outcome, developers should partner with a BI vendor that is an enabler, always keeping developers in mind. Best practices include maintenance of a comprehensive SDK, with excellent documentation, and designing the BI product as an open platform.

Open platforms enable easy access via commonly used APIs, ensuring the BI software is flexible enough to integrate with the developers’ existing systems seamlessly, and accommodating specific needs and requirements around data sources, security and similar considerations. And for the truly complex, heavyweight implementations — top BI vendors provide the professional resources needed to get customers up and running as fast as possible and to address the various maintenance issues that inevitably arise.

Furthermore, both parties should see their relationship as long term — new features introduced in the BI platform should always be built in an “API-first” approach, enabling application developers to quickly and easily incorporate these features into their own offering; communication between the BI vendor and the application developer needs to be open and frequent so that both can gain a better understanding of the other’s strengths and limitations and adjust development, support and account management efforts accordingly.

Understanding embedded analytics as an ongoing partnership, rather than a one-off purchase, will lead developers to ask more relevant questions before embarking on an embedded BI project; and lead BI providers to make a serious commitment to building truly open platforms, maintaining superb customer service and documentation. In such cases, everyone stands to benefit.


Source: Embedded Analytics: The Build vs Buy Debate is Pointless by analyticsweek