How to Define KPIs for Successful Business Intelligence

Realizing that you can only improve what you measure is a good way to think about KPIs. Often companies want to improve different aspects of their business all at once, but can’t put their finger on what will measure their progress towards overarching company goals. Does it come down to comparing last year’s growth to this year’s? Or is it just about the cost of acquiring new customers?

If you’re nervously wondering now, “wait, what is my cost per deal?”, don’t sweat it. Another growing pain of deciding on KPIs is discovering that there is a lot of missing information.

Defining Your KPIs

Choosing the right KPI is crucial to making effective, data-driven decisions. Choose well, and the KPI will concentrate the efforts of employees on a meaningful goal; choose incorrectly, and you could waste significant resources chasing vanity metrics.

In order to rally the efforts of your team and achieve your long-term objectives, you have to measure the right things. For example, if the goal is to increase revenue at a SaaS company by 25% over the next two quarters, you couldn’t determine success by focusing on the number of likes your Facebook page got. Instead, you could ask questions like: Are we protecting our ARR by retaining our existing customers? Do we want to look at the outreach efforts of our sales development representatives, and whether that results in increased demos and signups? Should we look at the impact of increased training for the sales team on closed deals?


Similarly, if we want to evaluate the effectiveness of various marketing channels, we need to define more than an end goal of increasing sales or brand awareness. Instead, we’ll need a more precise definition of success. This might include ad impressions, click-through rates, conversion numbers, new email list subscribers, page visits, bounce rates, and much more.

Looking at all these factors will allow us to determine which channels are driving the most traffic and revenue. If we dig a bit deeper, there will be even more insights to discover. In addition to discovering which channels produce traffic most likely to translate into a conversion, we can also learn whether other factors, such as timing, make a difference in reaching our target audience.

Of course, every industry and business is different. To establish meaningful KPIs, you’ll need to determine what most clearly correlates with your company’s goals. Here are a few examples:

  • Finance – Working capital, Operating cash flow, Return on equity, Quick ratio, Debt to equity ratio, Inventory turnover, Accounts receivable turnover, Gross profit margin
  • Marketing – Customer acquisition cost, Conversion rate of a particular channel, Percentage of leads generated from a particular channel, Customer Churn, Dormant customers, Average spend per customer
  • Healthcare – Inpatient mortality rate, Bed turnover, Readmission rate, Average length of stay, Patient satisfaction, Total operating margin, Average cost per discharge, Cash receipt to bad debt, Claims denial rate
  • Retail – Gross margin (as a percentage of selling price), Inventory turnover, Sell-through percentage, Average sales per transaction, Percentage of total stock not displayed

If your business is committed to data-driven decision making, establishing the right KPIs is crucial. Although the process of building a performance-driven culture is iterative, clearly defining the desired end result will go a long way towards helping you establish effective KPIs that focus the efforts of your team on that goal, whether it’s to move product off shelves faster, create better patient outcomes, or increase your revenue per customer.

The good news is that in the business intelligence world, measuring performance can be especially precise, quick, and easy. Yet the first hurdle every data analyst faces is the initial struggle to choose and agree on company KPIs and KPI tracking. If you are about to embark on a BI project, here’s a useful guide to deciding what it is you want to measure:

Step 1: Isolate Pain Points, Identify Core Business Goals

A lot of companies start by trying to quantify their current performance. But again, as a data analyst, the beauty of your job and the power of business intelligence is that you can drill into an endless number of very detailed metrics. From clicks, site traffic, and conversion rates to service call satisfaction and renewals, the list goes on. So ask yourself: what makes the company better at what it does?

You can approach this question by focusing on growth stage: a startup would focus most on metrics that validate its business model, whereas an enterprise company would focus on metrics like customer lifetime value. Or, you can examine the question by industry: a services company (such as a consultancy) would focus more on the quality of services rendered, whereas a company that develops products would focus on product usage.

Ready to dive in? Start by going top-down through each department to elicit requirements and isolate the pain points and health factors for every department. Here are some examples of KPI metrics you may want to look at:

Product

  • Product related tickets
  • Customer satisfaction
  • Usage statistics (SaaS products)

Marketing KPIs

  • Brand awareness
  • Conversion rate
  • Site traffic
  • Social shares

R&D

  • Number of bugs
  • Length of development cycle
  • App usage

Step 2: Break It Down to A Few KPIs

Once you choose a few important KPIs, try to break them down even further. Remember, while there’s no magic number, less is almost always more when it comes to KPIs. If you track too many KPIs, as a data analyst you may start to lose your audience and the focus of the common business user. Aiming for the top 7-10 KPIs is a good target, and you can get there by breaking down your core business goals into much more specific metrics.

Remember, the point of a KPI is to gain focus and align goals for measurable improvement. Spend the time to choose your KPIs carefully rather than throwing too many into the mix, which will just push the question of focus further down the road (and require more work!).

Step 3: Carefully Assess Your Data


After you have your main 7-10 KPIs, you can start digging into the data and begin some data modeling. A good question to ask at this point is: how does the business currently make decisions? Counterintuitively, to answer that question you may want to look at where the company is currently not making its decisions based on data, or not collecting the right data.

This is where you get to flex your muscles as a “data hero” or a good analyst! Take every KPI and present it as a business question, then break the business question into facts, dimensions, filters, and order.

Not every business question contains all of these elements, but there will always be a fact, because you have to measure something. You’ll need to answer the following before moving on:

  • What are your data sources?
  • How complex will your data model be?
  • Which tools will you use to prepare, manage, and analyze the data (your BI solution)?

Do this by breaking each KPI into its data components, asking questions like: what do I need to count, what do I need to aggregate, which filters need to be applied? For each of these questions, you have to know which data sources are being used and where the tables are coming from.
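As a minimal sketch of that breakdown, suppose the KPI is monthly recurring revenue by region for active customers. The table and column names below are invented for illustration, and pandas stands in for whatever SQL or BI tooling you actually use:

```python
import pandas as pd

# Hypothetical subscription data; column names are illustrative only.
subscriptions = pd.DataFrame({
    "customer_id": [1, 2, 3, 4],
    "region":      ["EMEA", "EMEA", "AMER", "APAC"],
    "mrr_usd":     [500.0, 100.0, 750.0, 300.0],
    "is_active":   [True, True, False, True],
})

# KPI: "monthly recurring revenue by region, active customers only"
# Fact      -> mrr_usd (what we aggregate)
# Dimension -> region  (how we slice it)
# Filter    -> is_active == True (which rows count)
# Order     -> highest-revenue regions first
mrr_by_region = (
    subscriptions[subscriptions["is_active"]]
    .groupby("region")["mrr_usd"]
    .sum()
    .sort_values(ascending=False)
)
print(mrr_by_region)
```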

Consider that data will often come from multiple, disparate data sources. For example, for information on a marketing or sales pipeline, you’ll probably need Google Analytics/AdWords data combined with your CRM data. As a data analyst, it’s important to recognize that the most powerful KPIs often come from a combination of multiple data sources. Make sure you are using the right tools, such as a BI tool with built-in data connectors, to prepare and join data accurately and easily.
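Here is a small illustrative sketch of that kind of join, again with invented column names rather than any real connector schema, using pandas to blend an ad export with CRM deal data into a cost-versus-revenue view per campaign:

```python
import pandas as pd

# Hypothetical exports: one from an ad platform, one from a CRM.
ad_clicks = pd.DataFrame({
    "campaign":   ["brand", "retargeting", "brand"],
    "lead_email": ["a@x.com", "b@y.com", "c@z.com"],
    "cost_usd":   [12.50, 8.75, 15.00],
})
crm_deals = pd.DataFrame({
    "lead_email":     ["a@x.com", "c@z.com"],
    "deal_value_usd": [5000.0, 12000.0],
})

# Join the two sources on a shared key, then compute a blended KPI:
# spend and revenue per campaign (a rough cost-per-deal view).
pipeline = ad_clicks.merge(crm_deals, on="lead_email", how="left")
per_campaign = pipeline.groupby("campaign").agg(
    spend=("cost_usd", "sum"),
    revenue=("deal_value_usd", "sum"),
)
print(per_campaign)
```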

Step 4: Represent KPIs in an Accurate and Effective Fashion

Congrats! You’ve connected your KPI data to your business. Now you’ll need to find a way to represent the metrics in the most effective way. Check out some of these different BI dashboard examples for some inspiration.

One tip to keep in mind is that the goal of your dashboard is to put everyone on the same page. Still, users will each have their own questions and areas they want to explore, which is why building interactive, highly visual BI dashboards is important. Your BI solution should offer interactive dashboards that allow users to perform basic analytical tasks, such as filtering the views, drilling down, and examining underlying data – all with little training.

See an example:


Profit & Loss - Financial Dashboard

Closing

As a data analyst you should always look for insights in the data that the business never thought to ask about. People are often entrenched in their own processes, and as an analyst you offer an “outsider’s perspective” of sorts, since you only see the data, while others are clouded by their day-to-day business tasks. Don’t be afraid to ask the hard questions. Start with the most basic ones and you’ll be surprised how often big companies don’t know the answers – and you’ll be a data hero just for asking.


Source: How to Define KPIs for Successful Business Intelligence by analyticsweek

Follow the Money: The Demand for Deep Learning

Numbers don’t lie.

According to CB Insights, 100 of the most promising private startups focused on Artificial Intelligence raised $11.7 billion in equity funding across 367 deals during 2017. Several of those companies focus on deep learning technologies, including the most well-funded, ByteDance, which accounts for over a fourth of 2017’s private startup funding with $3.1 billion raised.

In the first half of last year alone, corporate venture capitalists contributed nearly $2 billion of disclosed equity funding in 88 deals to AI startups, surpassing the total financing for AI startups in all of 2016. The single largest corporate venture capital deal in the early part of 2017 was the $600 million Series D funding awarded to NIO, an organization based in China that specializes in autonomous vehicles (among other types of craft), which relies on aspects of deep learning.

According to Forrester, venture capital funding activity in computer vision increased at a CAGR of 137% from 2015 to 2017. Most aspects of advanced pattern recognition, including speech, image, facial recognition and others, hinge on deep learning. A Forbes post noted, “Google, Baidu, Microsoft, Facebook, Salesforce, Amazon, and all other major players are talking about – and investing heavily in – what has become known as ‘deep learning’.” Indeed, both Microsoft and Google have created specific entities to fund companies specializing in AI.

According to Razorthink CEO Gary Oliver, these developments are indicative of a larger trend in which, “If you look at where the investments are going from the venture community, if you look at some of the recent reports that have come out, the vast majority are focused on companies that are doing deep learning.”

Endless Learning
Deep learning is directly responsible for many of the valuable insights organizations can access via AI, since it can rapidly parse through data at scale to discern patterns that are otherwise too difficult to see or take too long to notice. In particular, deep learning actuates the unsupervised prowess of machine learning by detecting data-driven correlations to business objectives for variables on which it wasn’t specifically trained. “That’s what’s kind of remarkable about deep learning,” maintained Tom Wilde, CEO of indico, which recently announced $4 million in new equity seed funding. “That’s why when we see it in action we’re always like whoa, that’s pretty cool that the math can decipher that.” Deep learning’s capacity for unsupervised learning makes it extremely suitable for analyzing semi-structured and unstructured data. Moreover, when it’s leveraged on the enormous datasets required for speech, image, or even video analysis, it provides these benefits at scale at speeds suited to modern business timeframes.

Hybridization
Although this unsupervised aspect of deep learning is one of its more renowned attributes, it’s important to realize that deep learning is actually an advanced form of classic machine learning. As such, it was spawned from the latter despite the fact that its learning capabilities vastly exceed those of traditional machine learning. Nonetheless, there are still enterprise tasks which are suitable for traditional machine learning, and others which require deep learning. “People are aware now that there’s a difference between machine learning and deep learning, and they’re excited about the use cases deep learning can help,” Razorthink VP of Marketing Barbara Reichert posited. “We understand the value of hybrid models and how to apply both deep learning and machine learning so you get the right model for whatever problem you’re trying to solve.”

Whereas deep learning is ideal for analyzing big data sets with vast amounts of variables, classic machine learning remains well suited to simpler tasks. A good example of this is its utility in data management staples such as data discovery, in which it can determine relationships between data and use cases. “Once the data is sent through those [machine learning algorithms] the relationships are predicted,” commented Io-Tahoe Chief Technology Officer Rohit Mahajan. “That’s where we have to fine-tune a patented data set that will actually predict the right relationships with the right confidence.”

Data Science
An examination of the spending on AI companies and their technologies certainly illustrates a prioritization of deep learning’s worth to contemporary organizations. It directly impacts some of the more sophisticated elements of AI including robotics, computer vision, and user interfaces based on natural language and speech. However, it also provides unequivocally tangible business value in its analysis of unstructured data, sizable data sets, and the conflation of the two. Additionally, by applying these assets of deep learning to common data modeling needs, it can automate and accelerate certain facets of data science that had previously proved perplexing to organizations.

“Applications in the AI space are making it such that you don’t need to be a data science expert,” Wilde said. “It’s helpful if you kind of understand it at a high level, and that’s actually improved a lot. But today, you don’t need to be a data scientist to use these technologies.”

Source by jelaniharper

Dec 06, 18: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Correlation-Causation  Source

[ AnalyticsWeek BYTES]

>> The Last Layer of Cyber Security: Business Continuity and Disaster Recovery with Incremental Backups by jelaniharper

>> Boeing creates data analytics group by analyticsweekpick

>> How the Right Loyalty and Operational Metrics Drive Service Excellence – Webinar by bobehayes

Wanna write? Click Here

[ NEWS BYTES]

>> GSA Releases Customer Experience Playbook – Nextgov Under Customer Experience

>> Global Healthcare Financial Analytics Market expected to reach at an Extensive rate through Growth analysis – RBTE Report Under Financial Analytics

>> TTEC Announces Global Launch of Humanify™ Insights Platform – Directors Club Newswire (press release) (blog) Under Sales Analytics

More NEWS ? Click Here

[ FEATURED COURSE]

Introduction to Apache Spark


Learn the fundamentals and architecture of Apache Spark, the leading cluster-computing framework among professionals…. more

[ FEATURED READ]

The Future of the Professions: How Technology Will Transform the Work of Human Experts


This book predicts the decline of today’s professions and describes the people and systems that will replace them. In an Internet society, according to Richard Susskind and Daniel Susskind, we will neither need nor want … more

[ TIPS & TRICKS OF THE WEEK]

Fix the Culture, spread awareness to get awareness
Adoption of analytics tools and capabilities has not yet caught up to industry standards. Talent has always been the bottleneck to achieving comparable enterprise adoption. One of the primary reasons is a lack of understanding and knowledge among stakeholders. To facilitate wider adoption, data analytics leaders, users, and community members need to step up to create awareness within the organization. An aware organization goes a long way in helping get quick buy-ins and better funding, which ultimately leads to faster adoption. So be the voice that you want to hear from leadership.

[ DATA SCIENCE Q&A]

Q:Which kernels do you know? How to choose a kernel?
A: * Gaussian kernel
* Linear kernel
* Polynomial kernel
* Laplace kernel
* Esoteric kernels: string kernels, chi-square kernels
* If the number of features is large (relative to the number of observations): SVM with linear kernel; e.g. text classification with lots of words and a small training set
* If the number of features is small and the number of observations is intermediate: Gaussian kernel
* If the number of features is small and the number of observations is small: linear kernel
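A minimal scikit-learn sketch of this rule of thumb, on a purely synthetic dataset, comparing cross-validated accuracy across kernels:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic data: few features, moderate number of observations, so the
# heuristic above suggests the RBF (Gaussian) kernel is worth trying first.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

for kernel in ["linear", "rbf", "poly"]:
    model = make_pipeline(StandardScaler(), SVC(kernel=kernel))
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{kernel:>6}: mean CV accuracy = {score:.3f}")
```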

Source

[ VIDEO OF THE WEEK]

@AnalyticsWeek Panel Discussion: Finance and Insurance Analytics


Subscribe to  Youtube

[ QUOTE OF THE WEEK]

In God we trust. All others must bring data. – W. Edwards Deming

[ PODCAST OF THE WEEK]

Understanding #FutureOfData in #Health & #Medicine - @thedataguru / @InovaHealth #FutureOfData #Podcast


Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

Poor data can cost businesses 20%–35% of their operating revenue.

Sourced from: Analytics.CLUB #WEB Newsletter

Are You Asking the Right Predictive Questions?

Predictive analytics works by learning the patterns that exist in your historical data, then using those patterns to predict future outcomes. For example, if you need to predict if a customer will pay late, you’ll feed data samples from customers who paid on time and data from those who have paid late into your predictive analytics algorithm.

>> Related: Predictive Analytics 101 <<

The process of feeding in historical data for different outcomes and enabling the algorithm to learn how to predict is called the training process. Once your algorithm determines a pattern, you pass on information about a new customer and it will make a prediction. But the first step is deciding what predictive questions you want to answer.
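As a rough sketch of that train-then-predict loop for the late-payment example, using scikit-learn with a tiny invented dataset and hypothetical feature names (any classifier could stand in for “the algorithm”):

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Hypothetical historical invoices: features plus the known outcome.
history = pd.DataFrame({
    "invoice_amount":   [120, 800, 450, 90, 1500, 300, 700, 60],
    "days_as_customer": [400, 30, 200, 720, 15, 365, 90, 1000],
    "prior_late_count": [0, 3, 1, 0, 4, 0, 2, 0],
    "paid_late":        [0, 1, 1, 0, 1, 0, 1, 0],  # label: 1 = paid late
})

# Training: the algorithm learns the pattern from historical outcomes.
X = history.drop(columns="paid_late")
y = history["paid_late"]
model = RandomForestClassifier(random_state=0).fit(X, y)

# Prediction: pass in a new customer and get the learned pattern's answer.
new_invoice = pd.DataFrame(
    [{"invoice_amount": 950, "days_as_customer": 40, "prior_late_count": 2}]
)
print("Likely to pay late?", bool(model.predict(new_invoice)[0]))
```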


How do you know which predictive questions to ask?

When determining a predictive question, the rule of thumb is to base it on what you want to do with the answer. Following that logic, if we want to predict the number of late payments in a certain time frame—instead of whether a particular person will pay late (as in the above example)—our predictive question should be: “How many customers will make late payments next month?”

Let’s look at a slightly more complex example. If we’re forecasting volume for a call center, our predictive question might be: “How many calls will I get tomorrow?” That is a forecasting/regression question (like the one in the example above). However, we could also ask a binary question such as: “Will I get more than 200 calls tomorrow?” That is a classification question because the answer will either be yes or no.

The predictive question you should ask will depend on what you are going to do with the information. If you have the staff to handle 200 calls, then you will likely want to know if you’ll get 200 calls or not (so you’d ask the classification question). But if your goal is to identify how many calls you are going to get tomorrow so that you can staff accordingly, you would ask the forecasting question.

Let’s apply this rule to a different industry. If you’re in sales and your monthly goal is 250 sales referrals, you would ask a classification question such as: “Will I get 250 referrals or more next month?” But if you simply want to know your expected referral volume, without taking into consideration any monthly goals, then you’d ask the forecasting/regression question: “How many sales referrals will I get in the next month?”
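Framed in code, the same historical data can answer either style of question. The scikit-learn sketch below uses the call-center example with invented numbers: a regression model for “how many calls?” and a classification model for “more than 200 calls?”:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

# Toy history: yesterday's call volume -> today's call volume (invented numbers).
X = np.array([[180], [210], [195], [240], [170], [225], [205], [190]])
calls = np.array([190, 220, 200, 250, 175, 230, 210, 195])

# Forecasting/regression question: "How many calls will I get tomorrow?"
reg = LinearRegression().fit(X, calls)
print("Expected calls:", round(float(reg.predict([[215]])[0])))

# Classification question: "Will I get more than 200 calls tomorrow?"
clf = LogisticRegression().fit(X, (calls > 200).astype(int))
print("More than 200 calls?", bool(clf.predict([[215]])[0]))
```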

Over time, you’ll be able to run multiple algorithms to pick the one that works best with your data, or even use an ensemble of algorithms. You’ll also want to regularly retrain your learning model to keep up with fluctuations in your data based on the time of year, what activities your business has underway, and other factors. Set a timeline—maybe once a month or once a quarter—to regularly retrain your predictive analytics learning model to update the information.

To learn more about how predictive analytics can work for you, sign up for a free demo of Logi Predict >

 

Originally Posted at: Are You Asking the Right Predictive Questions?

Nov 29, 18: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Pacman  Source

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ AnalyticsWeek BYTES]

>> Data Scientists and the Practice of Data Science by bobehayes

>> Is Big Data The Most Hyped Technology Ever? by bobehayes

>> Apr 19, 18: #AnalyticsClub #Newsletter (Events, Tips, News & more..) by admin

Wanna write? Click Here

[ NEWS BYTES]

>> Dairy Farmers of America invests in artificial intelligence – Fence Post Under Artificial Intelligence

>> Global Big Data Security Market Global Market Demand, Growth, Opportunities, Top Key Players and Forecast to 2025 – Campus Telegraph Under Big Data Security

>> Another View: Girls in STEM statistics are dismal, but here’s how we’re working to change that – Foster’s Daily Democrat Under Statistics

More NEWS ? Click Here

[ FEATURED COURSE]

CPSC 540 Machine Learning


Machine learning (ML) is one of the fastest growing areas of science. It is largely responsible for the rise of giant data companies such as Google, and it has been central to the development of lucrative products, such … more

[ FEATURED READ]

How to Create a Mind: The Secret of Human Thought Revealed


Ray Kurzweil is arguably today’s most influential—and often controversial—futurist. In How to Create a Mind, Kurzweil presents a provocative exploration of the most important project in human-machine civilization—reverse… more

[ TIPS & TRICKS OF THE WEEK]

Analytics Strategy that is Startup Compliant
With the right tools, capturing data is easy, but not being able to handle data could lead to chaos. One of the most reliable startup strategies for adopting data analytics is TUM, or The Ultimate Metric. This is the metric that matters the most to your startup. Some advantages of TUM: it answers the most important business question, it cleans up your goals, it inspires innovation and helps you understand the entire quantified business.

[ DATA SCIENCE Q&A]

Q: What does NLP stand for?
A: Natural Language Processing
* The interaction between computers and human (natural) languages
* Involves natural language understanding

Major tasks:
– Machine translation
– Question answering: “what’s the capital of Canada?”
– Sentiment analysis: extract subjective information from a set of documents, identify trends or public opinions in social media
– Information retrieval
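As a small illustrative sketch of one of these tasks, here is sentiment analysis as a classic bag-of-words baseline in scikit-learn, trained on a tiny invented set of labelled examples:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented labelled examples: 1 = positive sentiment, 0 = negative.
docs = [
    "great product, love it", "terrible support, very slow",
    "works perfectly", "awful experience",
    "really happy with this", "worst purchase ever",
]
labels = [1, 0, 1, 0, 1, 0]

# Bag-of-words features + a linear classifier: a classic NLP baseline.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(docs, labels)

print(model.predict(["happy with the product", "slow and awful"]))
```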

Source

[ VIDEO OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData #Podcast with @DavidRose, @DittoLabs


Subscribe to  Youtube

[ QUOTE OF THE WEEK]

Numbers have an important story to tell. They rely on you to give them a voice. – Stephen Few

[ PODCAST OF THE WEEK]

@CRGutowski from @GE_Digital on Using #Analytics to #Transform Sales #FutureOfData #Podcast


Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

Akamai analyzes 75 million events per day to better target advertisements.

Sourced from: Analytics.CLUB #WEB Newsletter

Dickson Tang (@imDicksonT) on Building a Career Edge over Robots using #3iFramework #JobsOfFuture #Podcast

 

In this podcast Dickson Tang shares his perspective on building an organization with a future-ready and open mindset by working on its 3 Is: Individual, Infrastructure and Ideas. He shares his perspective on the various organization types and individuals who could benefit from this 3iFramework, elaborated in detail in his book “Leadership for future of work: ways to build career edge over robots with human creativity”. This podcast is great for anyone seeking to learn about ways to be an open, innovative change agent within an organization.

Dickson’s Book:

Leadership for future of work: 9 ways to build career edge over robots with human creativity by Dickson Tang amzn.to/2McxeIS

Dickson’s Recommended Read:
The Creative Economy: How People Make Money From Ideas by John Howkins amzn.to/2MdLotA

Podcast Link:
iTunes: math.im/jofitunes
Youtube: math.im/jofyoutube

Dickson’s BIO:
Dickson Tang is the author of the book Leadership for future of work: ways to build career edge over robots with human creativity. He helps senior leaders (CEO, MD and HR) build creative and effective teams in preparation for the future / robot economy. Dickson is a leadership ideas expert, focusing on how leadership will evolve in the future of work. He has 15+ years of experience in management, business consulting, marketing, organizational strategies and training & development, with corporate experience at several leading companies such as KPMG Advisory, Gartner and Netscape Inc.

Dickson’s expertise in leadership, creativity and the future of work has earned him invitations and opportunities to work with leaders and professionals from various organizations such as Cartier, CITIC Telecom, DHL, Exterran, Hypertherm, JVC Kenwood, Mannheim Business School, Montblanc and others.

He lives in Singapore, Asia.
LinkedIN: www.linkedin.com/in/imDicksonT
Twitter: www.twitter.com/imDicksonT
Facebook: www.facebook.com/imDicksonT
Youtube: www.youtube.com/channel/UC2b4BUeMnPP0fAzGLyEOuxQ

About #Podcast:
#JobsOfFuture is created to spark the conversation around the future of work, worker and workplace. This podcast invite movers and shakers in the industry who are shaping or helping us understand the transformation in work.

Wanna Join?
If you or anyone you know wants to join in,
Register your interest @ analyticsweek.com/

Want to sponsor?
Email us @ info@analyticsweek.com

Keywords:
#JobsOfFuture #FutureOfWork #FutureOfWorker #FutuerOfWorkplace #Work #Worker #Workplace

Source: Dickson Tang (@imDicksonT) on Building a Career Edge over Robots using #3iFramework #JobsOfFuture #Podcast

The Cost Of Too Much Data

I came across this interesting infographic on “The Cost Of Too Much Data” from Lattice. It sheds some light on the dollars lost from the lack of a big data initiative, elaborating on the spread of data-generation sources and how productivity and dollars are lost without a big data implementation.

The Cost Of Too Much Data Infographic

Like this infographic? Get more sales and marketing information here: http://www.lattice-engines.com/resource-center/knowledge-hub

Originally Posted at: The Cost Of Too Much Data by v1shal

Nov 22, 18: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Data Accuracy  Source

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ AnalyticsWeek BYTES]

>> Map of US Hospitals and their Health Outcome Metrics by bobehayes

>> Tips To Hunt For That Great Travel Deal  by v1shal

>> Google Cloud security updates for SEO before 2018 GDPR to change business data interactions! by thomassujain

Wanna write? Click Here

[ NEWS BYTES]

>> Enlisting Machine Learning to Fight Data Center Outages – Data Center Knowledge Under Data Center

>> CHART Study Shows New Hot Spots in Mental Health, Substance Misuse Crisis – BioSpace (press release) (blog) Under Health Analytics

>> Big data service gives charterers estimate of vessels’ fuel efficiency – Tanker Shipping and Trade Under Big Data

More NEWS ? Click Here

[ FEATURED COURSE]

CPSC 540 Machine Learning


Machine learning (ML) is one of the fastest growing areas of science. It is largely responsible for the rise of giant data companies such as Google, and it has been central to the development of lucrative products, such … more

[ FEATURED READ]

Data Science from Scratch: First Principles with Python


Data science libraries, frameworks, modules, and toolkits are great for doing data science, but they’re also a good way to dive into the discipline without actually understanding data science. In this book, you’ll learn … more

[ TIPS & TRICKS OF THE WEEK]

Data Have Meaning
We live in a Big Data world in which everything is quantified. While the emphasis of Big Data has been focused on distinguishing the three characteristics of data (the infamous three Vs), we need to be cognizant of the fact that data have meaning. That is, the numbers in your data represent something of interest, an outcome that is important to your business. The meaning of those numbers is about the veracity of your data.

[ DATA SCIENCE Q&A]

Q: Name a few famous APIs (for instance GoogleSearch)
A: Google API (Google Analytics, Picasa), Twitter API (interact with Twitter functions), GitHub API, LinkedIn API (users data)…
Source

[ VIDEO OF THE WEEK]

@AnalyticsWeek Panel Discussion: Big Data Analytics


Subscribe to  Youtube

[ QUOTE OF THE WEEK]

Numbers have an important story to tell. They rely on you to give them a voice. – Stephen Few

[ PODCAST OF THE WEEK]

Dave Ulrich (@dave_ulrich) talks about role / responsibility of HR in #FutureOfWork #JobsOfFuture #Podcast


Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

IDC estimates that by 2020, business transactions on the internet – business-to-business and business-to-consumer – will reach 450 billion per day.

Sourced from: Analytics.CLUB #WEB Newsletter

Democratizing Self-Service Cognitive Computing Analytics with Machine Learning

There are few areas of the current data landscape that the self-service movement has not altered and positioned firmly within the grasp of the enterprise and its myriad users, from novices to the most accomplished IT personnel.

One can argue that cognitive computing and its self-service analytics have always been a forerunner of this effort, as their capability of integrating and analyzing disparate sources of big data to deliver rapid results with explanations and recommendations proves.

Historically, machine learning and its penchant for predictive analytics has functioned as the most accessible of cognitive computing technologies that include natural language processing, neural networks, semantic modeling and vocabularies, and other aspects of artificial intelligence. According to indico co-founder and CEO Slater Victoroff, however, the crux of machine learning’s utility might actually revolve around deep learning and, specifically, transfer learning.

By accessing these technologies at scale via the cloud, enterprises can now deploy cognitive computing analytics on sets of big data without data scientists and the inordinate volumes of data required to develop the models and algorithms that function at the core of machine learning.

From Machine Learning to Deep Learning
The cost, scale, and agility advantages of the cloud have resulted in numerous Machine Learning-as-a-Service vendors, some of which substantially enhance enterprise utility with Deep Learning-as-a-Service. Machine learning is widely conceived of as a subset of predictive analytics in which existing models of algorithms are informed by the results of previous ones, so that future models are formed quicker to tailor analytics according to use case or data type. According to Slater, deep learning algorithms and models “result in better accuracies for a wide variety of analytical tasks.” Largely considered a subset of machine learning, deep learning is understood as a more mature form of the former. That difference is conceptualized in multiple ways, including “instead of trying to handcraft specific rules to solve a given problem (relying on expert knowledge), you let the computer solve it (deep learning approach),” Slater mentioned.

Transfer Learning and Scalable Advantages
The parallel is completed with an analogy of machine learning likened to an infant and deep learning likened to a child. Whereas an infant must be taught everything, “a child has automatically learnt some approximate notions of what things are, and if you can build on these, you can get to higher level concepts much more efficiently,” Slater commented. “This is the deep learning approach.” That distinction in efficiency is critical in terms of scale and data science requirements, as there is a “100 to 100,000 ratio” according to Slater on the amounts of data required to form the aforementioned “concepts” (modeling and algorithm principles to solve business problems) with a deep learning approach versus a machine learning one. That difference is accounted for by transfer learning, a subset of deep learning that “lets you leverage generalized concepts of knowledge when solving new problems, so you don’t have to start from scratch,” Slater revealed. “This means that your training data sets can be one, two or even three orders of magnitude smaller in size and this makes a big difference in practical terms.”
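As a rough sketch of that “child rather than infant” idea, a common transfer learning pattern is to reuse a network pre-trained on a large generic dataset and train only a small task-specific head on your much smaller dataset. The Keras snippet below assumes TensorFlow is available and is purely illustrative; it is not indico’s actual approach:

```python
import tensorflow as tf

# Start from generalized visual "concepts" learned on ImageNet...
base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights="imagenet"
)
base.trainable = False  # keep the transferred knowledge frozen

# ...and learn only a small task-specific head on a much smaller dataset.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # e.g. a binary image task
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(small_labelled_dataset, epochs=5)  # far less data needed than training from scratch
```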

Image and Textual Analytics on “Messy” Unstructured Data
Those practical terms expressly denote the difference between staffing multiple data scientists to formulate algorithms on exorbitant sets of big data, versus leveraging a library of preset models of service providers tailored to vertical industries and use cases. These models are also readily modified by competent developers. Providers such as indico offer these solutions for companies tasked with analyzing the most challenging “messy data sets”, as characterized by Slater. In fact, the vast forms of unstructured text and image analytics required of unstructured data is ideal for deep learning and transfer learning. “Messy data, by nature, is harder to cope with using handcrafted rules,” Slater observed. “In the case of images things like image quality, lighting conditions, etc. introduce noise. Sarcasm, double negatives, and slang are examples of noise in the text domain. Deep learning allows us to effectively work with real world noisy data and still extract meaningful signal.”

The foregoing library of models utilizing this technology can derive insight from an assortment of textual and image data including characteristics of personality, emotions, various languages, content filtering, and many more. These cognitive computing analytic capabilities are primed for social media monitoring and sentiment analysis in particular for verticals such as finance, marketing, public relations, and others.

Sentiment Analysis and Natural Language Processing
The difference with a deep learning approach is both in the rapidity and the granular nature of the analytics performed. Conventional natural language processing tools are adept at identifying specific words and spellings, and at determining their meaning in relation to additional vocabularies and taxonomies. NLP informed by deep learning can expand this utility to include entire phrases and a plethora of subtleties such as humor, sarcasm, irony and meaning that is implicit to native speakers of a particular language. Such accuracy is pivotal to gauging sentiment analysis.
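As an illustrative sketch of that difference, a pretrained deep learning model can be applied to whole phrases out of the box. The snippet below assumes the Hugging Face transformers library and its default pretrained sentiment model are available; how well it handles sarcasm, negation, or slang depends entirely on that model:

```python
from transformers import pipeline

# Downloads a default pretrained deep learning model for sentiment analysis.
sentiment = pipeline("sentiment-analysis")

# Phrases whose meaning hinges on negation or subtle wording, which simple
# keyword matching tends to misread.
examples = [
    "Oh great, another outage. Just what I needed today.",
    "Not bad at all, honestly better than I expected.",
]
for text in examples:
    print(text, "->", sentiment(text)[0])
```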

Additionally, the necessity of image analysis as part of sentiment analysis and other forms of big data analytics is only increasing. Slater characterized this propensity of deep learning in terms of popular social media platforms such as Twitter, in which images are frequently incorporated. Image analysis can detect when someone is holding up a “guitar, and writes by it ‘oh, wow’,” Slater said. Without that image analysis, organizations lose the context of the text and the meaning of the entire post. Moreover, image analysis technologies can also discern meaning in various facial expressions, gestures, and other aspects of text that yield insight.

Cognitive Computing Analytics for All
The provisioning of cognitive computing analytics via MLaaS and DLaaS illustrates once again exactly how pervasive the self-service movement is. It also demonstrates the democratization of analytics and the fact that with contemporary technology, data scientists and massive sets of big data (augmented by expensive physical infrastructure) are not required to reap the benefits of some of the fundamental principles of cognitive computing and other applications of semantic technologies. Those technologies and their applications, in turn, are responsible for increasing the very power of analytics and of data-driven processes themselves.

In fact, according to Cambridge Semantics VP of Marketing John Rueter, many of the self-service facets of analytics that are powered by semantic technologies “are built for the way that we think and the way that we analyze information. Now, we’re no longer held hostage by the technology and by solving problems based upon a technological approach. We’re actually addressing problems with an approach that is more aligned with the way we think, process, and do analysis.”

Source