Clarifying Employee Engagement: A Review of Four Employee Engagement Measures

The concept of employee engagement is a popular one. I have seen many claims that companies with higher employee engagement have better outcomes (e.g., higher customer loyalty, increased employee performance, business growth) than companies with lower engagement. Consultants even tout their own measures of employee engagement and present research to show their effectiveness. Given everything I had read about the benefits of employee engagement, I figured I should learn more about this area. It turns out there is a lack of critical thinking when it comes to employee engagement.

I recently stumbled upon an excellent article by Bill Macey and Ben Schneider from Valtera. In their paper, The Meaning of Employee Engagement, the authors reviewed prior research that they felt best represented the conceptual space of employee engagement. They present a conceptual framework for understanding this loose engagement concept, helping to clarify the different meanings of employee engagement. This useful framework not only helps us speak clearly about the engagement construct; it can also help companies understand how employee engagement is affected by the work environment and how it relates to important business outcomes. I present a brief summary of their work below, along with my review of some measures of employee engagement. For those of you who are interested in learning more about the concept of employee engagement, I highly recommend you read the Macey and Schneider article.

The Employee Engagement Construct

Macey and Schneider found a commonality across the various definitions of employee engagement that reflects three things about the concept of engagement:

  1. Employee engagement is a desirable condition.
  2. Employee engagement has an organizational purpose.
  3. Employee engagement suggests absorption, dedication, passion, enthusiasm, focused effort and energy on the part of the employee.

The authors go on to clarify that employee engagement is different from employee satisfaction. Employee satisfaction is more about satiation; that is, employee satisfaction is about the employees’ evaluation of different parts of their work environment, something external. Either the work environment has certain characteristics, or it does not. Engagement, on the other hand, connotes activation on the part of the employee, the willingness to expend his or her discretionary effort to help the employer. The measurement of employee engagement needs to extend beyond the work environment and focus on something about the employee, something internal.

The Three Faces of Employee Engagement

To bring employee engagement into the measurable world, the authors conceptualize it as three distinct things:

  • Disposition or Trait Engagement: This type of engagement reflects people’s predisposition to experience the world from the perspective of enthusiasm and positive affectivity. Some people just have a positive outlook on life. This type of engagement suggests that certain people will naturally be predisposed to being engaged employees because that is how they approach everything in their lives.
  • State Engagement: This type of engagement is psychological in nature and reflects internal feelings of energy and absorption. State engagement is impacted (directly and indirectly) by trait engagement and by different aspects of the work environment (e.g., job variety, autonomy, senior leadership and other HR practices).
  • Behavioral Engagement: This type of engagement is represented in terms of discretionary effort by the employee (employees who consistently go above and beyond what is expected of them) to help the employer succeed.

The concept of employee engagement, then, includes three distinct but related concepts. As you will see below, I will focus on employee engagement measures that assess state employee engagement.

Evaluating Employee Engagement Measures

I was able to find four measures of employee engagement in a short Web search. While these four metrics are not meant to be an exhaustive list of employee engagement measures, understanding the review process can help you evaluate your own employee engagement measures. I will evaluate each employee engagement metric using the four criteria I use when evaluating any metric derived from survey responses (see Four Things You Need To Know About Your Customer Metric): 1) definition of the metric, 2) how the metric is calculated (including items and scoring method), 3) measurement properties of the metric (e.g., reliability and validity) and 4) usefulness of the metric (where is the business value?).

PeopleMetrics’ Employee Engagement Index (EEI)

  1. Definition: PeopleMetrics offers no clear definition of this metric.
  2. Calculation: No information is offered on the items or how they are aggregated to calculate the final score.
  3. Measurement Properties: No reliability evidence is provided. To support its validity, they do show the benefits of increased employee engagement; the EEI does predict important business outcomes.
  4. Usefulness: Even though the EEI does predict business outcomes, the use of the term “employee engagement” is confusing. Without knowing the specific questions (or even just a representative sample of them), we do not know what is being measured. The researchers attribute “employee engagement” as the underlying cause of the differences found using their metric, but could those differences be explained by an “employee satisfaction” model instead? It is difficult to know exactly what is being measured by this index.

Gallup’s Employee Engagement (EE)

  1. Definition: No definition of employee engagement is offered by Gallup.
  2. Calculation: This metric includes 12 questions (they appear in their brochure and are referred to as the 12 Elements of Engagement). They calculate an Engagement Ratio but never specify how this ratio is calculated (e.g., what are the cutoff points on the rating scale that divide respondents into Engaged, Not Engaged and Actively Disengaged employees?).
  3. Measurement Properties: There is no evidence of the reliability of their metric. They do provide evidence that the EE metric predicts useful business outcomes (e.g., higher profitability, lower turnover); but, upon inspection of the actual survey questions, their employee engagement measure is really a measure of employee satisfaction. The questions focus on the employee’s work environment (e.g., I have the materials I need to do my work right; My supervisor, or someone at work, seems to care about me as a person; I have a best friend at work).
  4. Usefulness: Even though the EE predicts business outcomes, the use of the term engagement to describe what is being measured is not warranted. The EE questions are simply descriptions of the work environment. They can be best described as employee satisfaction measures about different work areas.

Temkin Employee Engagement Index (TEEI)

  1. Definition: The Temkin Group offers no formal definition of this metric.
  2. Calculation: The TEEI is based on three questions: 1) I understand the overall mission of my company; 2) My company asks for my feedback and acts upon my input; 3) My company provides me with the training and the tools that I need to be successful. For each question, employees rate their level of agreement on a 1-7 scale. The overall metric is the sum across all three questions.
  3. Measurement Properties: There is no evidence of the reliability of their metric (does summing these three different questions make statistical sense?). They do offer some evidence of validity in that scores on the TEEI predict some business outcomes (e.g., higher employee loyalty, better customer experience).
  4. Usefulness: Similar to the EE above, the use of the term engagement to describe what this index measures is not warranted. The TEEI’s three questions do not require a new term, engagement, to describe what they measure. They are simply descriptions of the work environment or of HR practices perceived by employees as facilitating their work. These items could be best described as employee satisfaction measures about these three work areas.

Schaufeli, Salanova et al.’s Utrecht Work Engagement Scale (UWES)

Unlike the preceding measures, this scale assesses three distinct components of employee engagement: 1) Vigor, 2) Dedication and 3) Absorption.

  1. Definition: The authors provide a straightforward definition for each of their metrics. The authors state, “Vigor is characterized by high levels of energy and mental resilience while working, the willingness to invest effort in one’s work, and persistence even in the face of difficulties. Dedication refers to being strongly involved in one’s work and experiencing a sense of significance, enthusiasm, inspiration, pride, and challenge. Finally, Absorption is characterized by being fully concentrated and happily engrossed in one’s work, whereby time passes quickly and one has difficulties with detaching oneself from work.”
  2. Calculation: The UWES has 17 questions (9 for the short form, the UWES-9). The Vigor scale has 6 (3) questions; the Dedication scale has 5 (3) questions; the Absorption scale has 6 (3) questions. For each question, the employee is asked to indicate how frequently he or she felt this way at work on a 0 (Never) to 6 (Always / Every day) scale. A score for each of the three metrics is calculated as the average across its respective questions. An overall score for the entire UWES is calculated as the average rating across all 17 (9) questions (see the scoring sketch after this list).
  3. Measurement Properties: There is ample evidence provided regarding the reliability and validity of this scale. Each scale has acceptable levels of measurement precision (they can detect small differences). The authors provide factor analytic results to show that their measure of employee engagement is different from employee burnout. Inspecting the survey questions, we see that the UWES reflects something about the employee’s internal state (state engagement) rather than his or her evaluation of the work environment (e.g., At work, I feel full of energy; I am enthusiastic about my job; I am immersed in my work).
  4. Usefulness: The authors show that the UWES predicts service climate which, in turn, predicts employee performance and customer loyalty. Units with higher employee engagement had better outcomes (better service climate, better employee performance and higher customer loyalty) than units with lower employee engagement.
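
As a minimal illustration of this scoring method (the item ratings below are hypothetical, and numpy is simply a convenient tool here), the subscale and overall scores are plain averages:

    import numpy as np

    # Hypothetical UWES-9 ratings for one employee (0 = Never ... 6 = Always),
    # three items each for Vigor, Dedication and Absorption.
    vigor = np.array([5, 4, 5])
    dedication = np.array([6, 5, 5])
    absorption = np.array([4, 4, 3])

    print("Vigor:", vigor.mean())  # subscale score = average of its items
    print("Dedication:", dedication.mean())
    print("Absorption:", absorption.mean())
    print("Overall:", np.concatenate([vigor, dedication, absorption]).mean())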

Summary

It appears that the concept of employee engagement suggests an underlying energetic/effort component felt by the employee that is favorable to the organization. Measures of employee engagement can include such feelings as absorption, dedication, passion, enthusiasm, focused effort and energy on the part of the employee. Employee engagement can be conceptualized as a trait, a state or a behavior.

The employee engagement measures reviewed here differ in their quality as true measures of employee engagement. Based on their survey questions, some of these metrics are really measures of employee satisfaction with different areas of the organization, not of employee engagement. Some measures lack a clear definition of the metric, and the authors do not present the information needed to critically evaluate their measures (e.g., sample of items, reliability, validity). Of the employee engagement metrics reviewed here, the best measure of state employee engagement is the Utrecht Work Engagement Scale. The UWES reliably measures three underlying components of employee engagement. Scores on the UWES measure the internal state of the employees, not their satisfaction with working conditions.

Problems Remain

I have not seen any evidence that employee engagement metrics provide additional value in understanding business growth beyond what we know using employee satisfaction metrics. Even though the UWES has been shown to predict good business outcomes, I know of no evidence that it provides additional predictive power beyond what traditional employee surveys measure. To be of value to business, employee engagement measures need to tell us something more about the health of the employee relationship beyond what we already know through our traditional measures of employee satisfaction. Adding employee engagement questions to an already long employee survey could adversely impact response rates while providing little or no added value.

Does the use of employee engagement metrics help us identify how to better allocate our resources to ensure long term business success? Until somebody shows me some convincing evidence that employee engagement measures provide value beyond what we know using traditional measures (e.g., employee satisfaction and employee loyalty), I will likely not use them in my practice.

Final Thoughts

The term “employee engagement” is used loosely and carelessly across the blogosphere. This lazy practice only slows the progress of our collective knowledge of what is real and what is not. Fortunately, you can challenge what you are told. The next time you read something about employee engagement, insist on a definition of the metric and some sample items. Are these proclaimed employee engagement metrics measuring something entirely different from employee engagement? A cursory examination of the questions would be a good start.

This lack of clarity in thought and writing is not unique to the concept of employee engagement. I see loosey-goosey uses of words throughout the field of customer experience management (CEM). Specifically, the term “customer engagement” also suffers from a lack of clarity and precision. Some measures of customer engagement include questions that are traditionally labeled as customer loyalty questions. Until there is clarity in our understanding of what we mean when we say “customer engagement,” that term is meaningless to me. If the CEM field is to advance as a profession, it needs to use more precise terms to describe the variables with which it works.

Take a look at this recent segment from The Colbert Report, in which Stephen Colbert mocks some of the terms we use.

Source: Clarifying Employee Engagement: A Review of Four Employee Engagement Measures

May 25, 17: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

[Image: Productivity]

[ FEATURED COURSE]

CS229 – Machine Learning

This course provides a broad introduction to machine learning and statistical pattern recognition. … more

[ FEATURED READ]

The Black Swan: The Impact of the Highly Improbable

A black swan is an event, positive or negative, that is deemed improbable yet causes massive consequences. In this groundbreaking and prophetic book, Taleb shows in a playful way that Black Swan events explain almost eve… more

[ TIPS & TRICKS OF THE WEEK]

Data aids, not replaces, judgement
Data is a tool, a means to help build consensus and facilitate human decision-making, not to replace it. Analysis converts data into information; information, via context, leads to insight; and insights lead to decisions that ultimately produce valuable outcomes. So data is just the start: context and intuition also play a role.

[ DATA SCIENCE Q&A]

Q: Which kernels do you know? How do you choose a kernel?
A: * Gaussian (RBF) kernel
* Linear kernel
* Polynomial kernel
* Laplace kernel
* Esoteric kernels: string kernels, chi-square kernels
* If the number of features is large (relative to the number of observations): SVM with a linear kernel; e.g., text classification with lots of words and few training examples
* If the number of features is small and the number of observations is intermediate: Gaussian kernel
* If the number of features is small and the number of observations is small: linear kernel
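
A minimal sketch of these rules of thumb using scikit-learn (the dataset below is synthetic, and scikit-learn is just one convenient choice, not something named in the answer):

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X = rng.randn(200, 5)                           # few features, intermediate n
y = (X[:, 0] + X[:, 1] ** 2 > 0.5).astype(int)  # nonlinear decision boundary

# Compare a linear kernel against a Gaussian (RBF) kernel with cross-validation.
for kernel in ["linear", "rbf"]:
    scores = cross_val_score(SVC(kernel=kernel), X, y, cv=5)
    print(kernel, scores.mean())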

Source

[ VIDEO OF THE WEEK]

Rethinking classical approaches to analysis and predictive modeling

[ QUOTE OF THE WEEK]

With data collection, ‘the sooner the better’ is always the best answer. – Marissa Mayer

[ PODCAST OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData #Podcast with Juan Gorricho, @disney

[ FACT OF THE WEEK]

The largest AT&T database holds titles including the largest volume of data in one unique database (312 terabytes) and the second largest number of rows in a unique database (1.9 trillion), comprising AT&T’s extensive calling records.

Sourced from: Analytics.CLUB #WEB Newsletter

Tips To Hunt For That Great Travel Deal [video]

Have you ever found yourself in flux, chasing websites, agents, travel blogs and coupons to find a great travel deal? I am no different and spend a good chunk of hours hunting for travel deals. I came across an amazing video by Jason Cochran on WalletPop.com in which Jason walks us through easy-to-follow steps, helping us get to that great travel deal faster. I hope these tips help you get to your travel deal faster as well. These are great suggestions; I have been using a few of them myself, and they work great.

Let me know if there are any other tips/tricks that you use and are not covered in the video below.

Originally Posted at: Tips To Hunt For That Great Travel Deal

A CS Degree for Data Science — Part I, Efficient Numerical Computation

At The Data Incubator, we get tons of interest from PhDs looking to attend our free fellowship, which trains PhDs to join industry as quants and data scientists. A lot of them have asked what they can do to make themselves stronger candidates. One of the critical skills for being a data scientist is understanding computation and algorithms. Below is a (cursory) guide meant to whet your appetite for the computational techniques that I found useful as a data scientist. Remember, there’s a lifetime of things to learn, and these are just some highlights:

  1. Vectorized Linear Algebra.  Let’s say you want to compute the dot product of two very large vectors. You could use a for loop, but that’s slow. Instead, consider using vectorized linear algebra that calls out to professional numerical libraries like BLAS or LAPACK:
    import numpy as np
    x = np.random.randn(1000000)
    y = np.random.randn(1000000)
    
    z1 = sum(x[i] * y[i] for i in range(len(x)))  # Seconds elapsed: 0.60205
    z2 = np.dot(x, y)  # Seconds elapsed: 0.001251
    

    Run both samples and see which one takes longer (try using Python’s timeit module if you are not already familiar with it). In our example, the for-loop version is about 480 times slower. You can learn more about numerical computation from the numpy and scipy websites.
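
    If you have not used timeit before, here is a quick sketch of how you might time the two versions (the statements mirror the example above; the numbers will vary by machine):

    import timeit

    setup = "import numpy as np; x = np.random.randn(1000000); y = np.random.randn(1000000)"
    # Time the pure-Python loop once and the BLAS-backed dot product 100 times.
    print(timeit.timeit("sum(a * b for a, b in zip(x, y))", setup=setup, number=1))
    print(timeit.timeit("np.dot(x, y)", setup=setup, number=100) / 100)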

  2. Simulation.  The simplest workhorse when trying to get a handle on a complex system is simulation.  For example, consider the problem:

    You flip a fair coin n times. What is the expected number of heads? What is the standard deviation of the number of heads?

    Obviously, the number of heads follows a Binomial distribution and does not require simulation, but we will use the problem to didactically illustrate what one might do in more complex situations. The simulation code might be:

    np.random.seed(42)
    samples = np.random.randint(0,2,10000)
    print(samples.mean())  # 0.4987
    print(samples.std())  # 0.499998309997
    

    Again, notice how much faster vectorized operations are compared with running Python for loops. Obviously, the field can get very complex, with Dynamical Systems, Monte Carlo, Gibbs Sampling, Importance Sampling, and MCMC. Don’t forget to use bootstrapping to estimate error bars (see the sketch below).
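
    For instance, a minimal bootstrap of the standard error of the mean, continuing from the samples array defined above:

    # Resample with replacement many times and look at the spread of the means.
    boot_means = [np.random.choice(samples, size=len(samples)).mean()
                  for _ in range(1000)]
    print(np.std(boot_means))  # roughly 0.005, the standard error of the mean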

  3. Recursion.  Of course, simulations can never give you an exact answer.  One technique to get an exact answer that works in many cases is recursion.  The simplest example of recursion comes from implementing the Fibonacci sequence:
    def fib(n):
      if n < 2:
        return 1
      else:
        return fib(n-1) + fib(n-2)
    

    Try timing the runs to guess the running time of this Fibonacci implementation (spoiler alert: it’s exponential). You may be surprised by how slow it is (can you guess why?). To see how we might use recursion to solve the last problem, notice that the n-th draw can either increase or decrease the average number of heads by 1/n, and that each outcome occurs with probability 1/2. Here is the recursive code:

    def average_heads(n):
      if n == 1:
        return 0.5
      else:
        return np.mean([average_heads(n-1) + 1./n, average_heads(n-1) - 1./n])
    

    Think a little about how you might compute the standard deviation using this technique (Hint: it may help to review alternative formulas for variance). Another popular use of recursion is in graph traversal algorithms. Consider the question:

    How many possible ways are there to draw US coins that sum up to 50 cents?

    For the sake of definiteness, we will say that order of the drawn coins matters.  You can solve this problem by traversing the “graph” of all possible draws until we reach exactly 50 cents:

    coins = [1, 5, 10, 25, 50]
    def count(remainder):
      if remainder < 0:
        return 0
      if remainder == 0:
        return 1
      return sum(count(remainder - coin) for coin in coins)
    print(count(50))

    This is just the tip of the iceberg of what you can do with recursion.  If you are interested, try looking up algorithms like Depth-First Search, Breadth-First Search, or tail recursion.

  4. Memoization and Dynamic Programming.  Notice that in both of the above examples, we make multiple calls to the recursive function with the same input, which seems inefficient. A common way to speed this up is to remember (or memoize) the results of previous computations. As a simple example, take a look at the Python memoized class, which uses Python decorator syntax. Here it is in action:
    @memoized
    def fib(n):
      if n < 2:
        return 1
      else:
        return fib(n-1) + fib(n-2)
    

    This has now effectively turned a recursive program into one using Dynamic Programming. Try timing it to guess the running time of this Fibonacci implementation (spoiler alert: it’s linear). It’s amazing how much of a difference a single line makes!
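
    The memoized class itself is only linked, not shown, in the original; a minimal stand-in (assuming the decorated function takes hashable positional arguments) could look like this:

    import functools

    def memoized(f):
      cache = {}
      @functools.wraps(f)
      def wrapper(*args):
        if args not in cache:
          cache[args] = f(*args)  # compute each input once, then reuse it
        return cache[args]
      return wrapper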

  5. Divide and Conquer.  Another common approach is to break up a large problem into smaller subproblems.  A classic example of this is Merge Sort.  For the Fibonacci sequence, consider using the Matrix Form of the Fibonacci sequence:
    M = np.matrix([[1, 1], [1, 0]])
    
    def fib(n):
      if n < 2:
        return 1
      MProd = M.copy()
      for _ in range(n-2):
        MProd *= M
      return MProd[0,0] + MProd[0,1]
    

    Since the code relies on repeated matrix multiplication, it is very amenable to Divide and Conquer techniques (hint: M^8 = ((M^2)^2)^2). We’ll let you write down the algorithm and time it to verify that it runs in logarithmic time; one possible formulation appears below. (Isn’t it amazing that this can be done in sub-linear time!)
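
    If you want to check your answer, here is one possible (certainly not the only) formulation of the squaring approach, reusing the matrix M defined above:

    def mat_pow(A, k):
      # Divide and conquer: compute A**k with O(log k) matrix multiplications.
      if k == 1:
        return A
      half = mat_pow(A, k // 2)
      result = half * half
      if k % 2:
        result = result * A
      return result

    def fib_fast(n):
      if n < 2:
        return 1
      MProd = mat_pow(M, n - 1)
      return MProd[0, 0] + MProd[0, 1]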

  6. Coda: This is it for this tutorial, but if you know a little bit about matrix factorization, try working out a (pseudo-)constant time answer. Why isn’t this really constant time? (Spoiler alert: read a little bit about Arbitrary Precision Arithmetic.) We’re not going to emphasize it because the technique isn’t really all that generalizable, but it is still fun to think about.

Of course, this is just a very high-level overview that is meant to pique your interest rather than give you a full exposition.  If you find this kind of stuff interesting, consider applying to be a fellow at The Data Incubator!

This article appeared in The Data Incubator blog on January 7, 2015.

Source

May 18, 17: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

[Image: Insights]

[ AnalyticsWeek BYTES]

>> Challenges for Data Driven Organization by d3eksha

>> The Business of Data by anum

>> Four Ways Big Data Can Improve Customer Surveys by bobehayes

[ NEWS BYTES]

>> Exxon Mobil: Lies, Damned Lies, And Statistics – Seeking Alpha (under Statistics)

>> Free checklist shows how recruiters and HR teams can integrate online assessments with their ATS/HRIS – Onrec (under Talent Analytics)

>> Understanding Comcast’s “Internet of Things” Story in 11 Slides – Motley Fool (under Internet Of Things)

[ FEATURED COURSE]

Intro to Machine Learning

Machine Learning is a first-class ticket to the most exciting careers in data analysis today. As data sources proliferate along with the computing power to process them, going straight to the data is one of the most stra… more

[ FEATURED READ]

The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World

In the world’s top research labs and universities, the race is on to invent the ultimate learning algorithm: one capable of discovering any knowledge from data, and doing anything we want, before we even ask. In The Mast… more

[ TIPS & TRICKS OF THE WEEK]

Winter is coming, warm your Analytics Club
Yes and yes! As we head into winter, what better time to talk about our increasing dependence on data analytics to help with our decision making. Data- and analytics-driven decision making is rapidly making its way into our core corporate DNA, yet we are not building practice grounds to test those models fast enough. Such snug-looking models have hidden nails that can cause uncharted pain if they go unchecked. This is the right time to start thinking about setting up an Analytics Club [a Data Analytics CoE] in your workplace to lab out best practices and provide a test environment for those models.

[ DATA SCIENCE Q&A]

Q: How do you assess the statistical significance of an insight?
A: * Is this insight just observed by chance, or is it a real insight?
Statistical significance can be assessed using hypothesis testing:
– State a null hypothesis, which is usually the opposite of what we wish to test (classifiers A and B perform equivalently; treatment A is equal to treatment B)
– Then choose a suitable statistical test and the statistic used to reject the null hypothesis
– Also choose a critical region for the statistic to lie in that is extreme enough for the null hypothesis to be rejected (p-value)
– Calculate the observed test statistic from the data and check whether it lies in the critical region

Common tests:
– One-sample Z-test
– Two-sample Z-test
– One-sample t-test
– Paired t-test
– Two-sample pooled equal variances t-test
– Two-sample unpooled unequal variances t-test with unequal sample sizes (Welch’s t-test)
– Chi-squared test for variances
– Chi-squared test for goodness of fit
– ANOVA (for instance: are two regression models equal? F-test)
– Regression F-test (i.e., is at least one of the predictors useful in predicting the response?)
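
A minimal illustration of this workflow with scipy (the data below are synthetic, and scipy is just one convenient choice, not something named in the answer):

import numpy as np
from scipy import stats

rng = np.random.RandomState(0)
a = rng.normal(loc=0.50, scale=0.10, size=200)  # e.g., accuracy of classifier A
b = rng.normal(loc=0.53, scale=0.10, size=200)  # e.g., accuracy of classifier B

# Null hypothesis: the two classifiers perform equivalently (equal means).
t_stat, p_value = stats.ttest_ind(a, b, equal_var=False)  # Welch's t-test
print(t_stat, p_value)  # reject the null if p_value falls below the chosen threshold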

Source

[ VIDEO OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData #Podcast with Scott Zoldi, @fico

[ QUOTE OF THE WEEK]

We chose it because we deal with huge amounts of data. Besides, it sounds really cool. – Larry Page

[ PODCAST OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData #Podcast with Eloy Sasot, News Corp

[ FACT OF THE WEEK]

Big data is a top business priority and drives enormous opportunity for business improvement. Wikibon’s own study projects that big data will be a $50 billion business by 2017.

Sourced from: Analytics.CLUB #WEB Newsletter

Why Cloud and streaming will save the music business

Analysis: Despite rejection by acts including Taylor Swift, the streaming model could be set to take off.

You’ve heard the story before: a music industry that couldn’t adapt quickly enough to digital and is paying the price.

Record labels report sluggish, declining revenues as artists struggle to make a decent buck from their hard work and become disillusioned with the process.

Nobody’s quite sure where to cast the blame. Lars Ulrich of thrash metal band Metallica notably attracted criticism for blaming the downloaders, while Thom Yorke of Radiohead blames the industry itself. Perhaps it’s time to end the blame game and start looking at where the solution is going to come from.

Since its peak in the late 1990s when the industry rode high on a cresting wave of CD sales, the industry has taken its time to realise more significant revenue streams from digital channels. According to the International Federation for the Phonographic Industry (IFPI), 2014 was the first year to see the industry deriving the same proportion of revenues from digital channels as physical format sales, at 46 percent.

Breaking down this digital chunk further reveals that revenues from downloads globally fell by 8 percent in 2014. It is streaming services where the music industry is seeing its biggest and most sustained growth. The IFPI estimates 41 million people paid for music subscription services in 2014, a fivefold increase since 2010. In addition, revenues had grown by 39 percent in 2014 and grew consistently across major markets.

Artists are divided on streaming.

Certainly Spotify in particular has attracted criticism for not paying artists enough. The business-minded Taylor Swift removed her music from Spotify, anticipating (correctly) that she would be able to sell huge numbers of her most recent album 1989. On the other hand, English singer-songwriter Ed Sheeran credited the streaming service with allowing him to sell out Wembley Stadium.

However the artists feel, the market is growing. Working behind the scenes, but well placed to benefit from the growing streaming market as much as any stakeholder, are B2B cloud-based music providers such as Omnifone. According to founder and Chief Engineer Phil Sant, streaming will see the industry getting back to health within a few years.

“The music industry sort of stumbled into digital and added digital on the side. It’s taken quite a while for it to recognise that it’s a digital business. We recognised that the music industry was going to turn digital eventually, and it’s actually only really pivoting now…it will end up many times bigger than it was at the peak of CDs.”

In fact, Sant argues that it is the immaturity of the streaming market that has held back revenues and hence royalties, which has been a primary bugbear of critics such as Beck.

“There are seven billion people on the earth. There are currently 30 million, growing quickly to 40 million, music subscribers. 7 billion minus 35 million is still 7 billion. It’s really in its infancy still.

“Imagine what’s going to happen when there are 1 billion subscribers, which is where it will get. There will be more money available to everybody – all the rights holders, all the authors and all the musicians.”

Omnifone works by collecting content from record labels, including recordings and accompanying materials. They then host this on Amazon Web Services.

“What we ingest from labels are their high-resolution assets – the highest possible quality they’ve got,” says Sant. “A studio quality master is about 300 MB, whereas if we delivered that to a tiny mobile phone in India it would be about 600k. We ingest that, the associated artwork, the meta-data and the usage data.”

Quality, more than in many other industries, is key to music. Omnifone employs an expert audio engineer and ‘golden ears’, trained by James Guthrie, the man responsible for mastering and producing Pink Floyd’s ‘Dark Side of the Moon’. Alongside Spotify, Omnifone hosts Neil Young’s music service Pono. Carried on an idiosyncratic pyramid-shaped device, the service uses only high quality recordings. The ‘Heart of Gold’ hitmaker created Pono to tackle what he sees as the poor quality of MP3s and iTunes files.

“Although we’ve got the highest resolution available here, in the early days we were squeezing tracks down to the smallest format possible for tiny little feature phones,” Sant continues. “We found when dealing with thousands of labels that they all had different approaches to compression.

“We couldn’t give the users that. [The audio engineer] convinced me to go to the labels and convince them of the security, and we took lossless studio quality from them from day one. We have the biggest collection of 41 million lossless tracks in the world.”

The availability of such a large collection means that it’s not difficult for new players to launch into the market if they have a unique proposition. Omnifone removes the need for artists to collect their own music database, meaning that competition in this burgeoning market will remain healthy.

As adoption of subscription services increases, we should expect musicians and the industry to start taking it more seriously as a channel. This will mean better service, better revenues and ultimately, perhaps, better music.

 

Source: Why Cloud and streaming will save the music business by anum

May 11, 17: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

[Image: Data interpretation]

[ NEWS BYTES]

>> Certificate Course in Business Analytics – Mathrubhumi English (under Business Analytics)

>> Connecting Unemployed Youth with Organizations That Need Talent – Harvard Business Review (under Talent Analytics)

>> How big data delivers data driven stories – ITProPortal (under Big Data)

[ FEATURED COURSE]

Lean Analytics Workshop – Alistair Croll and Ben Yoskovitz

Use data to build a better startup faster in partnership with Geckoboard… more

[ FEATURED READ]

Big Data: A Revolution That Will Transform How We Live, Work, and Think

“Illuminating and very timely . . . a fascinating — and sometimes alarming — survey of big data’s growing effect on just about everything: business, government, science and medicine, privacy, and even on the way we think… more

[ TIPS & TRICKS OF THE WEEK]

Data Analytics Success Starts with Empowerment
Being data driven is not as much a tech challenge as an adoption challenge, and adoption has its roots in the cultural DNA of any organization. Great data-driven organizations build the data-driven culture into their corporate DNA. A culture of connection, interaction, sharing and collaboration is what it takes to be data driven. It’s about being empowered more than it’s about being educated.

[ DATA SCIENCE Q&A]

Q: How frequently must an algorithm be updated?
A: You want to update an algorithm when:
– You want the model to evolve as data streams through the infrastructure
– The underlying data source is changing
– Example: a retail store model that remains accurate as the business grows
– You are dealing with non-stationarity

Some options:
– Incremental algorithms: the model is updated every time it sees a new training example
Note: simple, and you always have an up-to-date model, but you cannot weight data to different degrees.
Sometimes mandatory: when data must be discarded once seen (privacy)
– Periodic re-training in “batch” mode: simply buffer the relevant data and update the model every so often
Note: more decisions and more complex implementations

How frequently?
– Is the sacrifice worth it?
– Data horizon: how quickly do you need the most recent training example to be part of your model?
– Data obsolescence: how long does it take before data is irrelevant to the model? Are some older instances more relevant than the newer ones?
Economics: generally, newer instances are more relevant than older ones. However, data from the same month, quarter or year of the previous year can be more relevant than data from the same periods of the current year. In a recession, data from previous recessions can be more relevant than newer data from different economic cycles.
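
A minimal sketch of the incremental option using scikit-learn’s partial_fit (the streaming mini-batches below are synthetic, and scikit-learn is just one convenient choice):

import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.RandomState(0)
model = SGDClassifier()
classes = np.array([0, 1])  # all classes must be declared up front for partial_fit

# Update the model every time a new mini-batch streams in.
for _ in range(100):
    X_batch = rng.randn(32, 5)
    y_batch = (X_batch[:, 0] > 0).astype(int)
    model.partial_fit(X_batch, y_batch, classes=classes)

# Evaluate on fresh data drawn from the same (synthetic) stream.
X_test = rng.randn(1000, 5)
y_test = (X_test[:, 0] > 0).astype(int)
print(model.score(X_test, y_test))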

Source

[ VIDEO OF THE WEEK]

@AnalyticsWeek Panel Discussion: Finance and Insurance Analytics

[ QUOTE OF THE WEEK]

The goal is to turn data into information, and information into insight. – Carly Fiorina

[ PODCAST OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData #Podcast with David Rose, @DittoLabs

[ FACT OF THE WEEK]

This year, over 1.4 billion smartphones will be shipped – all packed with sensors capable of collecting all kinds of data, not to mention the data the users create themselves.

Sourced from: Analytics.CLUB #WEB Newsletter

Data Science Falls Into Many Roles

Data science continues to grow in significance in industry, particularly in industries like software, IT consulting and finance. Last year I shared results from O’Reilly Media’s annual salary survey in this field in Revealing Data Science’s Job Potential. They have just recently released results for their third annual Data Science Salary survey and here are some of their findings.

Over 600 people completed the survey when the questions were opened to anyone, and the majority (67%) were from the U.S. The data allow a closer look by U.S. region, particularly California and the Northeast. Additionally, 25% of respondents were from the software industry, followed by 10% each for consulting and for finance—you can see the salary range breakdown by industry in Fig 1.

Fig 1. Data Scientist Salary Range by Industry 

Per their report, “Despite the fact that this is a ‘data science’ survey, only one-quarter of the respondents have job titles that explicitly identify them as ‘data scientists.’” Some roles like Team Lead, Manager and Upper Management mask this aspect, but in general the actual role of people who work on analytics is widely spread across job titles. Even within similar jobs, the salary range reflects differences by title (see Fig 2).

Roughly speaking, per these results, data scientists most commonly work 40-45 hours a week, are 26-35 years old, are largely male, average over $91,000 a year, spend 1-4 hours a week in meetings, use primarily Windows or Linux, and have skills in SQL, Excel, and the Python or R development languages and platforms.

Fig 2. Data Scientist Salary Range by Job Title

Relatedly, this past April I met up with Meredith Amdur of Wanted Analytics, from Quebec City, Canada, at HR Tech London to talk about deeper analytics for finding people with such particular skills. Wanted Analytics’ business takes data from multiple job search sites and can analyze job types by geography, skills and pay levels as currently advertised in the market. I expect such real-time analysis of job postings gives an alternative point-in-time view of how people are hiring, and I expect I will find more job search companies presenting such data analytics at the next HR Tech World Congress at the end of this October.

For original post click HERE.

Source by analyticsweekpick