Are You Asking the Right Predictive Questions?

Predictive analytics works by learning the patterns that exist in your historical data, then using those patterns to predict future outcomes. For example, if you need to predict whether a customer will pay late, you’ll feed your predictive analytics algorithm data samples from customers who paid on time and from those who paid late.

>> Related: Predictive Analytics 101 <<

The process of feeding in historical data for different outcomes and enabling the algorithm to learn how to predict is called the training process. Once your algorithm determines a pattern, you pass on information about a new customer and it will make a prediction. But the first step is deciding what predictive questions you want to answer.
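
As a rough illustration of the training process and prediction step described above, here is a minimal sketch using scikit-learn. The CSV file, column names and late-payment label are hypothetical stand-ins for whatever historical payment data you actually have, not a prescribed schema.

```python
# Minimal sketch of the training process and a prediction for a new customer.
# The file name and columns below are hypothetical placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

history = pd.read_csv("customer_payment_history.csv")   # one row per past customer
X = history[["invoice_amount", "days_past_due_avg", "num_prior_invoices"]]
y = history["paid_late"]                                 # 1 = paid late, 0 = paid on time

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)                              # the "training process"
print("holdout accuracy:", model.score(X_test, y_test))

# Pass in information about a new customer and get a prediction.
new_customer = pd.DataFrame(
    [{"invoice_amount": 1200.0, "days_past_due_avg": 3.5, "num_prior_invoices": 8}]
)
print("will pay late?", bool(model.predict(new_customer)[0]))
```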


How do you know which predictive questions to ask?

When determining a predictive question, the rule of thumb is to base it on what you want to do with the answer. Following that logic, if we want to predict the number of late payments in a certain time frame—instead of whether a particular person will pay late (as in the above example)—our predictive question should be: “How many customers will make late payments next month?”

Let’s look at a slightly more complex example. If we’re forecasting volume for a call center, our predictive question might be: “How many calls will I get tomorrow?” That is a forecasting/regression question (like the one in the example above). However, we could also ask a binary question such as: “Will I get more than 200 calls tomorrow?” That is a classification question because the answer will either be yes or no.

The predictive question you should ask will depend on what you are going to do with the information. If you have the staff to handle 200 calls, then you will likely want to know if you’ll get 200 calls or not (so you’d ask the classification question). But if your goal is to identify how many calls you are going to get tomorrow so that you can staff accordingly, you would ask the forecasting question.
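
To make both framings concrete, here is a hedged sketch showing how the same call-center history could feed either model: a regressor for “how many calls tomorrow?” and a classifier for “will I get more than 200 calls tomorrow?”. The data file, feature columns and the 200-call threshold are illustrative assumptions.

```python
# One historical dataset, two predictive questions: a regression target
# (tomorrow's call count) and a classification target (more than 200 calls?).
# The CSV and its columns are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

calls = pd.read_csv("daily_call_volume.csv")        # one row per day
features = calls[["day_of_week", "is_holiday", "calls_yesterday", "calls_last_week"]]

# Forecasting/regression question: "How many calls will I get tomorrow?"
regressor = GradientBoostingRegressor().fit(features, calls["call_count"])

# Classification question: "Will I get more than 200 calls tomorrow?"
classifier = GradientBoostingClassifier().fit(features, calls["call_count"] > 200)

tomorrow = features.tail(1)                         # stand-in for tomorrow's features
print("expected calls:", regressor.predict(tomorrow)[0])
print("more than 200 calls?", bool(classifier.predict(tomorrow)[0]))
```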

Let’s apply this rule to a different industry. If you’re in sales and your monthly goal is 250 sales referrals, you would ask a classification question such as: “Will I get 250 referrals or more next month?” But if you simply want to know your expected referral volume, without taking into consideration any monthly goals, then you’d ask the forecasting/regression question: “How many sales referrals will I get in the next month?”

Over time, you’ll be able to run multiple algorithms to pick the one that works best with your data, or even use an ensemble of algorithms. You’ll also want to regularly retrain your learning model to keep up with fluctuations in your data based on the time of year, what activities your business has underway, and other factors. Set a timeline—maybe once a month or once a quarter—to retrain your predictive analytics model on fresh data.
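
A retraining schedule can be as simple as refitting the model on a rolling window of recent data from a monthly or quarterly job. The sketch below reuses the hypothetical call-volume setup from the earlier example; the file names, columns and 18-month window are illustrative choices, not recommendations.

```python
# Minimal retraining sketch: refit on recent data and overwrite the saved model.
# A scheduler (cron, Airflow, etc.) would call retrain() monthly or quarterly.
# File names, columns and the 18-month window are hypothetical choices.
import pandas as pd
from joblib import dump
from sklearn.ensemble import GradientBoostingRegressor

def retrain(history_path="daily_call_volume.csv", model_path="call_model.joblib"):
    calls = pd.read_csv(history_path, parse_dates=["date"])

    # Keep a rolling window so the model tracks seasonality and recent shifts.
    cutoff = calls["date"].max() - pd.DateOffset(months=18)
    recent = calls[calls["date"] >= cutoff]

    X = recent[["day_of_week", "is_holiday", "calls_yesterday", "calls_last_week"]]
    y = recent["call_count"]

    model = GradientBoostingRegressor().fit(X, y)
    dump(model, model_path)        # replace the serving model with the fresh one
    return model

if __name__ == "__main__":
    retrain()
```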

To learn more about how predictive analytics can work for you, sign up for a free demo of Logi Predict >

 

Originally Posted at: Are You Asking the Right Predictive Questions?

Dickson Tang (@imDicksonT) on Building a Career Edge over Robots using #3iFramework #JobsOfFuture #Podcast

 

In this podcast, Dickson Tang shares his perspective on building a future-ready, open-minded organization by working on its 3 Is: Individual, Infrastructure and Ideas. He discusses the types of organizations and individuals who could benefit from this 3iFramework, elaborated in detail in his book “Leadership for future of work: ways to build career edge over robots with human creativity”. This podcast is great for anyone seeking to learn how to be an open, innovative change agent within an organization.

Dickson’s Book:

Leadership for future of work: 9 ways to build career edge over robots with human creativity by Dickson Tang amzn.to/2McxeIS

Dickson’s Recommended Read:
The Creative Economy: How People Make Money From Ideas by John Howkins amzn.to/2MdLotA

Podcast Link:
iTunes: math.im/jofitunes
Youtube: math.im/jofyoutube

Dickson’s BIO:
Dickson Tang is the author of “Leadership for future of work: ways to build career edge over robots with human creativity”. He helps senior leaders (CEOs, MDs and HR) build creative and effective teams in preparation for the future/robot economy. Dickson is a leadership ideas expert, focusing on how leadership will evolve in the future of work. He has 15+ years of experience in management, business consulting, marketing, organizational strategies and training & development, and corporate experience with several leading companies such as KPMG Advisory, Gartner and Netscape Inc.

Dickson’s expertise in leadership, creativity and the future of work has earned him invitations and opportunities to work with leaders and professionals from organizations such as Cartier, CITIC Telecom, DHL, Exterran, Hypertherm, JVC Kenwood, Mannheim Business School, Montblanc and others.

He lives in Singapore, Asia.
LinkedIN: www.linkedin.com/in/imDicksonT
Twitter: www.twitter.com/imDicksonT
Facebook: www.facebook.com/imDicksonT
Youtube: www.youtube.com/channel/UC2b4BUeMnPP0fAzGLyEOuxQ

About #Podcast:
#JobsOfFuture was created to spark the conversation around the future of work, worker and workplace. The podcast invites movers and shakers in the industry who are shaping, or helping us understand, the transformation of work.

Wanna Join?
If you or anyone you know wants to join in,
Register your interest @ analyticsweek.com/

Want to sponsor?
Email us @ info@analyticsweek.com

Keywords:
#JobsOfFuture #FutureOfWork #FutureOfWorker #FutureOfWorkplace #Work #Worker #Workplace

Source: Dickson Tang (@imDicksonT) on Building a Career Edge over Robots using #3iFramework #JobsOfFuture #Podcast

The Cost Of Too Much Data

I came across this interesting infographic, “The Cost Of Too Much Data,” from Lattice. It sheds light on the dollars lost from the lack of a big-data initiative, elaborating on the spread of data generation sources and how productivity and money are lost when big data goes unimplemented.

The Cost Of Too Much Data Infographic

Like this infographic? Get more sales and marketing information here: http://www.lattice-engines.com/resource-center/knowledge-hub

Originally Posted at: The Cost Of Too Much Data by v1shal

Democratizing Self-Service Cognitive Computing Analytics with Machine Learning

There are few areas of the current data landscape that the self-service movement has not altered and positioned firmly within the grasp of the enterprise and its myriad users, from novices to the most accomplished IT personnel.

One can argue that cognitive computing and its self-service analytics have always been a forerunner of this effort, as their capability of integrating and analyzing disparate sources of big data to deliver rapid results with explanations and recommendations proves.

Historically, machine learning and its penchant for predictive analytics has functioned as the most accessible of cognitive computing technologies that include natural language processing, neural networks, semantic modeling and vocabularies, and other aspects of artificial intelligence. According to indico co-founder and CEO Slater Victoroff, however, the crux of machine learning’s utility might actually revolve around deep learning and, specifically, transfer learning.

By accessing these technologies at scale via the cloud, enterprises can now deploy cognitive computing analytics on sets of big data without data scientists and the inordinate volumes of data required to develop the models and algorithms that function at the core of machine learning.

From Machine Learning to Deep Learning
The cost, scale, and agility advantages of the cloud have resulted in numerous Machine Learning-as-a-Service vendors, some of which substantially enhance enterprise utility with Deep Learning-as-a-Service. Machine learning is widely conceived of as a subset of predictive analytics in which existing models of algorithms are informed by the results of previous ones, so that future models are formed quicker to tailor analytics according to use case or data type. According to Slater, deep learning algorithms and models “result in better accuracies for a wide variety of analytical tasks.” Largely considered a subset of machine learning, deep learning is understood as a more mature form of the former. That difference is conceptualized in multiple ways, including “instead of trying to handcraft specific rules to solve a given problem (relying on expert knowledge), you let the computer solve it (deep learning approach),” Slater mentioned.

Transfer Learning and Scalable Advantages
The parallel is completed with an analogy of machine learning likened to an infant and deep learning likened to a child. Whereas an infant must be taught everything, “a child has automatically learnt some approximate notions of what things are, and if you can build on these, you can get to higher level concepts much more efficiently,” Slater commented. “This is the deep learning approach.” That distinction in efficiency is critical in terms of scale and data science requirements, as there is a “100 to 100,000 ratio” according to Slater on the amounts of data required to form the aforementioned “concepts” (modeling and algorithm principles to solve business problems) with a deep learning approach versus a machine learning one. That difference is accounted for by transfer learning, a subset of deep learning that “lets you leverage generalized concepts of knowledge when solving new problems, so you don’t have to start from scratch,” Slater revealed. “This means that your training data sets can be one, two or even three orders of magnitude smaller in size and this makes a big difference in practical terms.”
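
As a rough sketch of the transfer-learning idea (not indico's actual system), the example below reuses an ImageNet-pretrained ResNet50 as a frozen feature extractor and trains only a small classification head on a modest labeled dataset. The directory path and the three-class head are hypothetical.

```python
# Transfer learning sketch: reuse generalized "concepts" from a network
# pretrained on ImageNet and train only a small head on a small dataset.
# The data directory and number of classes are hypothetical.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "my_small_labeled_images/", image_size=(224, 224), batch_size=32
)
# Apply the preprocessing the pretrained network expects.
preprocess = tf.keras.applications.resnet50.preprocess_input
train_ds = train_ds.map(
    lambda images, labels: (preprocess(tf.cast(images, tf.float32)), labels)
)

base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, pooling="avg", input_shape=(224, 224, 3)
)
base.trainable = False                      # keep the pretrained weights frozen

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(3, activation="softmax"),   # e.g., 3 hypothetical classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Far less labeled data is needed than training a deep network from scratch.
model.fit(train_ds, epochs=5)
```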

Image and Textual Analytics on “Messy” Unstructured Data
Those practical terms expressly denote the difference between staffing multiple data scientists to formulate algorithms on exorbitant sets of big data, versus leveraging a library of preset models of service providers tailored to vertical industries and use cases. These models are also readily modified by competent developers. Providers such as indico offer these solutions for companies tasked with analyzing the most challenging “messy data sets”, as characterized by Slater. In fact, the vast forms of unstructured text and image analytics required of unstructured data is ideal for deep learning and transfer learning. “Messy data, by nature, is harder to cope with using handcrafted rules,” Slater observed. “In the case of images things like image quality, lighting conditions, etc. introduce noise. Sarcasm, double negatives, and slang are examples of noise in the text domain. Deep learning allows us to effectively work with real world noisy data and still extract meaningful signal.”

The foregoing library of models utilizing this technology can derive insight from an assortment of textual and image data including characteristics of personality, emotions, various languages, content filtering, and many more. These cognitive computing analytic capabilities are primed for social media monitoring and sentiment analysis in particular for verticals such as finance, marketing, public relations, and others.

Sentiment Analysis and Natural Language Processing
The difference with a deep learning approach is both in the rapidity and the granular nature of the analytics performed. Conventional natural language processing tools are adept at identifying specific words and spellings, and at determining their meaning in relation to additional vocabularies and taxonomies. NLP informed by deep learning can expand this utility to include entire phrases and a plethora of subtleties such as humor, sarcasm, irony and meaning that is implicit to native speakers of a particular language. Such accuracy is pivotal to gauging sentiment analysis.
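
As an illustrative sketch (not the specific tooling discussed here), a pretrained deep model exposed through the Hugging Face `transformers` sentiment pipeline scores whole phrases rather than matching keywords; how well it copes with sarcasm or slang depends on the underlying model. The example texts are made up.

```python
# Sentiment scoring with a pretrained deep model rather than handcrafted
# keyword rules. Example texts are made up; the pipeline downloads whatever
# default sentiment model the installed transformers version ships with.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

texts = [
    "Oh great, my flight got delayed again. Just what I needed.",  # sarcasm
    "This phone is sick, totally worth it.",                       # slang
    "The service was not bad at all.",                             # double negative
]

for text, result in zip(texts, classifier(texts)):
    print(f"{result['label']:>8}  {result['score']:.2f}  {text}")
```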

Additionally, the necessity of image analysis as part of sentiment analysis and other forms of big data analytics is only increasing. Slater characterized this propensity of deep learning in terms of popular social media platforms such as Twitter, in which images are frequently incorporated. Image analysis can detect when someone is holding up a guitar and writes “oh, wow” next to it, Slater said. Without that image analysis, organizations lose the context of the text and the meaning of the entire post. Moreover, image analysis technologies can also discern meaning in facial expressions, gestures, and other aspects of images that yield insight.

Cognitive Computing Analytics for All
The provisioning of cognitive computing analytics via MLaaS and DLaaS illustrates once again exactly how pervasive the self-service movement is. It also demonstrates the democratization of analytics and the fact that with contemporary technology, data scientists and massive sets of big data (augmented by expensive physical infrastructure) are not required to reap the benefits of some of the fundamental principles of cognitive computing and other applications of semantic technologies. Those technologies and their applications, in turn, are responsible for increasing the very power of analytics and of data-driven processes themselves.

In fact, according to Cambridge Semantics VP of Marketing John Rueter, many of the self-service facets of analytics that are powered by semantic technologies “are built for the way that we think and the way that we analyze information. Now, we’re no longer held hostage by the technology and by solving problems based upon a technological approach. We’re actually addressing problems with an approach that is more aligned with the way we think, process, and do analysis.”

Source

Hadoop demand falls as other big data tech rises


Hadoop makes all the big data noise. Too bad it’s not also getting the big data deployments.

Indeed, though Hadoop has often served as shorthand for big data, this increasingly seems like a mistake. According to a new Gartner report, “despite continuing enthusiasm for the big data phenomenon, demand for Hadoop specifically is not accelerating.”

According to the survey, most enterprises have “no plans at this time” to invest in Hadoop and a mere 26 percent have either deployed or are piloting Hadoop. They are, however, actively embracing other big data technologies.

‘Fairly anemic’ interest in Hadoop

For a variety of reasons, with a lack of Hadoop skills as the biggest challenge (57 percent), enterprises aren’t falling in love with Hadoop.

Indeed, as Gartner analyst Merv Adrian suggests in a new Gartner report (“Survey Analysis: Hadoop Adoption Drivers and Challenges“):

With such large incidence of organizations with no plans or already on their Hadoop journey, future demand for Hadoop looks fairly anemic over at least the next 24 months. Moreover, the lack of near-term plans for Hadoop adoption suggest that, despite continuing enthusiasm for the big data phenomenon, demand for Hadoop specifically is not accelerating.

How anemic? Think 54 percent with zero plans to use Hadoop, plus another 20 percent that at best will get to experimenting with Hadoop in the next year:

Chart: Gartner survey of Hadoop adoption plans (source: Gartner)


This doesn’t bode well for Hadoop’s biggest vendors. After all, as Gartner analyst Nick Huedecker posits, “Hadoop [is] overkill for the problems the business[es surveyed] face, implying the opportunity costs of implementing Hadoop [are] too high relative to the expected benefit.”

Selling the future of Hadoop

By some measures, this shortfall of interest hasn’t yet caught up with the top two Hadoop vendors, Cloudera and Hortonworks.

Cloudera, after all, will reportedly clear nearly $200 million in revenue in 2015, with a valuation of $5 billion, according to Manhattan Venture Partners. While the company is nowhere near profitability, it’s not struggling to grow and will roughly double revenue this year.

Hortonworks, for its part, just nailed a strong quarter. Annual billings grew 99 percent to $28.1 million, even as revenue exploded 167 percent to $22.8 million. To reach these numbers, Hortonworks added 105 new customers, up from 99 new customers in the previous quarter.

Still, there are signs that the hype is fading.

Hortonworks, despite beating analyst expectations handily last quarter, continues to fall short of the $1 billion-plus valuation it held at its last round of private funding. As I’ve argued, the company will struggle to justify a billion-dollar price tag due to its pure-play open source business model.

But according to the Gartner data, it may also struggle due to “fairly anemic” demand for Hadoop.

There’s a big mitigating factor. Hadoop vendors will almost surely languish — unless they’re willing to embrace adjacent big data technologies that complement Hadoop. As it happens, both leaders already have.

For example, even as Apache Spark has eaten into MapReduce interest, both companies have climbed aboard the Spark train.

But more is needed. Because big data is much more than Hadoop and its ecosystem.

For example, though the media has equated big data with Hadoop for years, data scientists have not. As Silicon Angle uncovered back in 2012 from its analysis of Twitter conversations, when data professionals talk about big data, they actually talk about NoSQL technologies like MongoDB as much as or more than Hadoop:

Chart: MongoDB vs. Hadoop mentions in Twitter conversations (source: Silicon Angle)

Today those same data professionals are likely to be using MongoDB and Cassandra, both among the world’s top 10 most popular databases, rather than HBase, which is the database of choice for Cloudera and Hortonworks but ranks a distant #15 in overall popularity, according to DB-Engines.

Buying an ecosystem

Let’s look at Gartner’s data again, this time comparing big data adoption and Hadoop adoption:

Chart: big data adoption vs. Hadoop adoption (source: Gartner)
A significant percentage of the delta between these two almost certainly derives from other, highly popular big data technologies such as MongoDB, Cassandra, Apache Storm, etc. They don’t fit into the current Hadoop ecosystem, but Cloudera and Hortonworks need to find ways to embrace them, or risk running out of Hadoop runway.

Nor is that the only risk.

As Aerospike executive and former Wall Street analyst Peter Goldmacher told me, a major problem for Hortonworks and Cloudera is that both are spending too much money to court customers. (As strong as Hortonworks’ billings growth was, it doubled its loss on the way to that growth as it spent heavily to grow sales.)

While these companies currently have a lead in terms of distribution, Goldmacher warns that Oracle or another incumbent could acquire one of them and thereby largely lobotomize the other, because of its superior claim on CIO wallets and broad-based suite offerings.

Neither Cloudera nor Hortonworks can offer that suite.

But what they can do, Goldmacher goes on, is expand their own big data footprint. For example, if Cloudera were to use its $4-to-5 billion valuation to acquire a NoSQL vendor, “All of a sudden other NoSQL vendors and Hortonworks are screwed because Cloudera would have the makings of a complete architecture.”

In other words, to survive long term, Hadoop’s dominant vendors need to move beyond Hadoop — and fast.

Originally posted by Matt Asay at: http://www.infoworld.com/article/2922720/big-data/hadoop-demand-falls-as-other-big-data-tech-rises.html

Source: Hadoop demand falls as other big data tech rises

3 Big Data Stocks Worth Considering

Big data is a trend that I’ve followed for some time now, and even though it’s still in its early stages, I expect it to continue to be a game changer as we move further into the future.

As our Internet footprint has grown, all the data we create — from credit cards to passwords and pictures uploaded on Instagram — has to be managed somehow.

This data is too vast to be entered into traditional relational databases, so more powerful tools are needed for companies to utilize the information to analyze customers’ behavior and predict what they may do in the future.

Big data makes it all possible, and as a result is one of the dominant themes for technology growth investing. We’ve invested in several of these types of companies in my GameChangers service over the years, one of which we’ll talk more about in just a moment.

First, let’s start with two of the biggest and best big data names out there. They’re among the best pure plays, and while I’m not sure the time is quite right to invest in either right now, they are both garnering some buzz in the tech world.

Big Data Stocks: Splunk (SPLK)

The first is Splunk (SPLK). Splunk’s flagship product is Splunk Enterprise, which at its core is a proprietary machine data engine that enables dynamic schema creation on the fly. Users can then run queries on data without having to understand the structure of the information prior to collection and indexing.

Faster, streamlined processes mean more efficient (and more profitable) businesses.

While Splunk is very small in terms of revenues, with January 2015 fiscal year sales of just $451 million, it is growing rapidly, and I’m keeping an eye on the name as it may present a strong opportunity down the road.

However, I do not want to overpay for it. Splunk brings effective technology to the table that is gaining market acceptance, and has strong security software partners with its recent entry into security analytics. At the right price, the stock could also be a takeover candidate for a larger IT company looking to enhance its Big Data presence.

Big Data Stocks: Tableau Software (DATA)

Another name on my radar is Tableau Software (DATA), which performs functions similar to Splunk’s. Its primary product, VizQL, translates drag-and-drop actions into data queries. In this way, the company puts data directly into the hands of decision makers without their first having to go through technical specialists.

In fact, the company believes all employees, no matter what their rank in the company, can use their product, leading to the democratization of data.

DATA is also growing rapidly, even faster than Splunk. Revenues were up 78% in 2014 and 75% in the first quarter of 2015, including license revenue growth of more than 70%. That rate is expected to slow somewhat, with revenue growth for all of 2015 estimated at a still-strong 50%.

Tableau stock is also very expensive, trading at 12X expected 2015 revenues of $618 million and close to 300X projected EPS of 40 cents for the year. DATA is a little risky to buy at current levels, but it is a name to keep an eye on in any pullback.

Big Data Stocks: Red Hat (RHT)

The company we made money on earlier this year in my GameChangers service is Red Hat (RHT). We booked a 15% profit in just a few months after it popped 11% on fourth-quarter earnings.

Red Hat is the world’s leading provider of open-source solutions, providing software to 90% of Fortune 500 companies. Some of RHT’s customers include well-known names like Sprint (S), Adobe Systems (ADBE) and Cigna Corporation (CI).

Management’s goal is to become the undisputed leader of enterprise cloud computing, and it sees its popular Linux operating system as a way to the top. If RHT is successful — as I expect it will be — Red Hat should have a lengthy period of expanded growth as corporations increasingly move into the cloud.

Red Hat’s operating results had clearly demonstrated that its solutions were gaining greater acceptance in IT departments, as revenues more than doubled in the five years between 2009 and 2014, from $748 million to $1.53 billion. I had expected the strong sales growth to continue throughout 2015, and it did. As I mentioned, impressive fiscal fourth-quarter results sent the shares 11% higher.

I recommended my subscribers sell their stake in the company at the end of March because I believed any further near-term upside was limited. Since then, shares have traded mostly between $75 and $80. It is now at the very top of that range and may be on the verge of breaking above it after the company reported fiscal first-quarter results last night.

Although orders were a little slow, RHT beat estimates on both the top and bottom lines in the first quarter. Earnings of 44 cents per share were up 29% quarter-over-quarter, besting estimates on the Street for earnings of 41 cents. Revenue climbed 14% to $481 million, while analysts had been expecting $472.6 million.

At this point, RHT is back in uncharted territory, climbing to a new 52-week high earlier today. This is a company with plenty of growth opportunities ahead, and while growth may slow a bit in the near term following the stock’s impressive climb so far this year, RHT stands to gain as corporations continue to adopt additional cloud technologies.

To read the original article on InvestorPlace, click here.

Originally Posted at: 3 Big Data Stocks Worth Considering by analyticsweekpick

Evaluating Hospital Quality using Patient Experience, Health Outcomes and Process of Care Measures

Patient experience (PX) has become an important topic for US hospitals. The Centers for Medicare & Medicaid Services (CMS) will be using patient feedback about their care as part of their reimbursement plan for acute care hospitals (see Hospital Value-Based Purchasing Program). Not surprisingly, hospitals are focusing on improving the patient experience to ensure they receive the maximum of their incentive payments. Additionally, US hospitals track other types of metrics (e.g., process of care and mortality rates) as measures of quality of care.

Given that hospitals have a variety of metrics at their disposal, it would be interesting to understand how these different metrics are related with each other. Do hospitals that receive higher PX ratings (e.g., more satisfied patients) also have better scores on other metrics (lower mortality rates, better process of care measures) than hospitals with lower PX ratings? In this week’s post, I will use the following hospital quality metrics:

  1. Patient Experience
  2. Health Outcomes (mortality rates, re-admission rates)
  3. Process of Care

I will briefly cover each of these metrics below.

Table 1. Descriptive Statistics for PX, Health Outcomes and Process of Care Metrics for US Hospitals (acute care hospitals only)

1. Patient Experience

Patient experience (PX) reflects the patients’ perceptions about their recent inpatient experience. PX is collected by a survey known as HCAHPS (Hospital Consumer Assessment of Healthcare Providers and Systems). HCAHPS (pronounced “H-caps“) is a national, standardized survey of hospital patients and was developed by a partnership of public and private organizations and was created to publicly report the patient’s perspective of hospital care.

The survey asks a random sample of recently discharged patients about important aspects of their hospital experience. The data set includes patient survey results for over 3,800 US hospitals on ten measures of patients’ perspectives of care (e.g., nurse communication, pain well controlled). I combined two general questions (overall hospital rating and willingness to recommend) to create a patient advocacy metric. Thus, a total of 9 PX metrics were used. Across all 9 metrics, hospital scores can range from 0 (bad) to 100 (good). You can see the PX measures for different US hospitals here.

2. Process of Care

Process of care measures show, in percentage form or as a rate, how often a health care provider gives recommended care; that is, the treatment known to give the best results for most patients with a particular condition. The process of care metric is based on medical information from patient records and reflects the rate or percentage across 12 procedures related to surgical care. Some of these procedures concern antibiotics being given/stopped at the right times and treatments to prevent blood clots. These percentages were translated into scores that range from 0 (worst) to 100 (best). Higher scores indicate that the hospital has a higher rate of following best practices in surgical care. Details of how these metrics were calculated appear below the map.

I calculated an overall Process of Care metric by averaging the 12 process of care scores. This composite was used because it has good measurement properties (internal consistency was .75) and thus reflects a good overall measure of process of care. You can see the process of care measures for different US hospitals here.
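
As a sketch of how such an overall metric and its internal consistency could be computed, assuming a table with one row per hospital and twelve hypothetical process-of-care columns scored 0 to 100:

```python
# Sketch: average 12 process-of-care scores into one overall metric and
# check internal consistency with Cronbach's alpha. File and column names
# are hypothetical placeholders for the actual measures.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

hospitals = pd.read_csv("process_of_care_scores.csv")       # hypothetical file
poc_cols = [f"poc_measure_{i}" for i in range(1, 13)]        # 12 measures, 0-100

hospitals["process_of_care_overall"] = hospitals[poc_cols].mean(axis=1)
print("internal consistency (alpha):", round(cronbach_alpha(hospitals[poc_cols]), 2))
```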

3. Health Outcomes

Measures that tell what happened after patients with certain conditions received hospital care are called “Outcome Measures.” We use two general types of outcome measures: 1) 30-day Mortality Rate and 2) 30-day Readmission Rate. The 30-day risk-standardized mortality and 30-day risk-standardized readmission measures for heart attack, heart failure, and pneumonia are produced from Medicare claims and enrollment data using sophisticated statistical modeling techniques that adjust for patient-level risk factors and account for the clustering of patients within hospitals.

The death rates focus on whether patients died within 30 days of their hospitalization. The readmission rates focus on whether patients were hospitalized again within 30 days.

Three mortality rate and readmission rate measures were included in the healthcare dataset for each hospital. These were:

  1. 30-Day Mortality Rate / Readmission Rate from Heart Attack
  2. 30-Day Mortality Rate / Readmission Rate from Heart Failure
  3. 30-Day Mortality Rate / Readmission Rate from Pneumonia

Mortality/readmission rates are measured per 1,000 patients. So, if a hospital has a heart attack mortality rate of 15, that means that for every 1,000 heart attack patients, 15 of them die within 30 days (readmission rates are read the same way). You can see the health outcome measures for different US hospitals here.

Table 2. Correlations of PX metrics with Health Outcome and Process of Care Metrics for US Hospitals (acute care hospitals only).

Results

The three types of metrics (PX, Health Outcomes, Process of Care) were housed in separate databases on the data.medicare.gov site. As explained elsewhere in my post on Big Data, I linked these three data sets together by hospital name. Basically, I federated the necessary metrics from their respective databases and combined them into a single data set.
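
A minimal sketch of that linking step and of the correlation table, assuming each downloaded extract shares a common hospital-name column; the file names, column names and chosen metrics are hypothetical placeholders:

```python
# Sketch: join the three metric files on hospital name, then correlate the
# PX measures with the outcome and process-of-care measures.
# File and column names are hypothetical placeholders for the actual extracts.
import pandas as pd

px      = pd.read_csv("hcahps_patient_experience.csv")   # PX measures per hospital
outcome = pd.read_csv("health_outcomes.csv")             # mortality / readmission rates
process = pd.read_csv("process_of_care.csv")             # process-of-care scores

merged = (
    px.merge(outcome, on="hospital_name", how="inner")
      .merge(process, on="hospital_name", how="inner")
)

px_cols      = ["nurse_communication", "pain_control", "patient_advocacy_index"]
quality_cols = ["mortality_heart_attack", "readmission_pneumonia", "process_of_care_overall"]

# Each cell is the Pearson correlation between one PX metric and one quality metric.
corr_table = merged[px_cols + quality_cols].corr().loc[px_cols, quality_cols]
print(corr_table.round(2))
```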

Descriptive statistics for each variable are located in Table 1. The correlations of each of the PX measures with each of the Health Outcome and Process of Care measures are located in Table 2. As you can see, the correlations of PX with the other hospital metrics are very low, suggesting that PX measures are assessing something quite different from the Health Outcome and Process of Care measures.

Patient Loyalty and Health Outcomes and Process of Care

Patient loyalty/advocacy (as measured by the Patient Advocacy Index) is correlated in the expected direction with the other measures (except for Death Rate from Heart Failure). Hospitals that have higher patient loyalty ratings have lower death rates, lower readmission rates and higher levels of process of care. The degree of relationship, however, is quite small (the percent of variance explained by patient advocacy is only 3%).

Patient Experience and Health Outcomes and Process of Care

Patient experience (PX) shows a complex relationship with health outcome and process of care measures. It appears that hospitals that have higher PX ratings also report higher death rates. However, as expected, hospitals that have higher PX ratings report lower readmission rates. Although statistically significant, all of the correlations of PX metrics with other hospitals metrics are low.

The PX dimension that had the highest correlation with readmission rates and process of care measures was “Given Information about my Recovery upon discharge.” Hospitals that received high scores on this dimension also experienced lower readmission rates and higher process of care scores.

Summary

Hospitals track different types of quality metrics that are used to evaluate their performance. Three types of metrics for US hospitals were examined here to understand how they relate to each other (there are many other metrics on which hospitals can be compared). Results show that patient experience and patient loyalty are only weakly related to the other hospital metrics, suggesting that improving the patient experience will have little impact on the other measures (health outcomes, process of care).

 

Source: Evaluating Hospital Quality using Patient Experience, Health Outcomes and Process of Care Measures by bobehayes

The Modern Day Software Engineer: Less Coding And More Creating

Last week, I asked the CEO of a startup company in Toronto, “How do you define a software engineer?”

She replied, “Someone who makes sh*t work.”

This used to be all you needed. If your online web app starts to crash, hire a software engineer to fix the problem.

If your app needs a new feature, hire a software engineer to build it (AKA weave together lines of code to make sh*t work).

We need to stop referring to an engineer as an ‘engineer’. CEOs of startups need to stop saying ‘we need more engineers’.

The modern day ‘engineer’ cannot simply be an engineer. They need to be a renaissance person; a person who is well versed in multiple aspects of life.

Your job as a software engineer cannot be to simply ‘write code’. That’s like saying a Canadian lawyer’s job is to speak English.

English and code are means of doing the real job: Produce value that society wants.

So, to start pumping out code to produce a new feature simply because it’s on the ‘new features list’ is mindless. You can’t treat code as an end in itself.

The modern day engineer (MDE) needs to understand the modern day world. The MDE cannot simply sit in a room alone and write code.

The MDE needs to understand the social and business consequences of creating and releasing a product.

The MDE cannot leave it up to the CEOs and marketers and business buffs to come up with the ‘why’ for a new product.

Everyone should be involved in the ‘why’, as long as they are in the ‘now’.

New frameworks that emphasize less code and more productivity are being released almost every day.

We are slowly moving towards a future where writing code will be so easy that it would be unimpressive to be someone who only writes code.

In the future Google Translate will probably add JavaScript and Python (and other programming languages) to their list of languages. Now all you have to do is type in English and get a JavaScript translation. In fact, who needs a programming language like JavaScript or Python when you can now use English to directly tell a computer what to do?

Consequently, code becomes a language that can be spoken by all. So, to write good code, you need to be more than an ‘engineer’. You need to be a renaissance person and a person who understands the wishes, wants, emotions and needs of the modern day world.

Today (October 22nd, 2015), I was at a TD Canada Trust networking event designed for ‘tech professionals’ in Waterloo ON, Canada. The purpose of this event was to demo new ‘tech’ (the word has so many meanings nowadays) products to young students and professionals. The banking industry is in the process of a full makeover, if you didn’t know. One of the TD guys, let’s call him Julio, was telling me a little summary of what TD was (and is) trying to do with its recruitment process.

Let me give you the gist of what he said:

“We have business professionals (business analysts, etc) whose job is to understand the 5 W’s of the product. Also, we have engineers/developers/programmers who just write code. What we are now looking for is someone who can engage with others as well as do the technical stuff.”

His words were wise, but I was not sure if he fully understood the implications of what he was talking about. This is the direction we have been heading for quite some time now, but it’s about time we kick things up a notch.

Expect more of this to come.
Expect hybrid roles.
Expect it to become easier and easier to write code.
Expect to be valued for your social awareness paired with your ability to make sh*t work.

Perhaps software tech is at the beginning of a new Renaissance era.

*View the original post here*

Twitter: @nikhil_says

Email: nikhil38@gmail.com

Source by nbhaskar

7 Things to Look before Picking Your Data Discovery Vendor


Data discovery tools, also called data visualization tools and sometimes data analytics tools, are the talk of the town and reasonably hot today. With all the hype about big data, companies are busy planning their big-data strategies and are on the lookout for great tools to recruit into them. One thing to note is that we don’t change data discovery tool vendors every day; once a tool gets into our systems, it eventually becomes part of our big-data DNA. So, we should put real thought into what goes into picking a data discovery/visualization/analysis tool.

So, what would you consider important while picking your data discovery tool vendor? I interviewed a handful of data scientists and data managers at several companies and prioritized their findings into the top 7 things to consider before you go out and pick your big-data discovery tool vendor.

Here are my 7 thoughts, in no particular order:

1. Not a major jump from what I already have: Learning a new system takes time, effort, resources and cycles, so the faster the ramp-up and the shorter the learning curve, the better. Sure, many tools will be eons apart from what you are used to, but that should not deter you from evaluating them; just score tools with a minimal learning curve a bit higher. One thing to check is whether you can do routine tasks with the new tool in much the same way you do them today.

2. Helps me do more with my data: There will be moments when you realize a tool can do a lot more than what you are used to or currently capable of. This is another check to include in your equation: the richer the feature set and capabilities within the discovery tool, and the more it lets you do with your data, the better. You should be able to investigate your data more closely and along more dimensions, which ultimately helps you understand it better.

3. Integrates well with my big data: First things first, you need a tool that can at least talk to your data. It should mingle well with your data layouts and structures without too many time-consuming steps. A good tool makes data integration almost seamless; if you have to jump through hoops or cut corners to make integration happen, you are probably looking at the wrong tool. So, put your data integration team to work and make sure data integration is a non-issue with the tool you are evaluating.

4. Friendly with outside data I might include as well: Many times, it is not only about your data. Sometimes you need to access external data and evaluate its relationship with your own; those use cases must be checked as well. How easy is it to include external structured and unstructured data? The bigger the vendor’s product integration roadmap, the easier it will be for the tool to connect with external resources. Your preferred tool should integrate seamlessly with the data sets common in your industry; social data, industry data and third-party application data are some examples. So, ask your vendor how well their tool mingles with outside data sets.

5. Scalability of the platform: Sure, the tool you are evaluating may do wonders with data and have a sweet feature set, but will it scale as you grow? This is an important consideration, as with any corporate tool. As your business grows, so will its data and associated dependencies; will your discovery tool grow with it? This must be part of your evaluation score for any tool you plan to recruit for your big-data discovery needs. So, get on a call with the vendor’s technical team and grill them on how the tool will handle growing data. You don’t want to partner with a tool that will break in the future as your business grows.

6. Vendor’s vision is in line with our vision: The above 5 measures are fairly standard and define the basic functionality a good tool should offer, and it is no surprise that most tools will have their own interpretation of them. One key thing to examine on the strategic front is the vendor’s vision for the company and the tool. A tool can serve you well today, with a boatload of features and friendliness toward your data and outside data, but will it evolve along a strategy consistent with yours? As odd as it sounds, this is a reality you should consider. A vendor focused only on healthcare will have limitations for companies using the tool in the insurance sector; a tool that handles only the clever visualization piece may fall short for companies expecting some automation as part of the core tool’s evolution. So, it is important to understand the tool company’s product vision; that will help you judge whether it will keep delivering business value tomorrow, the day after and in the foreseeable future.

7. Awesome import/export tools to keep my data and analysis free: Another important thing to note is product stickiness. A well-designed product should not keep customers sticky by holding their data hostage; it should bank on its features, usability and data-driven design. Data and the knowledge derived from it should be easily importable and exportable in the most common formats (CSV, XML, etc.). This keeps the tool ready to integrate with third-party services that emerge as the market evolves, and it plays an instrumental role in moving your data around as you start dealing with new formats and new reporting tools that leverage your data discovery findings.

I am certain that by the end of these 7 points you have thought of several more things to keep in mind before picking a good data discovery tool. Feel free to email me your findings and I will keep adding them to the list.

Source: 7 Things to Look before Picking Your Data Discovery Vendor by v1shal