Nov 29, 18: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

image
Pacman  Source

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ AnalyticsWeek BYTES]

>> Data Scientists and the Practice of Data Science by bobehayes

>> Is Big Data The Most Hyped Technology Ever? by bobehayes

>> Apr 19, 18: #AnalyticsClub #Newsletter (Events, Tips, News & more..) by admin

Wanna write? Click Here

[ NEWS BYTES]

>> Dairy Farmers of America invests in artificial intelligence – Fence Post Under Artificial Intelligence

>> Global Big Data Security Market Global Market Demand, Growth, Opportunities, Top Key Players and Forecast to 2025 – Campus Telegraph Under Big Data Security

>> Another View: Girls in STEM statistics are dismal, but here’s how we’re working to change that – Foster’s Daily Democrat Under Statistics

More NEWS ? Click Here

[ FEATURED COURSE]

CPSC 540 Machine Learning

image

Machine learning (ML) is one of the fastest growing areas of science. It is largely responsible for the rise of giant data companies such as Google, and it has been central to the development of lucrative products, such … more

[ FEATURED READ]

How to Create a Mind: The Secret of Human Thought Revealed

image

Ray Kurzweil is arguably today’s most influential—and often controversial—futurist. In How to Create a Mind, Kurzweil presents a provocative exploration of the most important project in human-machine civilization—reverse… more

[ TIPS & TRICKS OF THE WEEK]

Analytics Strategy that is Startup Compliant
With the right tools, capturing data is easy, but not being able to handle that data could lead to chaos. One of the most reliable startup strategies for adopting data analytics is TUM, or The Ultimate Metric. This is the metric that matters the most to your startup. Some advantages of TUM: it answers the most important business question, it cleans up your goals, it inspires innovation and it helps you understand the entire quantified business.

[ DATA SCIENCE Q&A]

Q:What does NLP stand for?
A: * Natural Language Processing: the interaction between computers and human (natural) languages
* Involves natural language understanding

Major tasks:
– Machine translation
– Question answering: “what’s the capital of Canada?”
– Sentiment analysis: extract subjective information from a set of documents to identify trends or public opinion in social media
– Information retrieval
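As an illustration of the sentiment analysis task (not part of the original answer), here is a minimal Python sketch using NLTK’s VADER analyzer; the example sentences and the 0.05 threshold are arbitrary:

import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")  # one-time download of the sentiment lexicon

analyzer = SentimentIntensityAnalyzer()
docs = ["The new release is fantastic!", "Support was slow and unhelpful."]  # hypothetical documents
for doc in docs:
    scores = analyzer.polarity_scores(doc)  # returns neg/neu/pos/compound scores
    label = "positive" if scores["compound"] >= 0.05 else "negative" if scores["compound"] <= -0.05 else "neutral"
    print(doc, "->", label, round(scores["compound"], 2))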

Source

[ VIDEO OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData #Podcast with @DavidRose, @DittoLabs

Subscribe to  Youtube

[ QUOTE OF THE WEEK]

Numbers have an important story to tell. They rely on you to give them a voice. – Stephen Few

[ PODCAST OF THE WEEK]

@CRGutowski from @GE_Digital on Using #Analytics to #Transform Sales #FutureOfData #Podcast

Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

Akamai analyzes 75 million events per day to better target advertisements.

Sourced from: Analytics.CLUB #WEB Newsletter

Dickson Tang (@imDicksonT) on Building a Career Edge over Robots using #3iFramework #JobsOfFuture #Podcast

 

In this podcast Dickson Tang shared his perspective on building a future-ready, open-mindset organization by working on its 3 Is: Individual, Infrastructure and Ideas. He discussed the various organization types and individuals who could benefit from this 3iFramework, elaborated in detail in his book “Leadership for future of work: 9 ways to build career edge over robots with human creativity”. This podcast is great for anyone seeking to learn how to be an open, innovative change agent within an organization.

Dickson’s Book:

Leadership for future of work: 9 ways to build career edge over robots with human creativity by Dickson Tang amzn.to/2McxeIS

Dickson’s Recommended Read:
The Creative Economy: How People Make Money From Ideas by John Howkins amzn.to/2MdLotA

Podcast Link:
iTunes: math.im/jofitunes
Youtube: math.im/jofyoutube

Dickson’s BIO:
Dickson Tang is the author of Leadership for future of work: 9 ways to build career edge over robots with human creativity. He helps senior leaders (CEO, MD and HR) build creative and effective teams in preparation for the future/robot economy. Dickson is a leadership ideas expert, focusing on how leadership will evolve in the future of work. He has 15+ years of experience in management, business consulting, marketing, organizational strategies and training & development, including corporate experience with several leading companies such as KPMG Advisory, Gartner and Netscape Inc.

Dickson’s expertise in leadership, creativity and the future of work has earned him invitations and opportunities to work with leaders and professionals from organizations such as Cartier, CITIC Telecom, DHL, Exterran, Hypertherm, JVC Kenwood, Mannheim Business School, Montblanc and others.

He lives in Singapore, Asia.
LinkedIN: www.linkedin.com/in/imDicksonT
Twitter: www.twitter.com/imDicksonT
Facebook: www.facebook.com/imDicksonT
Youtube: www.youtube.com/channel/UC2b4BUeMnPP0fAzGLyEOuxQ

About #Podcast:
#JobsOfFuture was created to spark the conversation around the future of work, worker and workplace. This podcast invites movers and shakers in the industry who are shaping, or helping us understand, the transformation of work.

Wanna Join?
If you or anyone you know wants to join in,
Register your interest @ analyticsweek.com/

Want to sponsor?
Email us @ info@analyticsweek.com

Keywords:
#JobsOfFuture #FutureOfWork #FutureOfWorker #FutureOfWorkplace #Work #Worker #Workplace

Source: Dickson Tang (@imDicksonT) on Building a Career Edge over Robots using #3iFramework #JobsOfFuture #Podcast

The Cost Of Too Much Data

Global Data
I came across this interesting infographic on “The Cost Of Too Much Data” from Lattice. It sheds some light on the dollars lost from the lack of a big-data initiative, elaborating on the spread of data-generation sources and how productivity and dollars are lost when big data is not implemented.

The Cost Of Too Much Data Infographic

Like this infographic? Get more sales and marketing information here: http://www.lattice-engines.com/resource-center/knowledge-hub

Originally Posted at: The Cost Of Too Much Data by v1shal

Nov 22, 18: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

image
Data Accuracy  Source

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ AnalyticsWeek BYTES]

>> Map of US Hospitals and their Health Outcome Metrics by bobehayes

>> Tips To Hunt For That Great Travel Deal  by v1shal

>> Google Cloud security updates for SEO before 2018 GDPR to change business data interactions! by thomassujain

Wanna write? Click Here

[ NEWS BYTES]

>> Enlisting Machine Learning to Fight Data Center Outages – Data Center Knowledge Under Data Center

>> CHART Study Shows New Hot Spots in Mental Health, Substance Misuse Crisis – BioSpace (press release) (blog) Under Health Analytics

>> Big data service gives charterers estimate of vessels’ fuel efficiency – Tanker Shipping and Trade Under Big Data

More NEWS ? Click Here

[ FEATURED COURSE]

CPSC 540 Machine Learning

image

Machine learning (ML) is one of the fastest growing areas of science. It is largely responsible for the rise of giant data companies such as Google, and it has been central to the development of lucrative products, such … more

[ FEATURED READ]

Data Science from Scratch: First Principles with Python

image

Data science libraries, frameworks, modules, and toolkits are great for doing data science, but they’re also a good way to dive into the discipline without actually understanding data science. In this book, you’ll learn … more

[ TIPS & TRICKS OF THE WEEK]

Data Have Meaning
We live in a Big Data world in which everything is quantified. While the emphasis of Big Data has been focused on distinguishing the three characteristics of data (the infamous three Vs), we need to be cognizant of the fact that data have meaning. That is, the numbers in your data represent something of interest, an outcome that is important to your business. The meaning of those numbers is about the veracity of your data.

[ DATA SCIENCE Q&A]

Q:Name a few famous APIs (for instance Google Search)
A: Google API (Google Analytics, Picasa), Twitter API (interact with Twitter functions), GitHub API, LinkedIn API (user data)…
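As a small illustration (not part of the original answer), a public REST API such as GitHub’s can be queried in a few lines of Python; the username below is just a placeholder:

import requests

# GitHub's public REST API: fetch basic profile data for a user (no auth needed for low-volume calls)
response = requests.get("https://api.github.com/users/octocat", timeout=10)
response.raise_for_status()
profile = response.json()
print(profile["name"], "-", profile["public_repos"], "public repos")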
Source

[ VIDEO OF THE WEEK]

@AnalyticsWeek Panel Discussion: Big Data Analytics

Subscribe to  Youtube

[ QUOTE OF THE WEEK]

Numbers have an important story to tell. They rely on you to give them a voice. – Stephen Few

[ PODCAST OF THE WEEK]

Dave Ulrich (@dave_ulrich) talks about role / responsibility of HR in #FutureOfWork #JobsOfFuture #Podcast

Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

IDC estimates that by 2020, business transactions on the internet – business-to-business and business-to-consumer – will reach 450 billion per day.

Sourced from: Analytics.CLUB #WEB Newsletter

Democratizing Self-Service Cognitive Computing Analytics with Machine Learning

There are few areas of the current data landscape that the self-service movement has not altered and positioned firmly within the grasp of the enterprise and its myriad users, from novices to the most accomplished IT personnel.

One can argue that cognitive computing and its self-service analytics have always been a forerunner of this effort, as their capability of integrating and analyzing disparate sources of big data to deliver rapid results with explanations and recommendations proves.

Historically, machine learning and its penchant for predictive analytics has functioned as the most accessible of cognitive computing technologies that include natural language processing, neural networks, semantic modeling and vocabularies, and other aspects of artificial intelligence. According to indico co-founder and CEO Slater Victoroff, however, the crux of machine learning’s utility might actually revolve around deep learning and, specifically, transfer learning.

By accessing these technologies at scale via the cloud, enterprises can now deploy cognitive computing analytics on sets of big data without data scientists and the inordinate volumes of data required to develop the models and algorithms that function at the core of machine learning.

From Machine Learning to Deep Learning
The cost, scale, and agility advantages of the cloud have resulted in numerous Machine Learning-as-a-Service vendors, some of which substantially enhance enterprise utility with Deep Learning-as-a-Service. Machine learning is widely conceived of as a subset of predictive analytics in which existing models of algorithms are informed by the results of previous ones, so that future models are formed quicker to tailor analytics according to use case or data type. According to Slater, deep learning algorithms and models “result in better accuracies for a wide variety of analytical tasks.” Largely considered a subset of machine learning, deep learning is understood as a more mature form of the former. That difference is conceptualized in multiple ways, including “instead of trying to handcraft specific rules to solve a given problem (relying on expert knowledge), you let the computer solve it (deep learning approach),” Slater mentioned.

Transfer Learning and Scalable Advantages
The parallel is completed with an analogy of machine learning likened to an infant and deep learning likened to a child. Whereas an infant must be taught everything, “a child has automatically learnt some approximate notions of what things are, and if you can build on these, you can get to higher level concepts much more efficiently,” Slater commented. “This is the deep learning approach.” That distinction in efficiency is critical in terms of scale and data science requirements, as there is a “100 to 100,000 ratio” according to Slater on the amounts of data required to form the aforementioned “concepts” (modeling and algorithm principles to solve business problems) with a deep learning approach versus a machine learning one. That difference is accounted for by transfer learning, a subset of deep learning that “lets you leverage generalized concepts of knowledge when solving new problems, so you don’t have to start from scratch,” Slater revealed. “This means that your training data sets can be one, two or even three orders of magnitude smaller in size and this makes a big difference in practical terms.”
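To make the transfer learning idea concrete, here is a generic sketch in Keras; it is my own illustration of the technique, not indico’s implementation, and the layer sizes and class count are arbitrary:

import tensorflow as tf

# Start from a network pre-trained on a large, generic dataset (ImageNet)...
base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False, pooling="avg")
base.trainable = False  # freeze the generalized "concepts" learned elsewhere

# ...and train only a small task-specific head on the (much smaller) new dataset
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),  # e.g., a hypothetical two-class business problem
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=5)  # train_images/train_labels are placeholders

Because the frozen base already encodes general concepts, the trainable head can be fit on orders of magnitude less data than training the whole network from scratch.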

Image and Textual Analytics on “Messy” Unstructured Data
Those practical terms expressly denote the difference between staffing multiple data scientists to formulate algorithms on exorbitant sets of big data, versus leveraging a library of preset models of service providers tailored to vertical industries and use cases. These models are also readily modified by competent developers. Providers such as indico offer these solutions for companies tasked with analyzing the most challenging “messy data sets”, as characterized by Slater. In fact, the vast forms of unstructured text and image analytics required of unstructured data is ideal for deep learning and transfer learning. “Messy data, by nature, is harder to cope with using handcrafted rules,” Slater observed. “In the case of images things like image quality, lighting conditions, etc. introduce noise. Sarcasm, double negatives, and slang are examples of noise in the text domain. Deep learning allows us to effectively work with real world noisy data and still extract meaningful signal.”

The foregoing library of models utilizing this technology can derive insight from an assortment of textual and image data including characteristics of personality, emotions, various languages, content filtering, and many more. These cognitive computing analytic capabilities are primed for social media monitoring and sentiment analysis in particular for verticals such as finance, marketing, public relations, and others.

Sentiment Analysis and Natural Language Processing
The difference with a deep learning approach is both in the rapidity and the granular nature of the analytics performed. Conventional natural language processing tools are adept at identifying specific words and spellings, and at determining their meaning in relation to additional vocabularies and taxonomies. NLP informed by deep learning can expand this utility to include entire phrases and a plethora of subtleties such as humor, sarcasm, irony and meaning that is implicit to native speakers of a particular language. Such accuracy is pivotal to gauging sentiment analysis.
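By way of contrast with rule- and lexicon-based tools, a pretrained deep learning model can be applied to sentiment analysis through an off-the-shelf pipeline; this Hugging Face example is illustrative only and is not a tool named in the article:

from transformers import pipeline

# Downloads a pretrained sentiment model on first use; no hand-crafted rules are involved
classifier = pipeline("sentiment-analysis")
results = classifier(["The support team was fantastic!", "Great, another outage. Just what I needed."])
print(results)  # each result is a dict like {"label": "POSITIVE", "score": 0.99}

How well such a model handles sarcasm or slang depends on the model and its training data, which is exactly the gap deep learning approaches aim to close.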

Additionally, the necessity of image analysis as part of sentiment analysis and other forms of big data analytics is only increasing. Slater characterized this propensity of deep learning in terms of popular social media platforms such as Twitter, in which images are frequently incorporated. Image analysis can detect when someone is holding up a “guitar, and writes by it ‘oh, wow’,” Slater said. Without that image analysis, organizations lose the context of the text and the meaning of the entire post. Moreover, image analysis technologies can also discern meaning in various facial expressions, gestures, and other aspects of text that yield insight.

Cognitive Computing Analytics for All
The provisioning of cognitive computing analytics via MLaaS and DLaaS illustrates once again exactly how pervasive the self-service movement is. It also demonstrates the democratization of analytics and the fact that with contemporary technology, data scientists and massive sets of big data (augmented by expensive physical infrastructure) are not required to reap the benefits of some of the fundamental principles of cognitive computing and other applications of semantic technologies. Those technologies and their applications, in turn, are responsible for increasing the very power of analytics and of data-driven processes themselves.

In fact, according to Cambridge Semantics VP of Marketing John Rueter, many of the self-service facets of analytics that are powered by semantic technologies “are built for the way that we think and the way that we analyze information. Now, we’re no longer held hostage by the technology and by solving problems based upon a technological approach. We’re actually addressing problems with an approach that is more aligned with the way we think, process, and do analysis.”

Source

Nov 15, 18: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

image
Tour of Accounting  Source

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ AnalyticsWeek BYTES]

>> 6 Big Data Analytics Use Cases for Healthcare IT by analyticsweekpick

>> Sisense Hunch™ – Leadership Through Radical Innovation by analyticsweek

>> Next-generation supply & demand forecasting: How machine learning is helping retailers to save millions by analyticsweekpick

Wanna write? Click Here

[ NEWS BYTES]

>> Data Is The Foundation For Artificial Intelligence And Machine Learning – Forbes Under Machine Learning

>> How to Get Into “Internet of Things” Investments – Banyan Hill Publishing Under Internet Of Things

>> Beyond Big Data: The extreme data economy – Networks Asia Under Big Data

More NEWS ? Click Here

[ FEATURED COURSE]

Python for Beginners with Examples

image

A practical Python course for beginners with examples and exercises…. more

[ FEATURED READ]

The Future of the Professions: How Technology Will Transform the Work of Human Experts

image

This book predicts the decline of today’s professions and describes the people and systems that will replace them. In an Internet society, according to Richard Susskind and Daniel Susskind, we will neither need nor want … more

[ TIPS & TRICKS OF THE WEEK]

Data aids, not replace judgement
Data is a tool and means to help build a consensus to facilitate human decision-making but not replace it. Analysis converts data into information, information via context leads to insight. Insights lead to decision making which ultimately leads to outcomes that brings value. So, data is just the start, context and intuition plays a role.

[ DATA SCIENCE Q&A]

Q:Explain selection bias (with regard to a dataset, not variable selection). Why is it important? How can data management procedures such as missing data handling make it worse?
A: * Selection of individuals, groups or data for analysis in such a way that proper randomization is not achieved
Types:
– Sampling bias: systematic error due to a non-random sample of a population causing some members to be less likely to be included than others
– Time interval: a trial may be terminated early at an extreme value (for ethical reasons), but the extreme value is likely to be reached by the variable with the largest variance, even if all the variables have similar means
– Data: “cherry picking”, when specific subsets of the data are chosen to support a conclusion (citing examples of plane crashes as evidence of airline flight being unsafe, while ignoring the far more common examples of flights that complete safely)
– Studies: performing experiments and reporting only the most favorable results
– Can lead to inaccurate or even erroneous conclusions
– Statistical methods can generally not overcome it

Why can missing data handling make it worse?
– Example: individuals who know or suspect that they are HIV positive are less likely to participate in HIV surveys
– Missing data handling (e.g., imputing from the observed respondents, who are mostly HIV negative) will amplify this effect
– Prevalence estimates will be inaccurate
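A small simulation of the HIV-survey example above makes the point; the numbers are made up purely for illustration:

import numpy as np

rng = np.random.default_rng(0)
true_prevalence = 0.10
status = rng.random(100_000) < true_prevalence            # True = HIV positive

# Selection bias: positive individuals are less likely to respond to the survey
response_prob = np.where(status, 0.30, 0.90)
responded = rng.random(status.size) < response_prob

naive_estimate = status[responded].mean()                 # complete-case analysis
print(f"True prevalence:  {true_prevalence:.3f}")
print(f"Naive estimate:   {naive_estimate:.3f}")          # biased low

# "Handling" the missing data by mean imputation does not fix the bias:
imputed = status.astype(float)
imputed[~responded] = naive_estimate                      # fill non-respondents with the biased mean
print(f"After imputation: {imputed.mean():.3f}")          # still biased low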

Source

[ VIDEO OF THE WEEK]

Dave Ulrich (@dave_ulrich) talks about role / responsibility of HR in #FutureOfWork #JobsOfFuture #Podcast

Subscribe to  Youtube

[ QUOTE OF THE WEEK]

Without big data, you are blind and deaf and in the middle of a freeway. – Geoffrey Moore

[ PODCAST OF THE WEEK]

Discussing #InfoSec with @travturn, @hrbrmstr(@rapid7) @thebearconomist(@boozallen) @yaxa_io

Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

Retailers who leverage the full power of big data could increase their operating margins by as much as 60%.

Sourced from: Analytics.CLUB #WEB Newsletter

Hadoop demand falls as other big data tech rises

Hadoop makes all the big data noise. Too bad it’s not also getting the big data deployments.

Indeed, though Hadoop has often served as shorthand for big data, this increasingly seems like a mistake. According to a new Gartner report, “despite continuing enthusiasm for the big data phenomenon, demand for Hadoop specifically is not accelerating.”

According to the survey, most enterprises have “no plans at this time” to invest in Hadoop and a mere 26 percent have either deployed or are piloting Hadoop. They are, however, actively embracing other big data technologies.

‘Fairly anemic’ interest in Hadoop

For a variety of reasons, with a lack of Hadoop skills as the biggest challenge (57 percent), enterprises aren’t falling in love with Hadoop.

Indeed, as Gartner analyst Merv Adrian suggests in a new Gartner report (“Survey Analysis: Hadoop Adoption Drivers and Challenges“):

With such large incidence of organizations with no plans or already on their Hadoop journey, future demand for Hadoop looks fairly anemic over at least the next 24 months. Moreover, the lack of near-term plans for Hadoop adoption suggest that, despite continuing enthusiasm for the big data phenomenon, demand for Hadoop specifically is not accelerating.

How anemic? Think 54 percent with zero plans to use Hadoop, plus another 20 percent that at best will get to experimenting with Hadoop in the next year:

Gartner Hadoop chart (Source: Gartner)


This doesn’t bode well for Hadoop’s biggest vendors. After all, as Gartner analyst Nick Heudecker posits, “Hadoop [is] overkill for the problems the business[es surveyed] face, implying the opportunity costs of implementing Hadoop [are] too high relative to the expected benefit.”

Selling the future of Hadoop

By some measures, this shortfall of interest hasn’t yet caught up with the top two Hadoop vendors, Cloudera and Hortonworks.

Cloudera, after all, will reportedly clear nearly $200 million in revenue in 2015, with a valuation of $5 billion, according to Manhattan Venture Partners. While the company is nowhere near profitability, it’s not struggling to grow and will roughly double revenue this year.

Hortonworks, for its part, just nailed a strong quarter. Annual billings grew 99 percent to $28.1 million, even as revenue exploded 167 percent to $22.8 million. To reach these numbers, Hortonworks added 105 new customers, up from 99 new customers in the previous quarter.

Still, there are signs that the hype is fading.

Hortonworks, despite beating analyst expectations handily last quarter, continues to fall short of the $1 billion-plus valuation it held at its last round of private funding. As I’ve argued, the company will struggle to justify a billion-dollar price tag due to its pure-play open source business model.

But according to the Gartner data, it may also struggle due to “fairly anemic” demand for Hadoop.

There’s a big mitigating factor. Hadoop vendors will almost surely languish — unless they’re willing to embrace adjacent big data technologies that complement Hadoop. As it happens, both leaders already have.

For example, even as Apache Spark has eaten into MapReduce interest, both companies have climbed aboard the Spark train.

But more is needed. Because big data is much more than Hadoop and its ecosystem.

For example, though the media has equated big data with Hadoop for years, data scientists have not. As Silicon Angle uncovered back in 2012 from its analysis of Twitter conversations, when data professionals talked about big data, they actually talked about NoSQL technologies like MongoDB as much as or more than Hadoop:

MongoDB vs. Hadoop chart (Source: Silicon Angle)

Today those same data professionals are likely to be using MongoDB and Cassandra, both among the world’s top 10 most popular databases, rather than HBase, which is the database of choice for Cloudera and Hortonworks but ranks a distant #15 in terms of overall popularity, according to DB-Engines.

Buying an ecosystem

Let’s look at Gartner’s data again, this time comparing big data adoption and Hadoop adoption:

Gartner Hadoop vs. big data chart (Source: Gartner)

A significant percentage of the delta between these two almost certainly derives from other, highly popular big data technologies such as MongoDB, Cassandra, Apache Storm, etc. They don’t fit into the current Hadoop ecosystem, but Cloudera and Hortonworks need to find ways to embrace them, or risk running out of Hadoop runway.

Nor is that the only risk.

As Aerospike executive and former Wall Street analyst Peter Goldmacher told me, a major problem for Hortonworks and Cloudera is that both are spending too much money to court customers. (As strong as Hortonworks’ billings growth was, it doubled its loss on the way to that growth as it spent heavily to grow sales.)

While these companies currently have a lead in terms of distribution, Goldmacher warns that Oracle or another incumbent could acquire one of them and thereby largely lobotomize the other because of its superior claim on CIO wallets and broad-based suite offerings.

Neither Cloudera nor Hortonworks can offer that suite.

But what they can do, Goldmacher goes on, is expand their own big data footprint. For example, if Cloudera were to use its $4-to-5 billion valuation to acquire a NoSQL vendor, “All of a sudden other NoSQL vendors and Hortonworks are screwed because Cloudera would have the makings of a complete architecture.”

In other words, to survive long term, Hadoop’s dominant vendors need to move beyond Hadoop — and fast.

Originally posted by Matt Asay at: http://www.infoworld.com/article/2922720/big-data/hadoop-demand-falls-as-other-big-data-tech-rises.html

Source: Hadoop demand falls as other big data tech rises

3 Big Data Stocks Worth Considering

Big data is a trend that I’ve followed for some time now, and even though it’s still in its early stages, I expect it to continue to be a game changer as we move further into the future.

As our Internet footprint has grown, all the data we create — from credit cards to passwords and pictures uploaded on Instagram — has to be managed somehow.

This data is too vast to be entered into traditional relational databases, so more powerful tools are needed for companies to utilize the information to analyze customers’ behavior and predict what they may do in the future.

Big data makes it all possible, and as a result is one of the dominant themes for technology growth investing. We’ve invested in several of these types of companies in my GameChangers service over the years, one of which we’ll talk more about in just a moment.

First, let’s start with two of the biggest and best big data names out there. They’re among the best pure plays, and while I’m not sure the time is quite right to invest in either right now, they are both garnering some buzz in the tech world.

Big Data Stocks: Splunk (SPLK)

The first is Splunk (SPLK). Splunk’s flagship product is Splunk Enterprise, which at its core is a proprietary machine data engine that enables dynamic schema creation on the fly. Users can then run queries on data without having to understand the structure of the information prior to collection and indexing.

Faster, streamlined processes mean more efficient (and more profitable) businesses.

While Splunk is very small in terms of revenues, with January 2015 fiscal year sales of just $451 million, it is growing rapidly, and I’m keeping an eye on the name as it may present a strong opportunity down the road.

However, I do not want to overpay for it. Splunk brings effective technology to the table that is gaining market acceptance, and has strong security software partners with its recent entry into security analytics. At the right price, the stock could also be a takeover candidate for a larger IT company looking to enhance its Big Data presence.

Big Data Stocks: Tableau Software (DATA)

Another name on my radar is Tableau Software (DATA), which performs functions similar to Splunk’s. Its primary product, VizQL, translates drag-and-drop actions into data queries. In this way, the company puts data directly in the hands of decision makers, without first having to go through technical specialists.

In fact, the company believes all employees, no matter what their rank in the company, can use their product, leading to the democratization of data.

DATA is also growing rapidly, even faster than Splunk. Revenues were up 78% in 2014, and 75% in the first quarter of 2015, including license revenue growth of more than 70%. That rate is expected to slow somewhat, with revenues for all of 2015 estimated to increase to a still strong 50%.

Tableau stock is also very expensive, trading at 12X expected 2015 revenues of $618 million and close to 300X projected EPS of 40 cents for the year. DATA is a little risky to buy at current levels, but it is a name to keep an eye on in any pullback.

Big Data Stocks: Red Hat (RHT)

The company we made money on earlier this year in my GameChangers service is Red Hat (RHT). We booked a 15% profit in just a few months after it popped 11% on fourth-quarter earnings.

Red Hat is the world’s leading provider of open-source solutions, providing software to 90% of Fortune 500 companies. Some of RHT’s customers include well-known names like Sprint (S), Adobe Systems (ADBE) and Cigna Corporation (CI).

Management’s goal is to become the undisputed leader of enterprise cloud computing, and it sees its popular Linux operating system as a way to the top. If RHT is successful — as I expect it will be — Red Hat should have a lengthy period of expanded growth as corporations increasingly move into the cloud.

Red Hat’s operating results had always clearly demonstrated that its solutions were gaining greater acceptance in IT departments, as revenues had more than doubled in the five years between 2009 and 2014, from $748 million to $1.53 billion. I had expected to see the strong sales growth continue throughout 2015, and it did. As I mentioned, impressive fiscal fourth-quarter results sent the shares 11% higher.

I recommended my subscribers sell their stake in the company at the end of March because I believed any further near-term upside was limited. Since then, shares have traded mostly between $75 and $80. It is now at the very top of that range and may be on the verge of breaking above it after the company reported fiscal first-quarter results last night.

Although orders were a little slow, RHT beat estimates on both the top and bottom lines in the first quarter. Earnings of 44 cents per share were up 29% quarter-over-quarter, besting estimates on the Street for earnings of 41 cents. Revenue climbed 14% to $481 million, while analysts had been expecting $472.6 million.

At this point, RHT is now back in uncharted territory, climbing to a new 52-week high earlier today. This is a company with plenty of growth opportunities ahead, and while growth may slow a bit in the near term following the stock’s impressive climb so far this year, RHT stands to gain as corporations continue to adopt additional cloud technologies.

To read the original article on InvestorPlace, click here.

Originally Posted at: 3 Big Data Stocks Worth Considering by analyticsweekpick

Nov 08, 18: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

image
Data security  Source

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ AnalyticsWeek BYTES]

>> Oracle zeroes in on Hadoop data with analytics tool by analyticsweekpick

>> 7 Lessons From Apple To Small Business by v1shal

>> The Business of Data by analyticsweekpick

Wanna write? Click Here

[ NEWS BYTES]

>> Can artificial intelligence help stop religious violence? – BBC News Under Artificial Intelligence

>> How to Leverage True Edge Flexibility and Overcome Operational Challenges – Data Center Frontier (blog) Under Data Center

>> Big Data Analytics in Healthcare Market Global 2018: Sales, Market Size, Market Benefits, Upcoming Developments … – Alter Times Under Prescriptive Analytics

More NEWS ? Click Here

[ FEATURED COURSE]

Statistical Thinking and Data Analysis

image

This course is an introduction to statistical data analysis. Topics are chosen from applied probability, sampling, estimation, hypothesis testing, linear regression, analysis of variance, categorical data analysis, and n… more

[ FEATURED READ]

Python for Data Analysis: Data Wrangling with Pandas, NumPy, and IPython

image

Python for Data Analysis is concerned with the nuts and bolts of manipulating, processing, cleaning, and crunching data in Python. It is also a practical, modern introduction to scientific computing in Python, tailored f… more

[ TIPS & TRICKS OF THE WEEK]

Data Have Meaning
We live in a Big Data world in which everything is quantified. While the emphasis of Big Data has been focused on distinguishing the three characteristics of data (the infamous three Vs), we need to be cognizant of the fact that data have meaning. That is, the numbers in your data represent something of interest, an outcome that is important to your business. The meaning of those numbers is about the veracity of your data.

[ DATA SCIENCE Q&A]

Q:Is it better to design robust or accurate algorithms?
A: A. The ultimate goal is to design systems with good generalization capacity, that is, systems that correctly identify patterns in data instances not seen before
B. The generalization performance of a learning system strongly depends on the complexity of the model assumed
C. If the model is too simple, the system can only capture the actual data regularities in a rough manner. In this case, the system has poor generalization properties and is said to suffer from underfitting
D. By contrast, when the model is too complex, the system can identify accidental patterns in the training data that need not be present in the test set. These spurious patterns can be the result of random fluctuations or of measurement errors during the data collection process. In this case, the generalization capacity of the learning system is also poor. The learning system is said to be affected by overfitting
E. Spurious patterns, which are only present by accident in the data, tend to have complex forms. This is the idea behind the principle of Occam’s razor for avoiding overfitting: simpler models are preferred if more complex models do not significantly improve the quality of the description for the observations
Quick response: Occam’s Razor. It depends on the learning task. Choose the right balance
F. Ensemble learning can help balance bias and variance (several weak learners combined = a strong learner)
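A toy illustration of the underfitting/overfitting trade-off described above; the data and polynomial degrees are arbitrary:

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = np.sort(rng.uniform(0, 3, 60)).reshape(-1, 1)
y = np.sin(2 * X).ravel() + rng.normal(scale=0.2, size=60)   # noisy, mildly non-linear signal

for degree in (1, 4, 15):                                    # too simple, reasonable, too complex
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    mse = -cross_val_score(model, X, y, cv=5, scoring="neg_mean_squared_error").mean()
    print(f"degree={degree:2d}  cross-validated MSE={mse:.3f}")
# Expect the middle degree to do best: the simplest model underfits, the most complex one overfits.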
Source

[ VIDEO OF THE WEEK]

Solving #FutureOfOrgs with #Detonate mindset (by @steven_goldbach & @geofftuff) #FutureOfData #Podcast

Subscribe to  Youtube

[ QUOTE OF THE WEEK]

War is 90% information. – Napoleon Bonaparte

[ PODCAST OF THE WEEK]

@AlexWG on Unwrapping Intelligence in #ArtificialIntelligence #FutureOfData #Podcast

Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

In the developed economies of Europe, government administrators could save more than €100 billion ($149 billion) in operational efficiency improvements alone by using big data, not including using big data to reduce fraud and errors and boost the collection of tax revenues.

Sourced from: Analytics.CLUB #WEB Newsletter

Evaluating Hospital Quality using Patient Experience, Health Outcomes and Process of Care Measures

Patient experience (PX) has become an important topic for US hospitals. The Centers for Medicare & Medicaid Services (CMS) will be using patient feedback about their care as part of their reimbursement plan for acute care hospitals (see Hospital Value-Based Purchasing Program). Not surprisingly, hospitals are focusing on improving the patient experience to ensure they receive the maximum of their incentive payments. Additionally, US hospitals track other types of metrics (e.g., process of care and mortality rates) as measures of quality of care.

Given that hospitals have a variety of metrics at their disposal, it would be interesting to understand how these different metrics are related with each other. Do hospitals that receive higher PX ratings (e.g., more satisfied patients) also have better scores on other metrics (lower mortality rates, better process of care measures) than hospitals with lower PX ratings? In this week’s post, I will use the following hospital quality metrics:

  1. Patient Experience
  2. Health Outcomes (mortality rates, re-admission rates)
  3. Process of Care

I will briefly cover each of these metrics below.

Table 1. Descriptive Statistics for PX, Health Outcomes and Process of Care Metrics for US Hospitals (acute care hospitals only)

1. Patient Experience

Patient experience (PX) reflects the patients’ perceptions about their recent inpatient experience. PX is collected by a survey known as HCAHPS (Hospital Consumer Assessment of Healthcare Providers and Systems). HCAHPS (pronounced “H-caps“) is a national, standardized survey of hospital patients and was developed by a partnership of public and private organizations and was created to publicly report the patient’s perspective of hospital care.

The survey asks a random sample of recently discharged patients about important aspects of their hospital experience. The data set includes patient survey results for over 3800 US hospitals on ten measures of patients’ perspectives of care (e.g., nurse communication, pain well controlled). I combined two general questions (Overall hospital rating and Recommend) to create a patient advocacy metric. Thus, a total of 9 PX metrics were used. Across all 9 metrics, hospital scores can range from 0 (bad) to 100 (good). You can see the PX measures for different US hospitals here.

2. Process of Care

Process of care measures show, in percentage form or as a rate, how often a health care provider gives recommended care; that is, the treatment known to give the best results for most patients with a particular condition. The process of care metric is based on medical information from patient records that reflects the rate or percentage across 12 procedures related to surgical care.  Some of these procedures are related to antibiotics being given/stopped at the right times and treatments to prevent blood clots.  These percentages were translated into scores that ranged from 0 (worse) to 100 (best).  Higher scores indicate that the hospital has a higher rate of following best practices in surgical care. Details of how these metrics were calculated appear below the map.

I calculated an overall Process of Care metric by averaging each of the 12 process of care scores. This metric was used because it has good measurement properties (internal consistency was .75) and thus reflects a good overall measure of process of care. You can see the process of care measures for different US hospitals here.
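A sketch of how such a composite and its internal consistency (Cronbach’s alpha) could be computed in Python; the file and column names are hypothetical:

import pandas as pd

care = pd.read_csv("process_of_care.csv")                    # one row per hospital (hypothetical file)
items = [c for c in care.columns if c.startswith("proc_")]   # the 12 procedure scores (0-100)

care["process_of_care"] = care[items].mean(axis=1)           # overall Process of Care metric

# Cronbach's alpha for the internal consistency of the 12 items
k = len(items)
item_var = care[items].var(axis=0, ddof=1).sum()
total_var = care[items].sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_var / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")                      # the article reports .75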

3. Health Outcomes

Measures that tell what happened after patients with certain conditions received hospital care are called “Outcome Measures.” We use two general types of outcome measures: 1) 30-day Mortality Rate and 2) 30-day Readmission Rate. The 30-day risk-standardized mortality and 30-day risk-standardized readmission measures for heart attack, heart failure, and pneumonia are produced from Medicare claims and enrollment data using sophisticated statistical modeling techniques that adjust for patient-level risk factors and account for the clustering of patients within hospitals.

The death rates focus on whether patients died within 30 days of their hospitalization. The readmission rates focus on whether patients were hospitalized again within 30 days.

Three mortality rate and readmission rate measures were included in the healthcare dataset for each hospital. These were:

  1. 30-Day Mortality Rate / Readmission Rate from Heart Attack
  2. 30-Day Mortality Rate / Readmission Rate from Heart Failure
  3. 30-Day Mortality Rate / Readmission Rate from Pneumonia

Mortality/readmission rates are measured per 1000 patients. So, if a hospital has a heart attack mortality rate of 15, that means that for every 1000 heart attack patients, 15 of them die within 30 days (or, for the readmission rate, 15 are readmitted). You can see the health outcome measures for different US hospitals here.

Table 2. Correlations of PX metrics with Health Outcome and Process of Care Metrics for US Hospitals (acute care hospitals only).

Results

The three types of metrics (e.g., PX, Health Outcomes, Process of Care) were housed in separate databases on the data.medicare.gov site. As explained elsewhere in my post on Big Data, I linked these three data sets together by hospital name. Basically, I federated the necessary metrics from their respective databases and combined them into a single data set.

Descriptive statistics for each variable are located in Table 1. The correlations of each of the PX measures with each of the Health Outcome and Process of Care measures are located in Table 2. As you can see, the correlations of PX with other hospital metrics are very low, suggesting that PX measures are assessing something quite different from the Health Outcome and Process of Care measures.
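A minimal sketch of that linkage step, assuming three CSV exports from data.medicare.gov keyed on a common hospital-name column; the file and column names are hypothetical:

import pandas as pd

px = pd.read_csv("patient_experience.csv")        # 9 PX metrics + Patient Advocacy Index
outcomes = pd.read_csv("health_outcomes.csv")     # 30-day mortality and readmission rates
process = pd.read_csv("process_of_care.csv")      # overall Process of Care metric

# Federate the three sources into one data set, keyed on hospital name
hospitals = px.merge(outcomes, on="hospital_name").merge(process, on="hospital_name")

# Correlations of PX/advocacy metrics with outcome and process measures (as in Table 2)
print(hospitals.corr(numeric_only=True).round(2))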

Patient Loyalty and Health Outcomes and Process of Care

Patient loyalty/advocacy (as measured by the Patient Advocacy Index) is logically correlated with the other measures (except for Death Rate from Heart Failure): hospitals that have higher patient loyalty ratings have lower death and readmission rates and higher process of care scores. The degree of relationship, however, is quite small (the percent of variance explained by patient advocacy is only 3%).

Patient Experience and Health Outcomes and Process of Care

Patient experience (PX) shows a complex relationship with health outcome and process of care measures. It appears that hospitals that have higher PX ratings also report higher death rates. However, as expected, hospitals that have higher PX ratings report lower readmission rates. Although statistically significant, all of the correlations of PX metrics with other hospital metrics are low.

The PX dimension that had the highest correlation with readmission rates and process of care measures was “Given Information about my Recovery upon discharge”. Hospitals that received high scores on this dimension also experienced lower readmission rates and higher process of care scores.

Summary

Hospitals track different types of quality metrics that are used to evaluate their performance. Three types of metrics for US hospitals were examined here to understand how well they are related to each other (there are many other metrics on which hospitals can be compared). Results show that patient experience and patient loyalty are only weakly related to other hospital metrics, suggesting that improving the patient experience will have little impact on other hospital measures (health outcomes, process of care).

 

Source: Evaluating Hospital Quality using Patient Experience, Health Outcomes and Process of Care Measures by bobehayes