From Data Scientist to Diplomat

Getting agreement on “the path to deploy” your predictive model, big data application, machine-learning algorithm, segmentation, insights project, or optimization model.

Battle scars along the journey to the truth…

When you took your first class in data science, R, or linear regression, whether at your university or on Coursera, did you ever imagine you would struggle to get your recommendations or analysis implemented? Did you know you were also joining the diplomatic corps?

As a good Data Scientist you go through the process of:

  • Defining the problem and objectives for your model or insights project.
  • Gathering the information and data for your analysis.
  • Executing all of the steps of a best-practice analytics methodology: data cleansing, reduction, validation, scoring, and more.
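As a rough illustration of those preparation steps, here is a minimal pandas sketch; the column names, data, and the naive score are hypothetical, not from any particular project:

```python
import pandas as pd

# Hypothetical raw lead file for a cross-sell model
raw = pd.DataFrame({
    "customer_id": [1, 1, 2, 3, 4],
    "income": [55000, 55000, None, 72000, 61000],
    "responded": [1, 1, 0, 1, 0],
})

# Cleansing: drop exact duplicates and rows missing key fields
clean = raw.drop_duplicates().dropna(subset=["income"])

# Validation: fail fast if identifiers are not unique
assert clean["customer_id"].is_unique

# Scoring placeholder: a naive score from a single feature
clean = clean.assign(score=clean["income"] / clean["income"].max())
```

A real project would add feature reduction, out-of-time validation, and a proper model in place of the one-feature score.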

When you go to present your analysis or data science results to your clients, you end up getting pushback on what we call “the path to deploy.”

Frustrating though it may be, hitting this first wall in your data science career should be viewed as a learning experience and your greatest teacher, not only in your current role but also in how to assess an organization’s analytical maturity (including top-management support) when searching for your next role. Our greatest problems are our greatest teachers.

I have a few painful memories in this area as well (the diplomatic corps has been after me to enroll ever since). One was when I was working with my data science team, on behalf of the CMO of a card business, to build a credit card customer acquisition model. The purpose of the predictive model was to make cross-selling credit cards to retail customers easier, more efficient, and more profitable: noble and correct goals by all appearances. The CMO wanted to ensure we acquired customers who not only responded to our offer but also activated and used their card. This strategy made sense not only from a profitability standpoint but also from a customer-centric point of view. The CMO, who had agreed with the decision science team on the modeling approach and the “path to deploy,” was stymied by his boss, the CEO, who didn’t want to implement the model; the CEO was under pressure to grow the number of card customers and was afraid of not hitting his account/widget goal. The CMO was very frustrated, as was the entire team. So after much debate we scaled back our full “path to deploy” approach, offering to set the model up to score a smaller, netted-down set of cross-sell leads. That way we could prove the value of the model by showing that we might acquire slightly fewer customers, but customers who were five times more profitable than random selection. I call this deferring for collaborative agreement.

Some Key Learnings

  • If you think that data science is only about science (and trust me, we wish it were), think again. Decision makers are biased by their education, their judgments, and what they already know about a subject. The more helpful and supportive you are in educating executives about the statistics and how the modeling works, the more supportive they will be. Think back to when you were hired and someone said you had “good energy.” What does that mean? It was a value judgment about how much they liked you, your physical presence during the interview, and your ability to articulate your value story.
  • View an initial “no” as a deferral, and regroup with the team to come up with an alternative test-and-learn approach to deploying your analysis or model. Sometimes you need to crawl before you can walk and then run.
  • Ensure that you ask the client this question up front: are you the ultimate decision maker?
  • When you present your analysis results, consider presenting a variety of options for executives, with more detail on how each option impacts the business. Use data visualization to make your case.
  • Try to anticipate both positive and negative analysis outcomes up front and list them as potential challenges in the initial project definition; this way decision makers can decide whether they would even go forward with the project.
  • Use influence maps to determine who can help with pre-socializing your analysis results to convince the decision maker that the outcome is worth testing.
  • Identify the data science and analytics champions within the company and ask them to be your mentors, or better yet, invite them to key meetings when possible.

In conclusion: become known as the data scientist who can drive collaborative decisions through both horizontal and vertical thinking by:

  • Listening to your clients and your corporate culture.
  • Getting agreement on the “path to deploy.”
  • Using influence-management techniques to persuade the client or executives to test the analysis in some way. Don’t consider it rejection; consider it deferral.
  • Demonstrating how the analysis directly aligns with the organization’s mission. For example, many firms talk about customer centricity, and your analysis is key to it; remind people that test-and-learn, innovation, and risk taking are what drive toward this vision.
  • Knowing and cultivating executive sponsors, evangelists, and champions of analytics. The best firms will have an executive sponsor.

We would love your thoughts. If you find this article helpful, please feel free to reach out to our advisors on call, or to our coaches if you need a sounding board before you react to one of these situations.

Tony Branda

CEO, CustomerIntelligence.net

Originally Posted at: From Data Scientist to Diplomat

Jun 25, 20: #AnalyticsClub #Newsletter (Events, Tips, News & more..)


[  COVER OF THE WEEK ]

Image: SQL Database

[ AnalyticsWeek BYTES]

>> September 12, 2016 Health and Biotech Analytics News Roundup by pstein

>> Data Science is more than Machine Learning  by analyticsweek

>> 5 Inbound Marketing Analytics Best Practices by analyticsweekpick


[ FEATURED COURSE]

The Analytics Edge


This is an Archived Course
EdX keeps courses open for enrollment after they end to allow learners to explore content and continue learning. All features and materials may not be available, and course content will not be… more

[ FEATURED READ]

Introduction to Graph Theory (Dover Books on Mathematics)


A stimulating excursion into pure mathematics aimed at “the mathematically traumatized,” but great fun for mathematical hobbyists and serious mathematicians as well. Requiring only high school algebra as mathematical bac… more

[ TIPS & TRICKS OF THE WEEK]

Finding success in your data science career? Find a mentor
Yes, most of us don’t feel the need, but most of us really could use one. Because most data science professionals work in isolation, getting an unbiased perspective is not easy. It is also often hard to see how a data science career will progress. A network of mentors addresses these issues, giving data professionals an outside perspective and an unbiased ally. It’s extremely important for successful data science professionals to build a mentor network and use it throughout their careers.

[ DATA SCIENCE Q&A]

Q: Explain selection bias (with regard to a dataset, not variable selection). Why is it important? How can data management procedures such as missing data handling make it worse?
A: * Selection bias is the selection of individuals, groups, or data for analysis in such a way that proper randomization is not achieved
Types:
– Sampling bias: systematic error due to a non-random sample of a population, causing some members to be less likely to be included than others
– Time interval: a trial may be terminated early at an extreme value (for ethical reasons), but the extreme value is likely to be reached by the variable with the largest variance, even if all the variables have similar means
– Data: “cherry picking,” when specific subsets of the data are chosen to support a conclusion (e.g., citing examples of plane crashes as evidence that airline flight is unsafe, while ignoring the far more common flights that land safely)
– Studies: performing experiments and reporting only the most favorable results
– Can lead to inaccurate or even erroneous conclusions
– Statistical methods generally cannot overcome it

Why can missing-data handling make it worse?
– Example: individuals who know or suspect that they are HIV positive are less likely to participate in HIV surveys
– Missing-data handling amplifies this effect, because imputation is based on the mostly HIV-negative respondents
– Prevalence estimates will therefore be inaccurate
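The survey example can be made concrete with a small stdlib-only simulation; the prevalence and response rates below are invented for illustration. When positives respond less often, the naive estimate computed from respondents understates the true prevalence:

```python
import random

random.seed(0)
N = 100_000
# True status: 5% prevalence in the population
population = [random.random() < 0.05 for _ in range(N)]

# Selection bias: positives respond 30% of the time, negatives 80%
respondents = [status for status in population
               if random.random() < (0.30 if status else 0.80)]

true_prev = sum(population) / len(population)   # close to 0.05
est_prev = sum(respondents) / len(respondents)  # biased downward
```

Any imputation that fills in missing answers using respondent averages inherits this downward bias rather than correcting it.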

Source

[ VIDEO OF THE WEEK]

Using Analytics to build A #BigData #Workforce

Subscribe on YouTube

[ QUOTE OF THE WEEK]

Numbers have an important story to tell. They rely on you to give them a voice. – Stephen Few

[ PODCAST OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData #Podcast with @MPFlowersNYC, @enigma_data

Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

30 billion pieces of content are shared on Facebook every month.

Sourced from: Analytics.CLUB #WEB Newsletter

Eradicating Silos Forever with Linked Enterprise Data

The cry for linked data began innocuously enough with the simple need to share data. It has reverberated among countless verticals, perhaps most ardently in the health care space, encompassing both the public and private sectors. The advantages of linked enterprise data can positively affect any organization’s ROI and include:

  • Greater agility
  • More effective data governance implementation
  • Coherent data integration
  • Decreased time to action for IT
  • Increased trust in data

Still, the greatest impact that linked data has on the enterprise is its capacity to permanently vanquish the silo-based culture that still persists, and which stands squarely in the way of allowing a true data culture to manifest.

According to TopQuadrant Managing Director David Price, for many organizations, “The next natural step in cases where they have data about the same thing that comes from different systems is to try to make links between those so they can have one single view about sets of data.”

And, if those links are managed correctly, they may very well lead to the proverbial single version of the truth.

From Linked Open Data…
The concept of linked enterprise data stems directly from linked open data, which has typically operated at the nexus between the public and private sectors (although it can involve either one singularly) and enabled organizations to link to and access data that are not theirs. Because of the uniform approach of semantic technologies, that data is exchangeable with virtually any data management system that utilizes smart data techniques. “As long as we make sure that all of the data that we put in a semantic data lake adheres to standard RDF technology and we use standard ontologies and taxonomies to format the data, they’re already integrated,” said Franz CEO Jans Aasman. “You don’t have to do anything; you can just link them together.” Thus, organizations in the private sector can readily integrate public sector linked open data into their analytics and applications in a time frame that largely bypasses typical pain points of integration and data preparation.
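A toy sketch of why shared identifiers make standards-based data “already integrated” (the triples, prefixes, and names below are invented; a real system would use an RDF library and standard ontologies): when each system's data is a set of subject–predicate–object triples over common identifiers, integration reduces to a set union.

```python
# Toy triple stores: sets of (subject, predicate, object) strings
crm = {
    ("cust:42", "schema:name", "Ada Lovelace"),
    ("cust:42", "ex:segment", "premium"),
}
billing = {
    ("cust:42", "ex:balance", "1200"),
    ("cust:99", "schema:name", "Alan Turing"),
}

# Because both graphs use the same identifiers, "integration" is union
linked = crm | billing

# One single view of cust:42 across both source systems
view = {(p, o) for (s, p, o) in linked if s == "cust:42"}
```

The hard work in practice is agreeing on those shared identifiers and ontologies up front, which is exactly what the standards provide.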

Such celerity could prove influential in massive data sharing endeavors like the Panama Papers, in which data were exchanged across international borders and numerous databases to help journalists track down instances of financial fraud. Price is leading TopQuadrant’s involvement in the Virtual Construction for Roads (V-CON) project, in which the company is contributing to an IT system that harmonizes data for road construction among a plethora of public and private sector entities in the Netherlands and Sweden. When asked if TopQuadrant’s input on the project was based on integrating and linking data among these various parties, Price commented, “That’s exactly where the focus is.”

…to Linked Enterprise Data Insight
Linked data technologies engender an identical effect when deployed within the enterprise. In cases in which different departments require the same data for different purposes, or in instances in which there are multiple repositories or applications involving the same data, linked enterprise data can provide a comprehensive data source comprised of numerous tributaries relevant for all applications. “The difference is this stuff is enabled to also allow you to extract all the data and make it available for anybody to download it… and that includes locally,” Price commented. “You get more flexibility and less vendor lock-in by using standards.” In what might be the most compelling use case for linked enterprise data, organizations can also link all of their data–stemming from internal and external sources–for a more profound degree of analytics based on relationship subtleties that semantic technologies instinctively perceive. Cambridge Semantics VP of Marketing John Rueter weighed in on these benefits when leveraged at scale.

“That scale is related to an almost sort of instantaneous querying and results of an entire collection of data. It has eliminated that linear step-wise approach of multiple links or steps to get at that data. The fact that you’re marrying the combination of scale and speed you’re also, I would posit, getting better insights and more precise and accurate results based upon the sets of questions you’re asking given that you’ve got the ability to access and look at all this data.”

Agile Flexibility
Linked enterprise data allows all data systems to share ontologies (semantic models) that readily adjust to include additional models and data types. The degree of flexibility they facilitate is underscored by the decreased data preparation and maintenance required to sustain what is, in effect, one linked system. Instead of addressing modeling requirements and system updates individually, linked enterprise data systems handle these facets of data management holistically and, in most instances, singularly. Issuing additional requirements or updating different databases in a linked data system need be done only once, in a centralized manner that is simultaneously reflected in the individual components of the linked system. “In a semantic technology approach the data model or schema or ontology is actually its own data,” Price revealed. “The schema is just more data and the data in some database that represents me, David Price, can actually be related to different data models at the same time in the same database.” This sort of flexibility makes for a much more agile environment in which IT teams and end users spend less time preparing data and more time reaping its benefits.

Data Governance Ramifications
Although linked enterprise data doesn’t formally affect data governance as defined as the rules, roles, and responsibilities upon which sustainable use of data depends, it greatly improves its implementation. Whether ensuring regulatory compliance or reuse of data, standards-based environments furnish consistent semantics and metadata that are understood in a uniform way—across as many different systems as an enterprise has. One of the most pivotal points for implementing governance policy is ensuring that organizations are utilizing the same terms for the same things, and vice versa. “The difference our technology brings is that things are much more flexible and can be changed more easily, and the relationships between things can be made much more clear,” Price remarked about the impact of linked data on facilitating governance. Furthermore, the uniform approach of linked data standards ensures that “the items that are managed are accurate, complete, have a good definition that’s understandable by discipline experts, and that sometimes have a more general business glossary definition and things like that,” he added.

Security
There are multiple facets of data governance that are tied to security, such as who has the authority to view which data and how. In a linked data environment such security is imperative, particularly when sharing data across the public and private sectors. Quite possibly, security measures are reinforced even more in linked data settings than in others, since they are fortified by conventional security methods and those particular to smart data technologies. The latter involves supplementing traditional data access methods with semantic statements or triples; the former includes any array of conventional methods to protect the enterprise and its data. “The fact that you use a technology that enables things to be public doesn’t mean they have to be,” Price said. “Then you put on your own security policies. It’s all stored in a database that can be secured at various levels of accessing the database.”

Eradicating Silos
Implicit in all of the previously mentioned benefits is the fact that linked enterprise data effectively eradicates the proliferation of silos which has long complicated data management as a whole. Open data standards facilitate much more fluid data integration while decreasing temporal aspects of data preparation, shifting the emphasis on insight and action. This ability to rid the enterprise of silos is one which transcends verticals, a fact which Price readily acknowledged. “Our approach to the V-Con project is that although the organizations involved in this are National Roads Authority, our view is that the problem they are trying to solve is a general one across more than the roads industry.” In fact, it is applicable to the enterprise in general, particularly that which is attempting to sustain its data management in a long-term, streamlined manner to deliver both cost and performance boons.

Source: Eradicating Silos Forever with Linked Enterprise Data

Autonomous vehicles: Three key use cases of advanced analytics shaping the industry

Originally featured on IoT Tech News.

By Akihiro Kurita

Driven by analytics, the culture of the automobile, including conventional wisdom about how it should be owned and driven, is changing. Case in point: take the evolution of the autonomous vehicle. Already, the very notion of what a car is capable of is being radically rethought based on specific analytics use cases, and the definition of the ‘connected car’ is evolving daily.

Vehicles can now analyse information from drivers and passengers to provide insights into driving patterns, touch point preferences, digital service usage, and vehicle condition, in virtually real time. This data can be used for a variety of business-driven objectives, including new product development, preventive and predictive maintenance, optimised marketing, upselling, and making data available to third parties. It’s not only powering the vehicle itself but completely reshaping the industry.

By using a myriad of sensors to inform decisions traditionally made by human operators, analytics is completely reprogramming the fundamental areas of driving: perception, decision making, and operational information. In this article, we discuss a few of the key analytics-driven use cases that we are likely to see in the future as this category (ahem) accelerates.

The revolution of driverless vehicles

Of course, in the autonomous vehicle, the major aspect missing is the driver, traditionally the eyes and ears of the journey. Replicating the human functions is one of the major ways in which analytics is shaping the industry. Based on a series of sensors, the vehicle gathers data on nearby objects, like their size and rate of speed and categorizes them based on how they are likely to behave. Combined with technology that is able to build a 3D map of the road, it helps it then to form a clear picture of its immediate surroundings.
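A deliberately simplified sketch of that categorization step (the thresholds and readings are invented; real perception stacks use learned models over raw sensor data, not hand-written rules):

```python
# Classify a nearby object from its estimated size and speed
def categorize(size_m: float, speed_mps: float) -> str:
    if size_m < 1 and speed_mps < 3:
        return "pedestrian"
    if size_m < 2 and speed_mps < 12:
        return "cyclist"
    return "vehicle"

# Hypothetical (size in metres, speed in m/s) sensor readings
readings = [(0.5, 1.2), (1.7, 6.0), (4.2, 15.0)]
labels = [categorize(size, speed) for size, speed in readings]
```

The point is the pipeline shape, sensed attributes in, behavioural category out, which then feeds the prediction step described next.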

Now the vehicle can see, but it requires analytics to react and progress accordingly, taking into account, for instance, the other means of transportation in the vicinity. By using data to understand perception, analytics is creating a larger connected network of vehicles that are able to communicate with each other. As the technology becomes more and more reliable, self-driving vehicles have the potential to become safer than human drivers and eventually replace them in the not-so-distant future. In fact, a little over one year ago, two self-driving buses were trialed on the public roads of Helsinki, Finland, alongside traffic and commuters. They were the first trials of their kind with the Easymile EZ-10 electric mini-buses, capable of carrying up to 12 people.

Artificial intelligence driving the innovation and decision making

In the autonomous vehicle, one of the major tasks of a machine learning algorithm is the continuous rendering of the environment and the forecasting of possible changes to those surroundings. Indeed, the challenge facing autonomous means of transportation is not so much capturing the world around them, but making sense of it. For example, a car can tell when a pedestrian is ready to cross the street by observing behaviour over and over again. Algorithms can sort through what is important, so that the vehicle will not need to hit the brakes every time a small bird crosses its path.

That is not to say we are about to become obsolete. For the foreseeable future, human judgement is still critical and we’re not at the stage of abandoning complex judgement calls to algorithms. While we are in the process of ‘handing over’ anything that can be automated with some intelligence, complex human judgement is still needed. As time goes on, Artificial Intelligence (AI) ‘judgement’ will be improved but the balance is delicate – not least because of the clear and obvious concerns over safety.

How can we guarantee road safety?

Staying safe on the road is understandably one of the biggest focuses when it comes to automated means of transportation. A 2017 study by Deloitte found that three-quarters of Americans do not trust autonomous vehicles. Perhaps this is unsurprising, as trust in new technology takes time: it took many years before people lost their fear of being rocketed through the stratosphere at 500 mph in an aeroplane.

There can, and should, be no limit to the analytics being applied to every aspect of autonomous driving – from the manufacturers, to the technology companies, understanding each granular piece of information is critical. But, it is happening. Researchers at the Massachusetts Institute of Technology are asking people worldwide how they think a robot car should handle such life-or-death decisions. Its goal is not just for better algorithms and ethical tenets to guide autonomous vehicles, but to understand what it will take for society to accept the vehicles and use them.

Another big challenge is determining how long fully automated vehicles must be tested before they can be considered safe. They would need to drive hundreds of millions of miles to acquire enough data to demonstrate their safety in terms of deaths or injuries, according to an April 2016 report from the think tank RAND Corp. Yet only this month, a mere 18 months after that report was released, professor Amnon Shashua, Mobileye CEO and Intel senior vice president, announced that the company has developed a mathematical formula that reportedly ensures a “self-driving vehicle operates in a responsible manner and does not cause accidents for which it can be blamed.”

Transforming transportation and the future

In many industries, such as retail, banking, aviation, and telecoms, companies have long used the data they gather from customers and their connected devices to improve products and services, develop new offerings, and market more effectively. The automotive industry has not had the frequent digital touch points to be able to do the same. The connected vehicle changes all that.

Data is transforming the way we think about transportation, and advanced analytics has the potential to make driving more accessible and safe by creating new insights that open up new opportunities. As advanced analytics and AI become the new paradigm in transportation, the winners will be those who best interpret the information to create responsive vehicles that make any journey as simple as getting from A to B.

The post Autonomous vehicles: Three key use cases of advanced analytics shaping the industry appeared first on Think Big.

Originally Posted at: Autonomous vehicles: Three key use cases of advanced analytics shaping the industry

Jun 18, 20: #AnalyticsClub #Newsletter (Events, Tips, News & more..)


[  COVER OF THE WEEK ]

Image: Data security

[ AnalyticsWeek BYTES]

>> Multi-Session & Multi-Channel Funnel Reporting in Google Analytics BigQuery by administrator

>> A CS Degree for Data Science — Part I, Efficient Numerical Computation by michael-li

>> Can the internet be decentralized through blockchain technology? by administrator


[ FEATURED COURSE]

Artificial Intelligence


This course includes interactive demonstrations which are intended to stimulate interest and to help students gain intuition about how artificial intelligence methods work under a variety of circumstances…. more

[ FEATURED READ]

Machine Learning With Random Forests And Decision Trees: A Visual Guide For Beginners


If you are looking for a book to help you understand how the machine learning algorithms “Random Forest” and “Decision Trees” work behind the scenes, then this is a good book for you. Those two algorithms are commonly u… more

[ TIPS & TRICKS OF THE WEEK]

Finding success in your data science career? Find a mentor
Yes, most of us don’t feel the need, but most of us really could use one. Because most data science professionals work in isolation, getting an unbiased perspective is not easy. It is also often hard to see how a data science career will progress. A network of mentors addresses these issues, giving data professionals an outside perspective and an unbiased ally. It’s extremely important for successful data science professionals to build a mentor network and use it throughout their careers.

[ DATA SCIENCE Q&A]

Q: Explain the difference between “long” and “wide” format data. Why would you use one or the other?
A: * Long: one column contains the values and another column lists the context of each value, e.g. columns fam_id, year, fam_inc

* Wide: each different variable sits in a separate column, e.g. columns fam_id, fam_inc96, fam_inc97, fam_inc98

Long vs wide:
– Data manipulations such as summarizing and filtering are often easier when data is in the wide format
– Program requirements: some tools and functions expect one layout or the other
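The fam_inc example maps directly onto pandas reshaping: melt goes wide to long, pivot goes back. The numbers here are made up for illustration:

```python
import pandas as pd

wide = pd.DataFrame({
    "fam_id": [1, 2],
    "fam_inc96": [40000, 52000],
    "fam_inc97": [41000, 53500],
    "fam_inc98": [42500, 55000],
})

# Wide -> long: one row per (family, year) pair
long = wide.melt(id_vars="fam_id", var_name="year", value_name="fam_inc")
long["year"] = long["year"].str.replace("fam_inc", "", regex=False)

# Long -> wide again: years fan back out into columns
wide_again = long.pivot(index="fam_id", columns="year", values="fam_inc")
```

Two families times three years gives six rows in the long frame, which is the layout most plotting and modeling tools prefer.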

Source

[ VIDEO OF THE WEEK]

@AnalyticsWeek Panel Discussion: Health Informatics Analytics

Subscribe on YouTube

[ QUOTE OF THE WEEK]

He uses statistics as a drunken man uses lamp posts—for support rather than for illumination. – Andrew Lang

[ PODCAST OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData #Podcast with @DavidRose, @DittoLabs

Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

2.7 zettabytes of data exist in the digital universe today.

Sourced from: Analytics.CLUB #WEB Newsletter

Choosing an Analytics Development Approach: How to Optimize Your Business Benefits

The question of embedding analytics in an application has moved from “Whether to?” to “How to?” But on the road to embedded analytics, many companies get caught up in the “build versus buy” debate. This inevitably stalls projects and delays time to revenue.

>> Related: Why “Build or Buy?” Is the Wrong Question for Analytics <<

As it turns out, “build or buy” is a false dichotomy. Application teams searching for an analytics solution have a third option: They can take a combined approach by purchasing an analytics development platform and customizing it. This method lets companies get to market faster than building and supports more sophisticated features than buying a bolt-on analytics solution.

How do these three development methods—build, buy, or the combined approach—compare?

The first instinct for many software teams is to build exactly what they want using open-source code libraries and charting components. This may work well, at least until customers begin expecting more sophisticated capabilities. According to the 2018 State of Embedded Analytics Report, end users are less satisfied with homegrown analytics than with a combined approach. In addition, application teams that build analytics on their own using custom code and components see worse results in terms of user experience, differentiation from the competition, and attracting new users.

If building analytics on your own is no longer a realistic option for long-term success, application teams must buy a solution. But they still have two choices: Either embed a bolt-on data discovery tool or take a combined approach and leverage an analytics development platform.

At first glance, buying a solution from a third-party vendor may seem like the way to go—especially if you’ve fallen behind the market and need a way to quickly update your analytics. But time and again, companies that choose a bolt-on approach suffer in the long run. The 2018 State of Embedded Analytics Report shows that teams that buy a bolt-on analytics solution see far fewer business benefits than those choosing to build their own solution or take a combined approach by purchasing an analytics development platform.

Compared to buying a bolt-on solution, a combined approach supports a more differentiated product, improves win rates, reduces customer churn, and boosts overall revenue. It also positively impacts the end users, resulting in better user adoption, user satisfaction, and user experiences.

In particular, a large gap exists when it comes to increasing overall revenue. According to the survey, application teams that took a combined approach when embedding analytics were 19 percentage points more likely to increase revenue than those that bought a bolt-on solution.

This revenue gap may be due in part to an unintended price cap that results from bolt-on solutions. Seventy-four percent of commercial applications taking a combined approach are able to charge for their analytics, significantly more than the 60 percent of commercial companies that bought a bolt-on solution.

Why does the combined approach enable companies to charge more? Because bolt-on solutions tend to support only limited capabilities and focus on commoditized features such as standard interactive dashboards and data visualizations, while taking a combined approach means companies are more likely to embed sophisticated analytics capabilities that set their applications apart from the competition.

It’s clear that companies with the most successful analytic applications have one thing in common: They don’t build OR buy, strictly speaking. Instead, they leverage an analytics development platform to quickly deliver the most robust capabilities to the market.

Read the 2018 State of Embedded Analytics Report >

 

 

Originally Posted at: Choosing an Analytics Development Approach: How to Optimize Your Business Benefits

AI Mannequins Fight City Crimes – Weekly Guide

This week’s Artificial Intelligence (AI) guide discusses its potential to develop supercharged batteries, antibiotics, and much more. AI Used to Develop Supercharge Batteries: Stanford, together with Toyota researchers, has developed a new Machine Learning method that can supercharge batteries for the development of electric vehicles. Just as supercharging was introduced into the smartphone, today, […]

The post AI Mannequins Fight City Crimes – Weekly Guide appeared first on GreatLearning.

Originally Posted at: AI Mannequins Fight City Crimes – Weekly Guide by administrator

Jun 11, 20: #AnalyticsClub #Newsletter (Events, Tips, News & more..)


[  COVER OF THE WEEK ]

image
Human resource  Source

[ FEATURED COURSE]

The Analytics Edge

image

This is an Archived Course
EdX keeps courses open for enrollment after they end to allow learners to explore content and continue learning. All features and materials may not be available, and course content will not be… more

[ FEATURED READ]

The Black Swan: The Impact of the Highly Improbable

image

A black swan is an event, positive or negative, that is deemed improbable yet causes massive consequences. In this groundbreaking and prophetic book, Taleb shows in a playful way that Black Swan events explain almost eve… more

[ TIPS & TRICKS OF THE WEEK]

Grow at the speed of collaboration
Research by Cornerstone OnDemand pointed to the need for better collaboration within the workforce, and the data analytics domain is no different. A rapidly changing and growing industry like data analytics is very difficult for an isolated workforce to keep up with. A good collaborative work environment facilitates a better flow of ideas, improved team dynamics, rapid learning, and an increasing ability to cut through the noise. So, embrace collaborative team dynamics.

[ DATA SCIENCE Q&A]

Q:Explain what a false positive and a false negative are. Why is it important to distinguish these from each other? Provide examples of when false positives are more important than false negatives, when false negatives are more important than false positives, and when these two types of errors are equally important
A: * False positive
Improperly reporting the presence of a condition when it’s not in reality. Example: HIV positive test when the patient is actually HIV negative

* False negative
Improperly reporting the absence of a condition when in reality it’s the case. Example: not detecting a disease when the patient has this disease.

When false positives are more important than false negatives:
– In a non-contagious disease, where treatment delay doesn’t have any long-term consequences but the treatment itself is grueling
– HIV test: psychological impact

When false negatives are more important than false positives:
– If early treatment is important for good outcomes
– In quality control: a defective item slips through the cracks!
– Software testing: a test to catch a virus has failed
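The two error counts can be computed directly from paired labels; the label vectors below are made up for illustration (1 = condition present):

```python
# Ground-truth labels and a classifier's predictions (hypothetical data)
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]

# False positive: predicted present when actually absent (false alarm)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)

# False negative: predicted absent when actually present (missed case)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

print(f"false positives: {fp}, false negatives: {fn}")
```

Which count matters more is a domain decision, as the examples above show: a missed disease (false negative) and a false alarm (false positive) carry very different costs.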

Source

[ VIDEO OF THE WEEK]

Making sense of unstructured data by turning strings into things


Subscribe to  Youtube

[ QUOTE OF THE WEEK]

What we have is a data glut. – Vernor Vinge

[ PODCAST OF THE WEEK]

@DrewConway on fabric of an IOT Startup #FutureOfData #Podcast


Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

Retailers who leverage the full power of big data could increase their operating margins by as much as 60%.

Sourced from: Analytics.CLUB #WEB Newsletter

Big Data Provides Big Insights for U.S. Hospitals

The U.S. government provides a variety of publicly available databases that include metrics on the performance of US hospitals, including patient experience (PX) database, health outcome database, process of care database and medical spending database. Applying Big Data principles on these disparate data sources, I integrated different metrics from their respective databases to better understand the quality of US hospitals and determine ways they can improve the patient experience and the overall healthcare delivery system. I spent the summer analyzing this data, and wrote many posts about it.

Why the Patient Experience (PX) has Become an Important Topic for U.S. Hospitals

The Centers for Medicare & Medicaid Services (CMS) will be using patient feedback about their care as part of their reimbursement plan for acute care hospitals (see Hospital Value-Based Purchasing (VBP) program). The purpose of the VBP program is to promote better clinical outcomes for patients and improve their experience of care during hospital stays. Not surprisingly, hospitals are focusing on improving the patient experience to ensure they receive their maximum incentive payments.

Key Findings from Analyses of Big Data of US Hospitals

Hospitals, like all big businesses, struggle with knowing “if you do this, then you will succeed at that.” While hospital administrators can rely on gut feelings, intuition, and anecdotal evidence to guide their decisions on how to improve their hospitals, data-driven decision-making provides better, more reliable insights about concrete things hospital administrators can do to improve their hospitals. While interpretations of my analyses of these Big Data are debatable, the data are what they are.

I have highlighted some key findings below (with accompanying blog posts) that provide value for different constituencies: 1) healthcare consumers can find the best hospitals, 2) healthcare providers can focus on areas that improve how they deliver healthcare, and 3) healthcare researchers can uncover deeper insights about factors that impact the patient experience and health outcomes.

  1. Healthcare Consumers Can Use Interactive Maps of US Hospital Ratings to Select the Best Provider. Healthcare consumers can use interactive maps to understand the quality of their hospitals with respect to three metrics: 1) Map of US hospitals on patient satisfaction, 2) Map of US hospitals on health outcomes, and 3) Map of US hospitals on process of care. Take a look at each to know how your hospital performs.
  2. Hospitals Can Use Patient Surveys to Improve Patient Loyalty. Hospitals might be focusing on the wrong areas to improve patient loyalty. While researchers found that hospitals’ top 3 priorities to improve the patient experience are 1) reducing noise, 2) improving patient rounding and 3) improving the discharge process and instructions, analysis of HCAHPS survey results shows that hospitals will likely receive a greater return on their improvement investment (ROI) if they focus on improving the patient experience along these dimensions: 1) pain management, 2) staff responsiveness and 3) staff explaining meds.
  3. There are Differences in the Patient Experience across Acute Care and Critical Access Hospitals. Acute care hospitals receive lower patient satisfaction ratings than critical access hospitals. Differences across these two types of hospitals also extend to ways to improve the patient experience: the key areas for improving patient loyalty/advocacy differ by hospital type. ACHs need to focus on 1) Staff explains meds, 2) Responsiveness and 3) Pain management. CAHs need to focus on 1) Pain management and 2) Responsiveness.
  4. Patient Satisfaction is Related to Health Outcomes and Process of Care Measures. The patient experience dimension that had the highest correlation with readmission rates and process of care measures was “Given Information about my Recovery upon discharge.” Hospitals that received good patient ratings on this dimension also experienced lower readmission rates and higher process of care scores than hospitals with poor patient ratings in this area.
  5. Medical Spending is Not Related to Patient Satisfaction. I found that hospitals with lower medical spend per patient are able to deliver a comparable patient experience to hospitals with greater medical spend per patient.

Insights gained from combining and integrating disparate databases (especially ones that include both attitudinal and operational/objective metrics) provide much greater value than any single database can by itself. That is one of the major benefits of applying Big Data principles. The integrated healthcare Big Data set was rich in insights and allowed us to answer bigger questions about how best to improve the patient experience and health outcomes.

Originally Posted at: Big Data Provides Big Insights for U.S. Hospitals by bobehayes

Jun 04, 20: #AnalyticsClub #Newsletter (Events, Tips, News & more..)


[  COVER OF THE WEEK ]

image
Conditional Risk  Source

[ AnalyticsWeek BYTES]

>> Understanding & Maintaining Data Quality in Digital Analytics by administrator

>> How a WordPress Backup and Restore Strategy Gets Your Site Back Online Fast After a Cyber Attack by administrator

>> #FutureOfData with @CharlieDataMine, @Oracle discussing running analytics in an enterprise by v1shal

Wanna write? Click Here

[ FEATURED COURSE]

R Basics – R Programming Language Introduction

image

Learn the essentials of R Programming – R Beginner Level!… more

[ FEATURED READ]

Superintelligence: Paths, Dangers, Strategies

image

The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but … more

[ TIPS & TRICKS OF THE WEEK]

Strong business case could save your project
Like anything in corporate culture, a project is oftentimes about the business, not the technology. The same thinking applies to data analysis: it’s not always about the technicalities but about the business implications. Data science project success criteria should include project management success criteria as well. This will ensure smooth adoption, easy buy-ins, room for wins, and cooperating stakeholders. So, a good data scientist should also possess some qualities of a good project manager.

[ DATA SCIENCE Q&A]

Q:How to clean data?
A: 1. First: detect anomalies and contradictions
Common issues:
* Tidy data (Hadley Wickham’s paper):
– column names are values, not names, e.g. 26-45…
– multiple variables are stored in one column, e.g. m1534 (males aged 15-34)
– variables are stored in both rows and columns, e.g. tmax, tmin in the same column
– multiple types of observational units are stored in the same table, e.g. a song dataset and a rank dataset in the same table
– a single observational unit is stored in multiple tables (can be combined)
* Data-Type constraints: values in a particular column must be of a particular type: integer, numeric, factor, boolean
* Range constraints: number or dates fall within a certain range. They have minimum/maximum permissible values
* Mandatory constraints: certain columns can’t be empty
* Unique constraints: a field must be unique across a dataset: a same person must have a unique SS number
* Set-membership constraints: the values for a columns must come from a set of discrete values or codes: a gender must be female, male
* Regular expression patterns: for example, phone number may be required to have the pattern: (999)999-9999
* Misspellings
* Missing values
* Outliers
* Cross-field validation: certain conditions that span multiple fields must hold. For instance, in laboratory medicine the percentages of the different white blood cell types must sum to 100 (they are all percentages). In a hospital database, a patient’s date of discharge can’t be earlier than the admission date
2. Clean the data using:
* Regular expressions: misspellings, regular expression patterns
* KNN-impute and other missing values imputing methods
* Coercing: data-type constraints
* Melting: tidy data issues
* Date/time parsing
* Removing observations
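A few of these cleaning steps can be sketched in pandas; the tiny DataFrame and its column names below are invented for illustration:

```python
import pandas as pd

# Hypothetical raw data exhibiting issues from the checklist above:
# a data-type problem, a pattern violation, and a missing value.
raw = pd.DataFrame({
    "name":  ["ann", "bob", "cat"],
    "phone": ["(617)555-0100", "555-0199", None],
    "age":   ["34", "29", "41"],
})

# Data-type constraint: coerce age (stored as strings) to integers
raw["age"] = pd.to_numeric(raw["age"], errors="coerce").astype("Int64")

# Regular-expression pattern: flag phone numbers not matching (999)999-9999
pattern = r"^\(\d{3}\)\d{3}-\d{4}$"
raw["phone_ok"] = raw["phone"].str.match(pattern, na=False)

# Missing values: a simple placeholder fill here; KNN imputation
# would be the choice for numeric features with informative neighbors
raw["phone"] = raw["phone"].fillna("unknown")
```

Melting (for the tidy-data issues) follows the same pattern via `raw.melt(...)`, and range or set-membership constraints are ordinary boolean filters.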

Source

[ VIDEO OF THE WEEK]

#FutureOfData Podcast: Peter Morgan, CEO, Deep Learning Partnership


Subscribe to  Youtube

[ QUOTE OF THE WEEK]

Hiding within those mounds of data is knowledge that could change the life of a patient, or change the world. – Atul Butte, Stanford

[ PODCAST OF THE WEEK]

@RCKashyap @Cylance on State of Security & Technologist Mindset #FutureOfData #Podcast


Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

100 terabytes of data are uploaded to Facebook daily.

Sourced from: Analytics.CLUB #WEB Newsletter