Big Data Is No Longer Confined to the Big Business Playbook

Only 18 percent of small businesses and just over half (57 percent) of midsize companies use business intelligence and analytics solutions, according to market research firm SMB Group.

What about the others?

Many smaller businesses are reluctant to invest in leading-edge technologies. Limited capital or the lack of the right staff member might prompt even the most forward-thinking companies to avoid innovations or postpone such a move until they reach a certain revenue or profit goal.

It’s an erroneous notion among small-business owners and decision-makers that big data is too complex, or something only big companies can afford to try. Even the name (the “big” in big data) can seem off-putting. But diving into big data is not as tough as small companies might think, and the payoff can be significant.

Advances in user interfaces, automation and cognitive computing are removing the barriers to adopting big-data tools, and prices have fallen to levels small businesses can afford. How does free sound?

Can you imagine the impact when a small-business owner can sort through volumes of internal and external data about his or her business, and then allow any employee, in any role, to make insightful decisions and engage customers more effectively?

What if I told you that you don’t have to imagine it? Tapping into critical data that could change the way your company does business is already within reach.

Today, any employee can use analytics to make data-driven decisions that directly address his or her business problems without having to worry about the underlying technology or needing an in-house data scientist with specialty skills in analytics.

Solutions are now available (including Watson Analytics developed by my company, IBM) that are designed not only for data scientists and analysts but for every business professional who uses data.

There are extremely powerful tools that can help knowledge workers find insightful perspectives and answer a whole host of questions about their area of the business using natural language, much like using a search engine but far more meaningful.

This means smaller businesses can take advantage of their speed and customer proximity and, combined with new data insights, really be game changers.

IBM estimated in 2013 that with the rapid spread of mobile devices and the “Internet of Things,” the world is generating more than 2.5 billion gigabytes of data every single day. These vast sets of data are an organization’s most precious natural resource — whether that data is structured in databases or is the kind of information that comes from blog posts, customer-support chat sessions or even social networks like Twitter.

When analytics is applied to big data, an organization can change the way it makes decisions. Business processes improve, customer engagement becomes more personalized and new markets can be created as needs emerge.

A good example of this is Tacoma, Wash.-based Point Defiance Zoo & Aquarium, an IBM client. Every day, millions of data records are generated about visitors’ exhibit preferences, along with significant consumer feedback on social channels such as Facebook.

The zoo used big-data analytics to uncover patterns and trends in its data to help drive its ticket sales and enhance visitor experiences. As a result, Point Defiance Zoo’s online ticket sales grew more than 700 percent in one year.

This is just one example of an organization using its data to drive decisions and dramatically increase revenue, even though it has fewer than 100 people and no data scientists on its payroll.

Small business owners can test out big-data analytics and see the benefits for themselves. The following steps are ways that managers can get started and reap the benefits:

 

1. Identify your challenges. 

Understand the opportunity that big data and analytics can present for your company. Set some goals, whether they are to save on costs, increase return on investment, or drive growth and expansion.

2. Get to know your data. 

Start by looking at the data your organization is creating and understanding where it’s coming from, including from social networks, business activities and software applications for sales or marketing. Knowing what you have to work with is a critical step.

3. Identify the information that’s most useful. 

Based on the data that your organization is already generating, figure out which types will have the most impact on your business.

Consider these questions: Would mining customer sentiment on social networks help to improve product development and customer service? Can you use sales and marketing data to improve growth and revenue?

Focus on your customers. Historically, the main focus of IT has been on automating and driving cost savings in the back-end systems of record.

Today, the focus is increasingly shifting to systems of engagement. When diving into your data, think about how to drive top-line revenue growth by using data to find new customers and partners and deliver real-time value to them in unique and unexpected ways.

 

4. Explore. 

Choosing the right technology, tailored to your organization’s needs, will be crucial to your company’s big-data analytics success. Free versions of powerful solutions are available today that provide a good representation of features, so you can get a taste of what they can do. These features will often provide enough benefit to make a difference immediately.

5. Consider using the cloud.

The rise of the cloud is having a dramatic impact, putting big-data analytics technologies within reach for small businesses and startups. Putting analytics in the cloud means minimal cost and infrastructure requirements. You can drive down costs and redirect the resulting savings to product development and customer service while extracting critical insights for your business.

6. Tap the power of peers.

Communities like StartupNation or Midsize Insider are ideal forums for investigating new solutions and posing questions. They are also a great way to identify local IT services companies that have a level of expertise in analytics technologies and can work with you to apply them to your particular business need.

Originally posted via “Big Data Is No Longer Confined to the Big Business Playbook”


Sep 27, 18: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Conditional Risk

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ AnalyticsWeek BYTES]

>> Why Entrepreneurship Should Be Compulsory In Schools by v1shal

>> Announcing RStudio and Databricks Integration by analyticsweek

>> Why bottom-up innovation is better than top-down innovation? by v1shal

Wanna write? Click Here

[ NEWS BYTES]

>> Artificial intelligence has learned to probe the minds of other computers – Science Magazine Under Artificial Intelligence

>> Accenture Acquires Big Data Analytics, AI Consulting Firm Kogentix – ChannelE2E Under Analytics

>> Social and behavioral analytics experts speak at Northwestern – Northwestern University NewsCenter Under Analytics

More NEWS ? Click Here

[ FEATURED COURSE]

Intro to Machine Learning


Machine Learning is a first-class ticket to the most exciting careers in data analysis today. As data sources proliferate along with the computing power to process them, going straight to the data is one of the most stra… more

[ FEATURED READ]

Superintelligence: Paths, Dangers, Strategies


The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but … more

[ TIPS & TRICKS OF THE WEEK]

Grow at the speed of collaboration
Research by Cornerstone OnDemand pointed out the need for better collaboration within the workforce, and the data analytics domain is no different. A rapidly changing and growing industry like data analytics is very difficult for an isolated workforce to keep up with. A good collaborative work environment facilitates a better flow of ideas, improved team dynamics, rapid learning, and a greater ability to cut through the noise. So, embrace collaborative team dynamics.

[ DATA SCIENCE Q&A]

Q:What is A/B testing?
A: * Two-sample hypothesis testing
* Randomized experiments with two variants: A and B
* A: control; B: variation
* User-experience design: identify changes to web pages that increase clicks on a banner
* Current website: control; NULL hypothesis
* New version: variation; alternative hypothesis
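The bullet points above can be made concrete with a small sketch: a two-proportion z-test comparing a control page (A) against a variation (B). The click counts below are made-up illustration data, and `ab_test_z` is just a hypothetical helper name:

```python
import math

def ab_test_z(conversions_a, n_a, conversions_b, n_b):
    """Two-proportion z-test: does variation B convert differently from control A?"""
    p_a = conversions_a / n_a
    p_b = conversions_b / n_b
    # Pooled proportion under the null hypothesis (no difference between A and B)
    p_pool = (conversions_a + conversions_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Control: 200 clicks out of 10,000 views; variation: 260 out of 10,000
z, p = ab_test_z(200, 10_000, 260, 10_000)
print(round(z, 2), round(p, 4))
```

A small p-value (conventionally below 0.05) would lead you to reject the null hypothesis that the new banner performs the same as the current one.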

Source

[ VIDEO OF THE WEEK]

Rethinking classical approaches to analysis and predictive modeling


Subscribe to  Youtube

[ QUOTE OF THE WEEK]

Without big data, you are blind and deaf and in the middle of a freeway. – Geoffrey Moore

[ PODCAST OF THE WEEK]

@SidProbstein / @AIFoundry on Leading #DataDriven Technology Transformation #FutureOfData #Podcast


Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

73% of organizations have already invested or plan to invest in big data by 2016

Sourced from: Analytics.CLUB #WEB Newsletter

#Compliance and #Privacy in #Health #Informatics by @BesaBauta


In this podcast, @BesaBauta from MercyFirst talks about the compliance and privacy challenges faced in a highly regulated industry. Drawing on her experience in health informatics, Besa shares some best practices and challenges faced by data science groups in health informatics and similar groups in regulated spaces. This podcast is great for anyone looking to learn about data science compliance and privacy challenges.

Besa’s Recommended Read:
The Art Of War by Sun Tzu and Lionel Giles https://amzn.to/2Jx2PYm

Podcast Link:
iTunes: http://math.im/itunes
GooglePlay: http://math.im/gplay

Besa’s BIO:
Dr. Besa Bauta is the Chief Data Officer and Chief Compliance Officer for MercyFirst, a social service organization providing health and mental health services to children and adolescents in New York City. She oversees the Research, Evaluation, Analytics, and Compliance for Health (REACH) division, including data governance and security measures, analytics, risk mitigation, and policy initiatives.
She is also an Adjunct Assistant Professor at NYU, and previously worked as a Research Director for a USAID project in Afghanistan, and as the Senior Director of Research and Evaluation at the Center for Evidence-Based Implementation and Research (CEBIR). She holds a Ph.D. in implementation science with a focus on health services, an MPH in Global Health and an MSW. Her research has focused on health systems, mental health, and integration of technology to improve population-level outcomes.

About #Podcast:
The #FutureOfData podcast is a conversation starter that brings leaders, influencers and leading practitioners on the show to discuss their journeys in creating the data-driven future.

Want to sponsor?
Email us @ info@analyticsweek.com

Keywords:
#FutureOfData #DataAnalytics #Leadership #Podcast #BigData #Strategy

Source: #Compliance and #Privacy in #Health #Informatics by @BesaBauta by v1shal

Sep 20, 18: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Human resource

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ AnalyticsWeek BYTES]

>> Map of US Hospitals and their Process of Care Metrics by bobehayes

>> What Is A Creative Data Scientist Worth? by analyticsweekpick

>> Creative Destruction and Risk Taking by ehenry

Wanna write? Click Here

[ FEATURED COURSE]

Python for Beginners with Examples


A practical Python course for beginners with examples and exercises…. more

[ FEATURED READ]

Antifragile: Things That Gain from Disorder


Antifragile is a standalone book in Nassim Nicholas Taleb’s landmark Incerto series, an investigation of opacity, luck, uncertainty, probability, human error, risk, and decision-making in a world we don’t understand. The… more

[ TIPS & TRICKS OF THE WEEK]

Winter is coming, warm your Analytics Club
Yes and yes! As we head into winter, what better way to warm up than to talk about our increasing dependence on data analytics in decision making. Data- and analytics-driven decision making is rapidly working its way into our core corporate DNA, yet we are not building practice grounds to test those models fast enough. Snug-looking models can hide nails that inflict uncharted pain if they go unchecked. This is the right time to start thinking about putting an Analytics Club [a data analytics center of excellence] in your workplace to lab out best practices and provide a test environment for those models.

[ DATA SCIENCE Q&A]

Q:Which kernels do you know? How to choose a kernel?
A: * Gaussian kernel
* Linear kernel
* Polynomial kernel
* Laplace kernel
* Esoteric kernels: string kernels, chi-square kernels
* If number of features is large (relative to number of observations): SVM with linear kernel ; e.g. text classification with lots of words, small training example
* If number of features is small, number of observations is intermediate: Gaussian kernel
* If number of features is small, number of observations is small: linear kernel
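The kernels listed above are easy to write out directly. A minimal NumPy sketch (the bandwidth `sigma` and the polynomial settings are illustrative defaults, not prescriptions):

```python
import numpy as np

def linear_kernel(x, y):
    # Plain inner product: similarity in the original feature space
    return np.dot(x, y)

def polynomial_kernel(x, y, degree=3, coef0=1.0):
    # Implicitly maps into the space of all monomials up to `degree`
    return (np.dot(x, y) + coef0) ** degree

def gaussian_kernel(x, y, sigma=1.0):
    # Also called the RBF kernel: similarity decays with squared distance
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def laplace_kernel(x, y, sigma=1.0):
    # Like the Gaussian kernel, but decays with the L1 distance instead
    return np.exp(-np.sum(np.abs(x - y)) / sigma)

x = np.array([1.0, 2.0])
y = np.array([2.0, 0.0])
print(linear_kernel(x, y))    # 2.0
print(gaussian_kernel(x, x))  # 1.0: identical points are maximally similar
```

The rules of thumb above then map onto a choice of kernel function: a linear kernel when features outnumber observations (e.g. text), a Gaussian kernel when observations are plentiful relative to features.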

Source

[ VIDEO OF THE WEEK]

@RCKashyap @Cylance on State of Security & Technologist Mindset #FutureOfData #Podcast


Subscribe to  Youtube

[ QUOTE OF THE WEEK]

For every two degrees the temperature goes up, check-ins at ice cream shops go up by 2%. – Andrew Hogue, Foursquare

[ PODCAST OF THE WEEK]

@TimothyChou on World of #IOT & Its #Future Part 2 #FutureOfData #Podcast


Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

The roughly 1.8 zettabytes of data created in 2011 is equivalent to every person in the US tweeting three tweets per minute for 26,976 years.

Sourced from: Analytics.CLUB #WEB Newsletter

CMS Predictive Readmission Models ‘Not Very Good’

Researchers find that functional status, rather than comorbidity, is a better predictor of whether someone will be readmitted to the hospital.

The way the Centers for Medicare & Medicaid Services predicts readmissions is likely neither the most accurate nor the fairest, researchers at Harvard Medical School claim.

A study published in the May issue of the Journal of General Internal Medicine found that functional status, rather than comorbidities, was a better predictor of whether someone would be readmitted to the hospital.

“This raises a question of whether Medicare is really using the best predictors to really understand readmission,” as well as questions about how fairly hospitals are being financially penalized, says principal investigator Jeffrey Schneider, MD, medical director of the Trauma, Burn and Orthopedic Program at Spaulding Rehabilitation Hospital in Boston and assistant professor of physical medicine and rehabilitation at Harvard Medical School.


Jeffrey Schneider, MD

Schneider points out that CMS fined more than 2,200 hospitals a total of $280 million in 2013 for excess 30-day hospital readmissions, so having accurate readmission models is critical.

But the ones CMS uses “are not very good predictive models, and they have relied heavily on simple demographic data like age and gender and comorbidities,” he says.

Moreover, “there’s mounting evidence that function is a good predictor of all sorts of hospital outcomes.”

The researchers conducted a retrospective study of 120,957 patients in the Uniform Data System for Medical Rehabilitation database who were admitted to inpatient rehabilitation facilities under the medically complex impairment group code between 2002 and 2011.

Schneider says they chose to study this “medically complex” population “because it is heterogeneous and we think well-represents a wide swath of patients who are in a hospital for medical reasons.”

“Rehabilitation hospitals routinely collect functional measures and that data is available in a large administrative database,” he says. The researchers measured functional status using the Functional Independence Measure (FIM), which looks at 18 tasks such as eating, dressing, bathing, toileting, grooming, and climbing stairs. Each of the 18 items is rated on a seven-point scale from completely dependent on someone else for help to totally independent.

FIM data is collected on a patient’s admission to a rehab facility—which is usually on the same day as their discharge from an acute care facility. “In that way it’s also a surrogate marker of their functional status when they left acute care,” he says.
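As a rough sketch of how an FIM total works: with 18 items each rated 1 to 7, totals range from 18 (fully dependent) to 126 (fully independent). The ratings below are for a hypothetical patient, not study data:

```python
# The FIM rates 18 daily-living tasks (eating, dressing, bathing, stairs, ...)
# from 1 (total assistance) to 7 (complete independence).
ratings = [7, 6, 5, 6, 7, 4, 5, 6, 7, 5, 4, 6, 5, 7, 6, 5, 4, 6]
assert len(ratings) == 18 and all(1 <= r <= 7 for r in ratings)

# The total is just the sum of the item ratings, bounded by 18 and 126
total = sum(ratings)
print(total)
```

Per the study's finding, lower totals (poorer functional status) would flag patients at higher risk of readmission.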

Function or Comorbidity?

Researchers built models based on functional status and gender to predict readmission at three, seven, and 30 days, and compared them to three different models based on comorbidities and gender.

“We really just wanted to answer this question: If function was a better measure of readmission than comorbidity,” Schneider says. “We didn’t seek to build the best model.”

The researchers then determined the c-statistic—the measure of a model’s overall ability to predict an outcome, which ranges from 0.5 (chance) to 1 (perfect predictor)—of the models.

They found that the model with gender and function was significantly better at predicting readmissions, Schneider says.

Models based on function and gender for three-, seven-, and 30-day readmissions (c-statistics 0.691, 0.637, and 0.649, respectively) performed significantly better than even the best-performing model based on comorbidities and gender (c-statistics 0.572, 0.570, and 0.573, respectively).

Even adding comorbidities to the function-based models didn’t help much, creating c-statistic differences of only 0.013, 0.017, and 0.015 for 3-, 7-, and 30-day readmissions, respectively, for the best-performing model.
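The c-statistic the researchers report can be computed directly as the probability that a randomly chosen positive case outranks a randomly chosen negative one. A minimal sketch (the toy labels and scores are illustrative, not study data):

```python
def c_statistic(labels, scores):
    """Probability that a randomly chosen readmitted patient (label 1) receives
    a higher predicted risk than a non-readmitted one (label 0). Ties count as
    half. 0.5 is chance; 1.0 is a perfect predictor."""
    positives = [s for l, s in zip(labels, scores) if l == 1]
    negatives = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in positives for n in negatives)
    return wins / (len(positives) * len(negatives))

# Toy example: higher scores should flag the readmitted patients
labels = [1, 0, 1, 0, 0]
scores = [0.8, 0.3, 0.6, 0.6, 0.1]
print(c_statistic(labels, scores))
```

On this scale, the gap between the function-based models (around 0.65-0.69) and the comorbidity-based ones (around 0.57) is substantial.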

‘It’s So Intuitive’

Why is function a good predictor? Schneider says it may represent something else, such as the severity of a patient’s illness. Cancer patients, for instance, have a wide degree of functional statuses depending on how sick they are. In this way, “it’s so intuitive” that function would be a good predictor of readmissions, he says. If you can’t care for yourself, you’ll likely end up back in the hospital.

In addition, “comorbidity is a fixed variable,” Schneider says, but function is not. And since function is a better predictor of readmission, even at shorter time intervals, assessing a patient’s functional status and doing things to improve it could be a way of reducing preventable readmissions, especially the three- and seven-day readmissions.

“Acute care hospitals are not routinely collecting a functional measure of their patients,” Schneider says. He also points out that recent research on functional interventions in acute care hospitals, such as early mobilization in the ICU, has shown they improve patient outcomes.

Next Steps

“I think the next wave for hospitals… is [thinking about] how to make use of this information,” Schneider says, by piloting functional interventions and determining functional measures at discharge to help with risk-stratifying for readmissions.

On a larger scale, there’s also the policy perspective that CMS’s readmissions models aren’t as good as they could be. Schneider says he and his colleagues are conducting another, even larger study, using the same framework but looking at all patients in rehabilitation hospitals, not only medically complex ones. He says it hasn’t been published yet, but the findings will be pretty similar.

“I think it’s really worthwhile,” he says.

 

To read the original article on HealthLeaders Media, click here.

Source

Sep 13, 18: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Data interpretation

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ AnalyticsWeek BYTES]

>> June 12, 2017 Health and Biotech analytics news roundup by pstein

>> 20 Best Practices for Customer Feedback Programs: Strategy and Governance by bobehayes

>> Anita Sarkeesian’s brave attempt to restore women equality in gaming by d3eksha

Wanna write? Click Here

[ NEWS BYTES]

>> New Research Report on Big Data Security Market, 2017-2027 – Latest Market Reports By Abhishek Budholiya (press release) (blog) Under Big Data Security

>> Software-defined networking is turning concern about security in the cloud on its head – Help Net Security Under Cloud Security

>> How Big Data Science and Analytics is the Lure for Businesses Today – Entrepreneur Under Big Data Analytics

More NEWS ? Click Here

[ FEATURED COURSE]

Deep Learning Prerequisites: The Numpy Stack in Python


The Numpy, Scipy, Pandas, and Matplotlib stack: prep for deep learning, machine learning, and artificial intelligence… more

[ FEATURED READ]

Big Data: A Revolution That Will Transform How We Live, Work, and Think


“Illuminating and very timely . . . a fascinating — and sometimes alarming — survey of big data’s growing effect on just about everything: business, government, science and medicine, privacy, and even on the way we think… more

[ TIPS & TRICKS OF THE WEEK]

Finding a success in your data science ? Find a mentor
Yes, most of us don’t feel the need, but most of us really could use one. Since most data science professionals work in isolation, getting an unbiased perspective is not easy. It is also often unclear how a data science career will progress. A network of mentors addresses these issues, giving data professionals an outside perspective and an unbiased ally. It’s extremely important for successful data science professionals to build a mentor network and use it throughout their careers.

[ DATA SCIENCE Q&A]

Q:What is the difference between supervised learning and unsupervised learning? Give concrete examples.

A: * Supervised learning: inferring a function from labeled training data
* Supervised learning: predictor measurements associated with a response measurement; we wish to fit a model that relates both for better understanding the relation between them (inference) or with the aim to accurately predicting the response for future observations (prediction)
* Supervised learning: support vector machines, neural networks, linear regression, logistic regression, extreme gradient boosting
* Supervised learning examples: predict the price of a house based on its area and size; churn prediction; predict the relevance of search engine results.
* Unsupervised learning: inferring a function to describe hidden structure of unlabeled data
* Unsupervised learning: we lack a response variable that can supervise our analysis
* Unsupervised learning: clustering, principal component analysis, singular value decomposition; identify group of customers
* Unsupervised learning examples: find customer segments; image segmentation; classify US senators by their voting records.
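The contrast above fits in a few lines: a supervised fit learns a function from labeled examples, while an unsupervised routine finds structure in unlabeled data. This is a toy sketch with made-up numbers (a least-squares line fit next to a crude two-center k-means):

```python
import numpy as np

# Supervised: we have a labeled response (prices) and fit a function to predict it
sizes = np.array([50.0, 80.0, 120.0, 160.0])     # house sizes in m^2
prices = np.array([150.0, 240.0, 360.0, 480.0])  # constructed so price = 3 * size
slope, intercept = np.polyfit(sizes, prices, 1)
print(round(slope, 2))  # the learned relation between feature and response (~3.0)

# Unsupervised: no labels; find structure (two customer segments) in raw spend data
spend = np.array([10.0, 12.0, 11.0, 95.0, 100.0, 98.0])
centers = np.array([spend.min(), spend.max()])   # crude k-means init with k=2
for _ in range(10):
    # Assign each point to its nearest center, then move centers to cluster means
    assign = np.abs(spend[:, None] - centers[None, :]).argmin(axis=1)
    centers = np.array([spend[assign == k].mean() for k in (0, 1)])
print(centers)  # two segment centers: low spenders vs. high spenders
```

The supervised half has a response variable to check itself against; the unsupervised half can only report the structure it found, which is exactly the distinction in the bullets above.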

Source

[ VIDEO OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData with Jon Gibs(@jonathangibs) @L2_Digital


Subscribe to  Youtube

[ QUOTE OF THE WEEK]

Numbers have an important story to tell. They rely on you to give them a voice. – Stephen Few

[ PODCAST OF THE WEEK]

@BrianHaugli @The_Hanover ?on Building a #Leadership #Security #Mindset #FutureOfData #Podcast


Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

In late 2011, IDC Digital Universe published a report indicating that some 1.8 zettabytes of data will be created that year.

Sourced from: Analytics.CLUB #WEB Newsletter

Wrapping my head around Big-data problem

Last week at a meetup in Boston, I was asked to give my 2 cents on big data with an analogy. The idea was to make the problem understandable even to a 12-year-old, and that made me think. So, based on everything I have gathered and seen from my experience, what exactly is big data, and is there a simple analogy to explain it to people?

Here are my 2 cents. Wait for it… wait for it… “Your big-data problem is like your garbage problem.”

Garbage:

Things that we don’t know what to do with or how to use.

Things that we have not used for ages.

Things that we have used enough and found it is of no further use.

Someone else’s garbage that might have something that is of some use to you.

Big data includes:

Data sets that we capture but are not sure what to do with or how to use.

Data sets sitting out there that have not been monitored or used for ages.

Data sets containing information that we think is sufficient for helping us make business decisions.

And data sets captured by others that might be of some strategic relevance to us.

I have been talking with a couple of Fortune 100 organizations’ big-data team members and asked them about their big-data initiatives. The findings were clear: big data itself is not clear enough. Let me try to explain what is going on.

Say you have tons of garbage that you are concerned about, and you want to make sure nothing useful is thrown out. Now you are handed a shiny glove (a tool) to help you help yourself by digging through your data. Does this picture look right? This is what most companies are struggling with. Sure, they can deal with their garbage, but it’s not their core competency; filtering through that garbage is not your core job. This puts you at high risk of failing.

Very few smart companies are doing it right by calling in experts to look at their big data and help with the cleansing. This lets them do it more efficiently: you save on trial-and-error costs, and you get to best practices first and adopt them in your core business sooner.

So, it is important for companies to realize who can best serve as their waste-management professionals. It won’t hurt to build in some redundancy to help reach the best solutions faster and minimize the risk of failure.

Therefore, garbage is the best analogy I have found to explain the big-data problem and how to resolve it. That said, I am all ears for a better analogy that simplifies the meaning and sheds appropriate light on this issue.

Stay tuned, I will be posting a playbook for helping companies get started with resolving their big-data problem faster and cheaper.

 

Source by v1shal

Types of AI: From Reactive to Self-Aware [Infographics]

Artificial intelligence (AI) is intelligence exhibited by machines or software. It is also the name of the academic field that studies how to create computers and computer software capable of intelligent behaviour. There is an interesting infographic describing the types of AI: Reactive Machines, Limited Memory, Theory of Mind and Self-Aware.

source: Futurism

Source