Big Data Explained in Less Than 2 Minutes – To Absolutely Anyone

There are some things that are so big that they have implications for everyone, whether we want them to or not. Big Data is one of those concepts: it is completely transforming the way we do business and impacting most other parts of our lives.

It’s such an important idea that everyone from your grandma to your CEO needs to have a basic understanding of what it is and why it’s important.

What is Big Data?

“Big Data” means different things to different people and there isn’t, and probably never will be, a commonly agreed upon definition out there. But the phenomenon is real and it is producing benefits in so many different areas, so it makes sense for all of us to have a working understanding of the concept.

So here’s my quick and dirty definition:

The basic idea behind the phrase ‘Big Data’ is that everything we do is increasingly leaving a digital trace (or data), which we (and others) can use and analyse. Big Data therefore refers to that data being collected and our ability to make use of it.

I don’t love the term “big data” for a lot of reasons, but it seems we’re stuck with it. It’s basically a ‘stupid’ term for a very real phenomenon – the datafication of our world and our increasing ability to analyze data in a way that was never possible before.

Of course, data collection itself isn’t new. We as humans have been collecting and storing data since as far back as 18,000 BCE. What’s new are the recent technological advances in chip and sensor technology, the Internet, cloud computing, and our ability to store and analyze data, which have changed the quantity of data we can collect.

Things that have been a part of everyday life for decades — shopping, listening to music, taking pictures, talking on the phone — now happen more and more wholly or in part in the digital realm, and therefore leave a trail of data.

The other big change is in the kind of data we can analyze. It used to be that data fit neatly into tables and spreadsheets, things like sales figures and wholesale prices and the number of customers that came through the door.

Now data analysts can also look at “unstructured” data like photos, tweets, emails, voice recordings and sensor data to find patterns.
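
To make that concrete, here is a minimal sketch (in Python, with made-up tweets) of pulling one simple pattern, trending hashtags, out of unstructured text:

```python
# Minimal sketch: finding a pattern in unstructured text (e.g., tweets).
# The sample tweets are invented for illustration.
from collections import Counter
import re

tweets = [
    "Loving the new espresso blend at the corner cafe #coffee #morning",
    "Queue out the door again... #coffee worth the wait",
    "Switched to tea this week, sorry #coffee fans #tea",
]

# Hashtags are one bit of structure hiding inside otherwise free-form text.
hashtags = Counter(
    tag.lower() for tweet in tweets for tag in re.findall(r"#(\w+)", tweet)
)

print(hashtags.most_common(3))  # [('coffee', 3), ('morning', 1), ('tea', 1)]
```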

How is it being used?

As with any leap forward in innovation, the tool can be used for good or nefarious purposes. Some people are concerned about privacy, as more and more details of our lives are being recorded and analyzed by businesses, agencies, and governments every day. Those concerns are real and not to be taken lightly, and I believe that best practices, rules, and regulations will evolve alongside the technology to protect individuals.

But the benefits of big data are very real, and truly remarkable.

Most people have some idea that companies are using big data to better understand and target customers. Using big data, retailers can predict what products will sell, telecom companies can predict if and when a customer might switch carriers, and car insurance companies understand how well their customers actually drive.

It’s also used to optimize business processes. Retailers are able to optimize their stock levels based on what’s trending on social media, what people are searching for on the web, or even weather forecasts. Supply chains can be optimized so that delivery drivers use less gas and reach customers faster.
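
As a toy illustration of that kind of optimization, here is a hedged sketch of adjusting a reorder quantity from simple demand signals; the signals and weights are invented for illustration, not a real retailer’s model:

```python
# Hypothetical sketch: scale a baseline reorder by social-media buzz and weather.
# The 0.5 and 1.2 weights are invented; a real system would fit them to sales data.

def reorder_quantity(base_qty: int, trend_score: float, rain_forecast: bool) -> int:
    """Return an adjusted reorder quantity for one product."""
    qty = base_qty * (1.0 + 0.5 * trend_score)  # trend_score of 1.0 adds 50% stock
    if rain_forecast:
        qty *= 1.2  # e.g., umbrellas sell better in the rain
    return round(qty)

print(reorder_quantity(base_qty=100, trend_score=0.8, rain_forecast=True))  # 168
```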

But big data goes way beyond shopping and consumerism. Big data analytics enable us to find new cures and better understand and predict the spread of diseases. Police forces use big data tools to catch criminals and even predict criminal activity, and credit card companies use big data analytics to detect fraudulent transactions. A number of cities are even using big data analytics with the aim of turning themselves into Smart Cities, where a bus would know to wait for a delayed train and where traffic signals predict traffic volumes and operate to minimize jams.

Why is it so important?

The biggest reason big data is important to everyone is that it’s a trend that’s only going to grow.

As the tools to collect and analyze the data become less and less expensive and more and more accessible, we will develop more and more uses for it — everything from smart yoga mats to better healthcare tools and a more effective police force.

And, if you live in the modern world, it’s not something you can escape. Whether you’re all for the benefits big data can bring, or worried about Big Brother, it’s important to be aware of the phenomenon and tuned in to how it’s affecting your daily life.

What are your biggest questions about big data? I’d love to hear them in the comments below — and they may inspire future posts to address them.

Big big love: how big data’s influencing the future of the online dating scene

Romance and big data have a lot more in common than you might think. Though the world of tech and data may seem an odd place to uncover love, both online dating and big data work to personalise what a person or brand has to offer, matching and targeting it uniquely to appeal to that one special individual who’ll want exactly what’s being advertised.

In many cases, for both singles on the dating market and brands on the commercial one, achieving success – using big data to reach the right individual or unique prospect – is the start of (hopefully) lifelong, trusted, better-matched relationships, and could be the future of love and online dating.

Data set and match

A facilitator of the modern way of life, big data helps us on a daily basis in all kinds of areas, from retail to healthcare, finance and more. Data is not only relevant to advertisers and marketers however. With infinite uses, targeted information helps us live safer, healthier, more personalised lives – including our love lives – matching the perfect partner to the perfect person at the perfect time. After all, better optimised and analysed data means one in six of all US marriages now occur as a result of online dating.*

Photo courtesy of Pixabay

What makes dating data big data?

Defining exactly what makes data ‘big’ data can be confusing.

For example, if people simply visit dating sites and input information, this does not constitute big data. Big data is defined by the conjunction of three things: volume, velocity and variety.

From the original computer punch-card dating-match systems, which first appeared in the 1960s,** to the comprehensive online dating sites we have today, the process of using data to pair people has become increasingly streamlined and effective. More and more people are generating data with volume, variety and velocity online, on dating and social sites and through apps (according to the article “Can Big Data + Big Dating = True Love?”, dating app use is growing faster than all other apps combined). As a result, the data created is allowing matchmaking services to target more accurately than ever before.

Regardless of platform, big data infrastructures already support dating sites and apps**, making it possible to analyse and manage voluminous data sets (terabytes of information, according to eHarmony). In the future, big data will likely play a more active role in enhancing match accuracy right from the start of data collation, just as in marketing: using real-time information about people, their unique backgrounds, hobbies and more from a variety of sources, in addition to the self-inputted information, social profile and mobile app data currently used.

Analysing our secret desires and enhancing profile accuracy

The ability to use data from a wide range of sources as dating-service consumers opt in will likely be key to furthering the relationship between big data and dating.

On dating sites, users currently input the explicit features they prefer and state what they want to know when looking for a partner, say, commitment views, age, hobbies and so on. (Profile creation works on the same principle.) But often, our actions don’t quite match our words. For example, you may have stated (and think) you’re a fan of pure reggae, but your iTunes history may speak differently. What if, in future, purchasing data could be used, on an opt-in basis, to match you more accurately with others with similar tastes?

Analytics, as an informational facilitator, already uses behavioural matching to enhance accuracy in a similar way. Users may have stated, and believe, that they prefer brunettes, but might actually click on more profiles of redheads without realising it. It’s here that data really advances the way online dating works: analysing what you’re actually looking at to learn what you’re really looking for, for the best match accuracy.
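
Here is a minimal sketch of that idea; the clicks, profiles and attribute are invented for illustration and not drawn from any real dating service:

```python
# Compare a user's stated preference with the preference revealed by clicks.
from collections import Counter

stated_preference = "brunette"

# Hair colour of each profile the user actually clicked on (invented data).
clicked_profiles = ["redhead", "brunette", "redhead", "redhead", "blonde", "redhead"]

revealed = Counter(clicked_profiles)
top_attribute, clicks = revealed.most_common(1)[0]

if top_attribute != stated_preference:
    print(f"Stated: {stated_preference}, but behaviour favours {top_attribute} "
          f"({clicks} of {len(clicked_profiles)} clicks)")
```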

Helping the course of love run smooth

Ultimately, analysing just what makes people fall for each other is not an exact science, and compatibility on meeting is still the decider for many relationships. So while, right now, data can’t quite predict results 100% accurately, it can give romance a helping hand. But give it time – in a few years, data may even be able to solve that.

 

*Data courtesy of Acxiom UK’s whitepaper ‘Searching For Balance In The Use Of Personal Data’

Other references

**Referenced from the Smithsonian.com article – “How Big Data Has Changed Dating”

http://www.smithsonianmag.com/ideas-innovations/How-Big-Data-Has-Ch…

http://readwrite.com/2013/02/14/big-data-big-dating-true-love#awesm…

http://gigaom.com/2011/02/11/okcupid-demystifies-dating-with-big-data/

http://www.zdnet.com/eharmony-translates-big-data-into-love-and-cas…

How the Pharmaceuticals Industry Uses Big Data

Like other sectors using data to transform – including the music industry, professional basketball, beverage manufacturers and even online match-makers – big pharma is into big data.

The pharmaceuticals industry collects massive amounts of data. Estimates are measured in petabytes – tens of billions of records of prescriptions and other transactions, hundreds of millions of patient records, hundreds of thousands of data sources.

How the pharmaceuticals industry uses big data is unlike other industries’ approach, however. Pharma companies use data to fail as soon as they can – that is, through clinical research trials, pharmas aim to figure out which drugs don’t work as soon as possible, so they can focus on developing and marketing those that do.

For these companies, data holds the key to improving…well, everything. In the near term, data determines which drugs move through clinical trials and which populations may be most receptive to new medicines. Longer term, data is critical to fulfilling the promise of personalized medicine based on the genetic profiles of individual patients.

The field of bioinformatics largely reflects the emergence of big data within a biological and scientific context. In addition to bioinformatics, there is a huge opportunity to positively reinforce healthy behaviors through tracking patient outcomes.

 

Imagine what kind of brand loyalty pharmas could achieve if patients could easily access a critical health reading over time such as blood pressure. Seeing those numbers declining over time with the help of a medication could be a powerful message to share.

Interestingly, many stakeholders within the industry – ranging from marketing and innovation teams, to clinical and research staff – now categorize big data broadly in terms of “real-world” and “research” (or clinical) data, as highlighted in this report from Health Affairs.

Research data is collected during the trial process:

Clinical trial data is often collected for the specific purpose of obtaining regulatory approval for a new medicine, or a new indication for a medicine. Clinical trials are rigorously designed, often focused on a highly specific patient population, take significant time to complete, and can be very expensive to complete.

Real-world data, on the other hand is “any data that is not captured within the context of a clinical trial and is not explicitly intended for research purposes.”

Real-world data includes social media posts, which can be incredibly valuable for pharmaceutical companies. Drug companies can supplement clinical findings with the actual experiences of people taking the drugs.

Such “crowdsourced” information about medications can provide insights into surprising side effects or new uses of treatment. These new indications can often become billion-dollar businesses in their own right – as shown by the long history of “happy accidents” in drug development.
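
As a rough illustration, the sketch below flags posts that mention a hypothetical drug name alongside symptom keywords; real pharmacovigilance pipelines use far more careful NLP and medical coding than this:

```python
# Keyword-based sketch for spotting possible adverse-event mentions.
# Drug name, symptom lexicon and posts are all invented for illustration.

drug = "examplazol"  # hypothetical drug name
symptoms = {"dizziness", "nausea", "rash", "headache"}

posts = [
    "Week two on examplazol and the dizziness is rough",
    "examplazol cleared things up, no complaints",
    "Anyone else get a rash with examplazol?",
]

for post in posts:
    words = set(post.lower().replace("?", "").split())
    hits = symptoms & words
    if drug in words and hits:
        print(f"possible adverse event {sorted(hits)}: {post!r}")
```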

Of course, if you have good customer intelligence and know how to capture, manage and analyze a wide range of data, then maybe that’s not an accident. This is similar to what companies in many other industries face – from media and entertainment and telecommunications to financial services.

Pharmas must aggregate data – structured and unstructured – from multiple sources to get a clear view of who their consumers are and what they want. They can also test certain offerings and may discover new uses of existing products.

Undoubtedly, pharmaceuticals face very intense regulatory scrutiny – so matters of data ownership, security and confidentiality are tricky. But it’s interesting that an industry that succeeds largely by failing faster is increasing its customer intelligence through more channels than ever.

Big Data Is No Longer Confined to the Big Business Playbook

Only 18 percent of small businesses and just more than half (57 percent) of mid-size companies use business intelligence and analytics solutions, according to market research firm The SMB Group.

What about the others?

Many smaller businesses are reluctant to invest in leading-edge technologies. Limited capital or the lack of the right staff might prompt even the most forward-thinking companies to avoid innovations or postpone such a move until they reach a certain revenue or profit goal.

It’s an erroneous notion among small business owners and decision-makers that big data is too complex or something only big companies can afford to try out. Even the name – the “big” in big data – can seem off-putting. But it’s not as tough to dive into big data as small companies might think, and the payoff can be significant.

Advances in user interfaces, automation and cognitive computing are removing the barriers to adopting big-data tools, which are now available at costs small businesses can afford. How does free sound?

Can you imagine the impact when a small-business owner is able to sort through volumes of internal and external data about his or her business and then let any employee, in any role, make insightful decisions and engage customers more effectively?

What if I told you that you don’t have to imagine, and that tapping into critical data that could change the way your company does business is already within reach?

Today, any employee can use analytics to make data-driven decisions that directly address his or her business problems without having to worry about the underlying technology or needing an in-house data scientist with specialty skills in analytics.

Solutions are now available (including Watson Analytics developed by my company, IBM) that are designed not only for data scientists and analysts but for every business professional who uses data.

There are extremely powerful tools that can help knowledge workers find insightful perspectives and answer a whole host of questions they might have about their area of business using natural language, much like using a search engine but far more meaningful.

This means smaller businesses can take advantage of their speed and customer proximity and, combined with new data insights, really be game changers.

IBM estimated in 2013 that with the rapid spread of mobile devices and the “Internet of Things,” the world is generating more than 2.5 billion gigabytes of data every single day. These vast sets of data are an organization’s most precious natural resource — whether that data is structured in databases or is the kind of information that comes from blog posts, customer-support chat sessions or even social networks like Twitter.

When analytics is applied to big data, an organization can change the way it makes decisions. Business processes improve, customer engagement becomes more personalized and new markets can be created as needs emerge.

A good example of this is Tacoma, Wash.-based Point Defiance Zoo & Aquarium, a client of IBM. On a daily basis, millions of data records are generated about visitors’ exhibit preferences, along with significant consumer feedback generated on social channels such as Facebook.

The zoo used big-data analytics to uncover patterns and trends in its data to help drive its ticket sales and enhance visitor experiences. As a result, Point Defiance Zoo’s online ticket sales grew more than 700 percent in one year.

This is just one example of an organization using its data to drive decisions and dramatically increase revenue – even though it has fewer than 100 people and no data scientists on its payroll.

Small business owners can test out big-data analytics and see the benefits for themselves. The following steps are ways that managers can get started and reap the benefits:

 

1. Identify your challenges. 

Understand the opportunity that big data and analytics can present for your company. Set some goals, whether they are saving on costs, increasing return on investment, or driving growth and expansion.

2. Get to know your data.

Start by looking at the data your organization is creating and understanding where it’s coming from, including from social networks, business activities and software applications for sales or marketing. Knowing what you have to work with is a critical step.

3. Identify the information that’s most useful. 

Based on the data that your organization is already generating, figure out which types will have the most impact on your business.

Consider these questions: Would mining customer sentiment on social networks help to improve product development and customer service? Can you use sales and marketing data to improve growth and revenue? (A minimal sentiment-mining sketch follows at the end of this step.)

Focus on your customers. Historically, the main focus of IT has been on automating and driving cost savings in the back-end systems of record.

Today, the focus is increasingly shifting to systems of engagement. When diving into your data, think about how to drive top-line revenue growth by using data to find new customers and partners and deliver real-time value to them in unique and unexpected ways.
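
Picking up the sentiment question from the start of this step, here is a minimal lexicon-based sketch; the word lists and posts are invented, and real tools use trained models, but even a crude score like this can surface trends:

```python
# Toy lexicon-based sentiment scoring over customer posts.
import re

positive = {"love", "great", "fast", "helpful"}
negative = {"slow", "broken", "rude", "refund"}

posts = [
    "Love the new checkout, so fast!",
    "Support was rude and the app is still broken",
    "Asked for a refund, shipping was slow",
]

for post in posts:
    words = set(re.findall(r"[a-z]+", post.lower()))
    score = len(words & positive) - len(words & negative)
    print(f"{score:+d}  {post}")  # +2, -2, -2 for the samples above
```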

 

4. Explore. 

Choosing the right technology tailored to your organization’s needs will be crucial to your company’s big-data analytics success. There are free versions of powerful solutions available today that provide a good sample of their features, so you can get a taste of what they can do. These features will often provide enough benefit to make a difference immediately.

5. Consider using the cloud.

The rise of the cloud is having a dramatic impact, putting big-data analytics technologies within reach for small businesses and startups. By putting analytics in the cloud, you keep cost and infrastructure requirements minimal. You can drive down costs and redirect the resulting savings to product development and customer service while extracting critical insights for your business.

6. Tap the power of peers.

Communities like StartupNation or Midsize Insider are ideal forums for investigating new solutions and posing questions. They are also a great way to identify local IT services companies that have a level of expertise in analytics technologies and can work with you to apply them to your particular business need.

#Compliance and #Privacy in #Health #Informatics by @BesaBauta

In this podcast, @BesaBauta from MercyFirst talks about the compliance and privacy challenges faced in a hyper-regulated industry. Drawing on her experience in health informatics, Besa shares some best practices and challenges faced by data science groups in health informatics and by similar groups in regulated spaces. This podcast is great for anyone looking to learn about compliance and privacy challenges in data science.

Besa’s Recommended Read:
The Art of War by Sun Tzu, translated by Lionel Giles: https://amzn.to/2Jx2PYm

Podcast Link:
iTunes: http://math.im/itunes
GooglePlay: http://math.im/gplay

Besa’s BIO:
Dr. Besa Bauta is the Chief Data Officer and Chief Compliance Officer for MercyFirst, a social service organization providing health and mental health services to children and adolescents in New York City. She oversees the Research, Evaluation, Analytics, and Compliance for Health (REACH) division, including data governance and security measures, analytics, risk mitigation, and policy initiatives.
She is also an Adjunct Assistant Professor at NYU, and previously worked as a Research Director for a USAID project in Afghanistan, and as the Senior Director of Research and Evaluation at the Center for Evidence-Based Implementation and Research (CEBIR). She holds a Ph.D. in implementation science with a focus on health services, an MPH in Global Health and an MSW. Her research has focused on health systems, mental health, and integration of technology to improve population-level outcomes.

About #Podcast:
The #FutureOfData podcast is a conversation starter that brings leaders, influencers and leading practitioners on the show to discuss their journeys in creating the data-driven future.

Want to sponsor?
Email us @ info@analyticsweek.com

Keywords:
#FutureOfData #DataAnalytics #Leadership #Podcast #BigData #Strategy

CMS Predictive Readmission Models ‘Not Very Good’

Researchers find that functional status, rather than comorbidity, is a better predictor of whether someone will be readmitted to the hospital.

The way the Centers for Medicare & Medicaid Services predicts readmissions is likely neither the most accurate nor the fairest, researchers at Harvard Medical School claim.

A study published in the May issue of the Journal of General Internal Medicine found that functional status, rather than comorbidities, was a better predictor of whether someone would be readmitted to the hospital.

“This raises a question of whether Medicare is really using the best predictors to really understand readmission,” as well as questions about how fairly hospitals are being financially penalized, says principal investigator Jeffrey Schneider, MD, medical director of the Trauma, Burn and Orthopedic Program at Spaulding Rehabilitation Hospital in Boston and assistant professor of physical medicine and rehabilitation at Harvard Medical School.

Jeffrey Schneider, MD

Schneider points out that CMS fined more than 2,200 hospitals a total of $280 million in 2013 for excess 30-day hospital readmissions, so having accurate readmission models is critical.

But the ones CMS uses “are not very good predictive models, and they have relied heavily on simple demographic data like age and gender and comorbidities,” he says.

Moreover, “there’s mounting evidence that function is a good predictor of all sorts of hospital outcomes.”

The researchers conducted a retrospective study of 120,957 patients in the Uniform Data System for Medical Rehabilitation database who were admitted to inpatient rehabilitation facilities under the medically complex impairment group code between 2002 and 2011.

Schneider says they chose to study this “medically complex” population “because it is heterogeneous and we think well-represents a wide swath of patients who are in a hospital for medical reasons.”

“Rehabilitation hospitals routinely collect functional measures and that data is available in a large administrative database,” he says. The researchers measured functional status using the Functional Independence Measure (FIM), which looks at 18 tasks such as eating, dressing, bathing, toileting, grooming, and climbing stairs. Each of the 18 items is rated on a seven-point scale from completely dependent on someone else for help to totally independent.

FIM data is collected on a patient’s admission to a rehab facility—which is usually on the same day as their discharge from an acute care facility. “In that way it’s also a surrogate marker of their functional status when they left acute care,” he says.
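
For readers unfamiliar with the instrument, here is a toy sketch of how an 18-item FIM total is computed; the scores are invented and the item names only approximate the real instrument:

```python
# Each FIM item is rated 1 (fully dependent) to 7 (fully independent),
# so an 18-item total ranges from 18 to 126. Scores below are invented.
fim_items = {
    "eating": 6, "grooming": 5, "bathing": 4, "dressing_upper": 5,
    "dressing_lower": 4, "toileting": 5, "bladder": 6, "bowel": 6,
    "transfer_bed": 4, "transfer_toilet": 4, "transfer_tub": 3,
    "walk_wheelchair": 4, "stairs": 3, "comprehension": 7, "expression": 7,
    "social_interaction": 6, "problem_solving": 6, "memory": 6,
}

assert len(fim_items) == 18
assert all(1 <= score <= 7 for score in fim_items.values())

total = sum(fim_items.values())
print(f"FIM total: {total} (possible range 18-126)")  # 91 here
```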

Function or Comorbidity?

Researchers built models based on functional status and gender to predict readmission at three, seven, and 30 days, and compared them to three different models based on comorbidities and gender.

“We really just wanted to answer this question: was function a better measure of readmission than comorbidity?” Schneider says. “We didn’t seek to build the best model.”

The researchers then determined the c-statistic—the measure of a model’s overall ability to predict an outcome, which ranges from 0.5 (chance) to 1 (perfect predictor)—of the models.
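
For the statistically inclined, here is a hedged sketch of computing a c-statistic for a single-predictor readmission model; the data is simulated rather than the study’s actual FIM data:

```python
# Fit a one-variable readmission model on synthetic data and report its
# c-statistic (equivalently, the area under the ROC curve).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
function_score = rng.uniform(18, 126, n)               # FIM-like totals
risk = 1 / (1 + np.exp(0.05 * (function_score - 60)))  # lower function, higher risk
readmitted = rng.random(n) < risk

X = function_score.reshape(-1, 1)
model = LogisticRegression().fit(X, readmitted)
pred = model.predict_proba(X)[:, 1]

print(f"c-statistic: {roc_auc_score(readmitted, pred):.3f}")  # well above 0.5 here
```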

They found that the model with gender and function was significantly better at predicting readmissions, Schneider says.

Models based on function and gender for three-, seven-, and 30-day readmissions (c-statistics 0.691, 0.637, and 0.649, respectively) performed significantly better than even the best-performing model based on comorbidities and gender (c-statistics 0.572, 0.570, and 0.573, respectively).

Even adding comorbidities to the function-based models didn’t help much, creating c-statistic differences of only 0.013, 0.017, and 0.015 for three-, seven-, and 30-day readmissions, respectively, for the best-performing model.

‘It’s So Intuitive’

Why is function a good predictor? Schneider says it may represent something else, such as the severity of a patient’s illness. Cancer patients, for instance, have a wide range of functional statuses depending on how sick they are. In this way, “it’s so intuitive” that function would be a good predictor of readmissions, he says. If you can’t care for yourself, you’ll likely end up back in the hospital.

In addition, “comorbidity is a fixed variable,” Schneider says, but function is not. And since function is a better predictor of readmission, even at shorter time intervals, assessing a patient’s functional status and doing things to improve it could be a way of reducing preventable readmissions, especially the three- and seven-day readmissions.

“Acute care hospitals are not routinely collecting a functional measure of their patients,” Schneider says. He also points out that recent research on functional interventions in acute care hospitals, such as early mobilization in the ICU, shows that they can improve patient outcomes.

Next Steps

“I think the next wave for hospitals… is [thinking about] how to make use of this information,” Schneider says, by piloting functional interventions and determining functional measures at discharge to help with risk-stratifying for readmissions.

On a larger scale, there’s also the policy perspective that CMS’s readmissions models aren’t as good as they could be. Schneider says he and his colleagues are conducting another, even larger study using the same framework, but looking at all patients in a rehab hospital, not only medically complex ones. He says it hasn’t been published yet, but the findings will be pretty similar.

“I think it’s really worthwhile,” he says.

 

Wrapping my head around the big-data problem

Last week at a meetup in Boston, I was asked to give my two cents on big data with an analogy. The idea was to make the problem understandable even to a 12-year-old, and that made me think: based on everything I have gathered and seen from my experience, what exactly is big data, and is there a simple analogy to explain it to people?

Here are my two cents: wait for it… wait for it… “Your big-data problem is like your garbage problem.”

Garbage:

Things that we don’t know what to do with or how to use.

Things that we have not used for ages.

Things that we have used enough and found it is of no further use.

Someone else’s garbage that might have something that is of some use to you.

Big data includes:

Data sets that we capture but are not sure what to do with or how to use.

Data sets sitting out there that have not been monitored or used for ages.

Data sets comprising information that we think is sufficient for helping us make business decisions.

And data sets captured by others that might be of some strategic relevance to us.

I have been talking with a couple of Fortune 100 organizations’ big-data team members and asked about their big-data initiatives. The findings were clear: big data itself is not clear enough. Let me try to explain what is going on:

Say you have tons of garbage that you are concerned about, and you want to make sure nothing useful is thrown out. Now you are handed a shiny glove (tools) to help you help yourself by digging through your data. Does this picture look right? This is what most companies are struggling with. Sure, they can deal with their garbage, but it’s not their core competency. Your core job is not to filter through that garbage, and this puts you at high risk of failing.

Very few smart companies are doing it right by calling in experts to look at their big data and help them with cleansing. This helps them do it more efficiently: you save on trial-and-error costs, you get to best practices first, and you adopt them in your core business sooner.

So it is important for companies to realize who can best serve as their waste management professionals. It won’t even hurt if some redundancy is built in, to help get to the best solutions faster and minimize the risk of failure.

Therefore, garbage is the best analogy I have found to explain the big-data problem and how to resolve it. That said, I am all ears for a better analogy that simplifies the meaning and sheds appropriate light on this issue.

Stay tuned: I will be posting a playbook to help companies get started on resolving their big-data problems faster and cheaper.

 

Types of AI: From Reactive to Self-Aware [Infographics]

Artificial intelligence (AI) is intelligence exhibited by machines or software. It is also the name of the academic field that studies how to create computers and computer software capable of intelligent behaviour. The interesting infographic below covers the four types of AI: Reactive Machines, Limited Memory, Theory of Mind and Self-Aware.

Source: Futurism
