Nov 15, 18: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Tour of Accounting (Source)

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ AnalyticsWeek BYTES]

>> 6 Big Data Analytics Use Cases for Healthcare IT by analyticsweekpick

>> Sisense Hunch™ – Leadership Through Radical Innovation by analyticsweek

>> Next-generation supply & demand forecasting: How machine learning is helping retailers to save millions by analyticsweekpick

Wanna write? Click Here

[ NEWS BYTES]

>> Data Is The Foundation For Artificial Intelligence And Machine Learning – Forbes Under Machine Learning

>> How to Get Into “Internet of Things” Investments – Banyan Hill Publishing Under Internet Of Things

>> Beyond Big Data: The extreme data economy – Networks Asia Under Big Data

More NEWS ? Click Here

[ FEATURED COURSE]

Python for Beginners with Examples


A practical Python course for beginners with examples and exercises…. more

[ FEATURED READ]

The Future of the Professions: How Technology Will Transform the Work of Human Experts


This book predicts the decline of today’s professions and describes the people and systems that will replace them. In an Internet society, according to Richard Susskind and Daniel Susskind, we will neither need nor want … more

[ TIPS & TRICKS OF THE WEEK]

Data aids, not replaces, judgment
Data is a tool and a means to help build consensus and facilitate human decision-making, not replace it. Analysis converts data into information; information, via context, leads to insight. Insights lead to decisions, which ultimately lead to outcomes that bring value. So, data is just the start; context and intuition also play a role.

[ DATA SCIENCE Q&A]

Q:Explain selection bias (with regard to a dataset, not variable selection). Why is it important? How can data management procedures such as missing data handling make it worse?
A: * Selection of individuals, groups or data for analysis in such a way that proper randomization is not achieved
Types:
– Sampling bias: systematic error due to a non-random sample of a population causing some members to be less likely to be included than others
– Time interval: a trial may be terminated early at an extreme value (for ethical reasons), but the extreme value is likely to be reached by the variable with the largest variance, even if all the variables have similar means
– Data: “cherry picking”, when specific subsets of the data are chosen to support a conclusion (citing examples of plane crashes as evidence that airline flight is unsafe, while ignoring the far more common examples of flights that complete safely)
– Studies: performing experiments and reporting only the most favorable results
– Can lead to inaccurate or even erroneous conclusions
– Statistical methods generally cannot overcome it

Why can data handling make it worse?
– Example: individuals who know or suspect that they are HIV positive are less likely to participate in HIV surveys
– Missing-data handling (e.g., imputation) amplifies this effect because it is based on the respondents, who are mostly HIV negative
– Prevalence estimates will therefore be inaccurate (see the sketch below)

Source
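To make the HIV-survey example above concrete, here is a minimal Python sketch; the population size, true prevalence, and response rates are made-up assumptions rather than figures from the source. It shows how non-random participation biases a prevalence estimate and why mean imputation of the missing respondents does not correct it.

import numpy as np

rng = np.random.default_rng(0)

n = 100_000
true_prevalence = 0.05                        # assumed true rate in the population
hiv_positive = rng.random(n) < true_prevalence

# Assumption: HIV-positive individuals respond to the survey less often.
p_respond = np.where(hiv_positive, 0.30, 0.80)
responded = rng.random(n) < p_respond

naive_estimate = hiv_positive[responded].mean()   # biased: uses responders only

# Filling in non-responders with the responders' mean does not remove the bias;
# it simply reproduces the biased value for every missing case.
imputed = np.where(responded, hiv_positive, naive_estimate)

print(f"true prevalence:       {true_prevalence:.3f}")
print(f"responders only:       {naive_estimate:.3f}")
print(f"after mean imputation: {imputed.mean():.3f}")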

[ VIDEO OF THE WEEK]

Dave Ulrich (@dave_ulrich) talks about role / responsibility of HR in #FutureOfWork #JobsOfFuture #Podcast


Subscribe to  Youtube

[ QUOTE OF THE WEEK]

Without big data, you are blind and deaf and in the middle of a freeway. – Geoffrey Moore

[ PODCAST OF THE WEEK]

Discussing #InfoSec with @travturn, @hrbrmstr(@rapid7) @thebearconomist(@boozallen) @yaxa_io


Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

Retailers who leverage the full power of big data could increase their operating margins by as much as 60%.

Sourced from: Analytics.CLUB #WEB Newsletter

Hadoop demand falls as other big data tech rises


Hadoop makes all the big data noise. Too bad it’s not also getting the big data deployments.

Indeed, though Hadoop has often served as shorthand for big data, this increasingly seems like a mistake. According to a new Gartner report, “despite continuing enthusiasm for the big data phenomenon, demand for Hadoop specifically is not accelerating.”

According to the survey, most enterprises have “no plans at this time” to invest in Hadoop and a mere 26 percent have either deployed or are piloting Hadoop. They are, however, actively embracing other big data technologies.

‘Fairly anemic’ interest in Hadoop

For a variety of reasons, with a lack of Hadoop skills as the biggest challenge (57 percent), enterprises aren’t falling in love with Hadoop.

Indeed, as Gartner analyst Merv Adrian suggests in a new Gartner report (“Survey Analysis: Hadoop Adoption Drivers and Challenges“):

With such large incidence of organizations with no plans or already on their Hadoop journey, future demand for Hadoop looks fairly anemic over at least the next 24 months. Moreover, the lack of near-term plans for Hadoop adoption suggest that, despite continuing enthusiasm for the big data phenomenon, demand for Hadoop specifically is not accelerating.

How anemic? Think 54 percent with zero plans to use Hadoop, plus another 20 percent that at best will get to experimenting with Hadoop in the next year:

Chart: Hadoop adoption plans (Source: Gartner)


This doesn’t bode well for Hadoop’s biggest vendors. After all, as Gartner analyst Nick Heudecker posits, “Hadoop [is] overkill for the problems the business[es surveyed] face, implying the opportunity costs of implementing Hadoop [are] too high relative to the expected benefit.”

Selling the future of Hadoop

By some measures, this shortfall of interest hasn’t yet caught up with the top two Hadoop vendors, Cloudera and Hortonworks.

Cloudera, after all, will reportedly clear nearly $200 million in revenue in 2015, with a valuation of $5 billion, according to Manhattan Venture Partners. While the company is nowhere near profitability, it’s not struggling to grow and will roughly double revenue this year.

Hortonworks, for its part, just nailed a strong quarter. Annual billings grew 99 percent to $28.1 million, even as revenue exploded 167 percent to $22.8 million. To reach these numbers, Hortonworks added 105 new customers, up from 99 new customers in the previous quarter.

Still, there are signs that the hype is fading.

Hortonworks, despite beating analyst expectations handily last quarter, continues to fall short of the $1 billion-plus valuation it held at its last round of private funding. As I’ve argued, the company will struggle to justify a billion-dollar price tag due to its pure-play open source business model.

But according to the Gartner data, it may also struggle due to “fairly anemic” demand for Hadoop.

There’s a big mitigating factor. Hadoop vendors will almost surely languish — unless they’re willing to embrace adjacent big data technologies that complement Hadoop. As it happens, both leaders already have.

For example, even as Apache Spark has eaten into MapReduce interest, both companies have climbed aboard the Spark train.

But more is needed. Because big data is much more than Hadoop and its ecosystem.

For example, though the media has equated big data with Hadoop for years, data scientists have not. As Silicon Angle uncovered back in 2012 from its analysis of Twitter conversations, when data professionals talk about big data, they actually talk about NoSQL technologies like MongoDB as much as or more than Hadoop:

Chart: MongoDB vs. Hadoop (Source: Silicon Angle)

Today those same data professionals are likely to be using MongoDB and Cassandra, both among the world’s top 10 most popular databases, rather than HBase, which is the database of choice for Cloudera and Hortonworks but ranks a distant #15 in overall popularity, according to DB-Engines.

Buying an ecosystem

Let’s look at Gartner’s data again, this time comparing big data adoption and Hadoop adoption:

Chart: Hadoop adoption vs. big data adoption (Source: Gartner)
A significant percentage of the delta between these two almost certainly derives from other, highly popular big data technologies such as MongoDB, Cassandra, Apache Storm, etc. They don’t fit into the current Hadoop ecosystem, but Cloudera and Hortonworks need to find ways to embrace them, or risk running out of Hadoop runway.

Nor is that the only risk.

As Aerospike executive and former Wall Street analyst Peter Goldmacher told me, a major problem for Hortonworks and Cloudera is that both are spending too much money to court customers. (As strong as Hortonworks’ billings growth was, it doubled its loss on the way to that growth as it spent heavily to grow sales.)

While these companies currently have a lead in terms of distribution, Goldmacher warns that Oracle or another incumbent could acquire one of them and thereby largely lobotomize the other, because of the incumbent’s superior claim on CIO wallets and broad-based suite offerings.

Neither Cloudera nor Hortonworks can offer that suite.

But what they can do, Goldmacher goes on, is expand their own big data footprint. For example, if Cloudera were to use its $4-to-5 billion valuation to acquire a NoSQL vendor, “All of a sudden other NoSQL vendors and Hortonworks are screwed because Cloudera would have the makings of a complete architecture.”

In other words, to survive long term, Hadoop’s dominant vendors need to move beyond Hadoop — and fast.

Originally posted by Matt Asay at: http://www.infoworld.com/article/2922720/big-data/hadoop-demand-falls-as-other-big-data-tech-rises.html

Source: Hadoop demand falls as other big data tech rises

3 Big Data Stocks Worth Considering

Big data is a trend that I’ve followed for some time now, and even though it’s still in its early stages, I expect it to continue to be a game changer as we move further into the future.

As our Internet footprint has grown, all the data we create — from credit cards to passwords and pictures uploaded on Instagram — has to be managed somehow.

This data is too vast to be entered into traditional relational databases, so more powerful tools are needed for companies to utilize the information to analyze customers’ behavior and predict what they may do in the future.

Big data makes it all possible, and as a result is one of the dominant themes for technology growth investing. We’ve invested in several of these types of companies in my GameChangers service over the years, one of which we’ll talk more about in just a moment.

First, let’s start with two of the biggest and best big data names out there. They’re among the best pure plays, and while I’m not sure the time is quite right to invest in either right now, they are both garnering some buzz in the tech world.

Big Data Stocks: Splunk (SPLK)

The first is Splunk (SPLK). Splunk’s flagship product is Splunk Enterprise, which at its core is a proprietary machine data engine that enables dynamic schema creation on the fly. Users can then run queries on data without having to understand the structure of the information prior to collection and indexing.

Faster, streamlined processes mean more efficient (and more profitable) businesses.

While Splunk is very small in terms of revenues, with January 2015 fiscal year sales of just $451 million, it is growing rapidly, and I’m keeping an eye on the name as it may present a strong opportunity down the road.

However, I do not want to overpay for it. Splunk brings effective technology to the table that is gaining market acceptance, and has strong security software partners with its recent entry into security analytics. At the right price, the stock could also be a takeover candidate for a larger IT company looking to enhance its Big Data presence.

Big Data Stocks: Tableau Software (DATA)

Another name on my radar is Tableau Software (DATA), which performs functions similar to Splunk’s. Its primary product, VizQL, translates drag-and-drop actions into data queries. In this way, the company puts data directly in the hands of decision makers, without first having to go through technical specialists.

In fact, the company believes all employees, no matter their rank in the company, can use its product, leading to the democratization of data.

DATA is also growing rapidly, even faster than Splunk. Revenues were up 78% in 2014 and 75% in the first quarter of 2015, including license revenue growth of more than 70%. That rate is expected to slow somewhat, with revenue growth for all of 2015 estimated at a still-strong 50%.

Tableau stock is also very expensive, trading at 12X expected 2015 revenues of $618 million and close to 300X projected EPS of 40 cents for the year. DATA is a little risky to buy at current levels, but it is a name to keep an eye on in any pullback.

Big Data Stocks: Red Hat (RHT)

The company we made money on earlier this year in my GameChangers service is Red Hat (RHT). We booked a 15% profit in just a few months after it popped 11% on fourth-quarter earnings.

Red Hat is the world’s leading provider of open-source solutions, providing software to 90% of Fortune 500 companies. Some of RHT’s customers include well-known names like Sprint (S), Adobe Systems (ADBE) and Cigna Corporation (CI).

Management’s goal is to become the undisputed leader of enterprise cloud computing, and it sees its popular Linux operating system as a way to the top. If RHT is successful — as I expect it will be — Red Hat should have a lengthy period of expanded growth as corporations increasingly move into the cloud.

Red Hat’s operating results had always clearly demonstrated that its solutions are gaining greater acceptance in IT departments, as revenues had more than doubled in the five years between 2009 and 2014, from $748 million to $1.53 billion. I had expected the strong sales growth to continue throughout 2015, and it did. As I mentioned, impressive fiscal fourth-quarter results sent the shares 11% higher.

I recommended my subscribers sell their stake in the company at the end of March because I believed any further near-term upside was limited. Since then, shares have traded mostly between $75 and $80. It is now at the very top of that range and may be on the verge of breaking above it after the company reported fiscal first-quarter results last night.

Although orders were a little slow, RHT beat estimates on both the top and bottom lines in the first quarter. Earnings of 44 cents per share were up 29% quarter-over-quarter, besting estimates on the Street for earnings of 41 cents. Revenue climbed 14% to $481 million, while analysts had been expecting $472.6 million.

At this point, RHT is now back in uncharted territory, climbing to a new 52-week high earlier today. This is a company with plenty of growth opportunities ahead, and while growth may slow a bit in the near term following the stock’s impressive climb so far this year, RHT stands to gain as corporations continue to adopt additional cloud technologies.

To read the original article on InvestorPlace, click here.

Originally Posted at: 3 Big Data Stocks Worth Considering by analyticsweekpick

Nov 08, 18: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Data security (Source)

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ AnalyticsWeek BYTES]

>> Oracle zeroes in on Hadoop data with analytics tool by analyticsweekpick

>> 7 Lessons From Apple To Small Business by v1shal

>> The Business of Data by analyticsweekpick

Wanna write? Click Here

[ NEWS BYTES]

>> Can artificial intelligence help stop religious violence? – BBC News Under Artificial Intelligence

>> How to Leverage True Edge Flexibility and Overcome Operational Challenges – Data Center Frontier (blog) Under Data Center

>> Big Data Analytics in Healthcare Market Global 2018: Sales, Market Size, Market Benefits, Upcoming Developments … – Alter Times Under Prescriptive Analytics

More NEWS ? Click Here

[ FEATURED COURSE]

Statistical Thinking and Data Analysis


This course is an introduction to statistical data analysis. Topics are chosen from applied probability, sampling, estimation, hypothesis testing, linear regression, analysis of variance, categorical data analysis, and n… more

[ FEATURED READ]

Python for Data Analysis: Data Wrangling with Pandas, NumPy, and IPython


Python for Data Analysis is concerned with the nuts and bolts of manipulating, processing, cleaning, and crunching data in Python. It is also a practical, modern introduction to scientific computing in Python, tailored f… more

[ TIPS & TRICKS OF THE WEEK]

Data Have Meaning
We live in a Big Data world in which everything is quantified. While the emphasis of Big Data has been focused on distinguishing the three characteristics of data (the infamous three Vs), we need to be cognizant of the fact that data have meaning. That is, the numbers in your data represent something of interest, an outcome that is important to your business. The meaning of those numbers is about the veracity of your data.

[ DATA SCIENCE Q&A]

Q:Is it better to design robust or accurate algorithms?
A: A. The ultimate goal is to design systems with good generalization capacity, that is, systems that correctly identify patterns in data instances not seen before
B. The generalization performance of a learning system strongly depends on the complexity of the model assumed
C. If the model is too simple, the system can only capture the actual data regularities in a rough manner. In this case, the system has poor generalization properties and is said to suffer from underfitting
D. By contrast, when the model is too complex, the system can identify accidental patterns in the training data that need not be present in the test set. These spurious patterns can be the result of random fluctuations or of measurement errors during the data collection process. In this case, the generalization capacity of the learning system is also poor. The learning system is said to be affected by overfitting
E. Spurious patterns, which are only present by accident in the data, tend to have complex forms. This is the idea behind the principle of Occam’s razor for avoiding overfitting: simpler models are preferred if more complex models do not significantly improve the quality of the description for the observations
Quick response: Occam’s Razor. It depends on the learning task. Choose the right balance
F. Ensemble learning can help balance bias and variance (several weak learners together = a strong learner); see the sketch below
Source
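To illustrate points C, D and F above, here is a small scikit-learn sketch; the synthetic dataset and hyperparameters are arbitrary assumptions chosen only for illustration. A single fully grown decision tree tends to fit the training data almost perfectly but generalize worse, while bagging many such trees reduces variance.

from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Arbitrary synthetic data with some label noise (flip_y) to make overfitting visible.
X, y = make_classification(n_samples=2000, n_features=20, flip_y=0.1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

models = {
    "single deep tree (complex, high variance)": DecisionTreeClassifier(random_state=0),
    "bagged trees (ensemble of weak learners)": BaggingClassifier(
        DecisionTreeClassifier(random_state=0), n_estimators=100, random_state=0
    ),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    # A large train/test gap signals overfitting; the ensemble usually narrows it.
    print(f"{name}: train={model.score(X_tr, y_tr):.2f}, test={model.score(X_te, y_te):.2f}")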

[ VIDEO OF THE WEEK]

Solving #FutureOfOrgs with #Detonate mindset (by @steven_goldbach & @geofftuff) #FutureOfData #Podcast


Subscribe to  Youtube

[ QUOTE OF THE WEEK]

War is 90% information. – Napoleon Bonaparte

[ PODCAST OF THE WEEK]

@AlexWG on Unwrapping Intelligence in #ArtificialIntelligence #FutureOfData #Podcast


Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

In the developed economies of Europe, government administrators could save more than €100 billion ($149 billion) in operational efficiency improvements alone by using big data, not including using big data to reduce fraud and errors and boost the collection of tax revenues.

Sourced from: Analytics.CLUB #WEB Newsletter

Evaluating Hospital Quality using Patient Experience, Health Outcomes and Process of Care Measures

Patient experience (PX) has become an important topic for US hospitals. The Centers for Medicare & Medicaid Services (CMS) will be using patient feedback about their care as part of their reimbursement plan for acute care hospitals (see Hospital Value-Based Purchasing Program). Not surprisingly, hospitals are focusing on improving the patient experience to ensure they receive the maximum of their incentive payments. Additionally, US hospitals track other types of metrics (e.g., process of care and mortality rates) as measures of quality of care.

Given that hospitals have a variety of metrics at their disposal, it would be interesting to understand how these different metrics are related to each other. Do hospitals that receive higher PX ratings (e.g., more satisfied patients) also have better scores on other metrics (lower mortality rates, better process of care measures) than hospitals with lower PX ratings? In this week’s post, I will use the following hospital quality metrics:

  1. Patient Experience
  2. Health Outcomes (mortality rates, re-admission rates)
  3. Process of Care

I will briefly cover each of these metrics below.

Table 1. Descriptive Statistics for PX, Health Outcomes and Process of Care Metrics for US Hospitals (acute care hospitals only)

1. Patient Experience

Patient experience (PX) reflects patients’ perceptions of their recent inpatient experience. PX is collected by a survey known as HCAHPS (Hospital Consumer Assessment of Healthcare Providers and Systems). HCAHPS (pronounced “H-caps“) is a national, standardized survey of hospital patients, developed by a partnership of public and private organizations to publicly report the patient’s perspective of hospital care.

The survey asks a random sample of recently discharged patients about important aspects of their hospital experience. The data set includes patient survey results for over 3800 US hospitals on ten measures of patients’ perspectives of care (e.g., nurse communication, pain well controlled). I combined two general questions (Overall hospital rating and recommend) to create a patient advocacy metric. Thus, a total of 9 PX metrics were used. Across all 9 metrics, hospital scores can range from 0 (bad) to 100 (good). You can see the PX measures for different US hospitals here.

2. Process of Care

Process of care measures show, in percentage form or as a rate, how often a health care provider gives recommended care; that is, the treatment known to give the best results for most patients with a particular condition. The process of care metric is based on medical information from patient records and reflects the rate or percentage across 12 procedures related to surgical care. Some of these procedures relate to antibiotics being given/stopped at the right times and to treatments that prevent blood clots. These percentages were translated into scores that range from 0 (worst) to 100 (best). Higher scores indicate that the hospital has a higher rate of following best practices in surgical care. Details of how these metrics were calculated appear in the original post.

I calculated an overall Process of Care metric by averaging the 12 process of care scores. This composite was used because it has good measurement properties (internal consistency was .75) and thus reflects a good overall measure of process of care. You can see the process of care measures for different US hospitals here.
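For readers who want to reproduce this kind of composite, the sketch below shows the calculation in pandas. The file name and the "poc_" column prefix are hypothetical placeholders, and Cronbach’s alpha is used as the internal-consistency statistic; the post reports the value (.75) without naming the exact statistic, so that choice is an assumption.

import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

care = pd.read_csv("process_of_care.csv")                        # hypothetical file: one row per hospital
item_cols = [c for c in care.columns if c.startswith("poc_")]    # the 12 measure columns (assumed naming)

care["process_of_care_overall"] = care[item_cols].mean(axis=1)   # composite = mean of the 12 scores
print("internal consistency (alpha):", round(cronbach_alpha(care[item_cols]), 2))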

3. Health Outcomes

Measures that tell what happened after patients with certain conditions received hospital care are called “Outcome Measures.” We use two general types of outcome measures: 1) 30-day Mortality Rate and 2) 30-day Readmission Rate. The 30-day risk-standardized mortality and 30-day risk-standardized readmission measures for heart attack, heart failure, and pneumonia are produced from Medicare claims and enrollment data using sophisticated statistical modeling techniques that adjust for patient-level risk factors and account for the clustering of patients within hospitals.

The death rates focus on whether patients died within 30 days of their hospitalization. The readmission rates focus on whether patients were hospitalized again within 30 days.

Three mortality rate and readmission rate measures were included in the healthcare dataset for each hospital. These were:

  1. 30-Day Mortality Rate / Readmission Rate from Heart Attack
  2. 30-Day Mortality Rate / Readmission Rate from Heart Failure
  3. 30-Day Mortality Rate / Readmission Rate from Pneumonia

Mortality and readmission rates are measured per 1,000 patients. So, if a hospital has a heart attack mortality rate of 15, that means that for every 1,000 heart attack patients, 15 of them die (or, for the readmission measure, are readmitted). You can see the health outcome measures for different US hospitals here.

Table 2. Correlations of PX metrics with Health Outcome and Process of Care Metrics for US Hospitals (acute care hospitals only).

Results

The three types of metrics (PX, Health Outcomes, Process of Care) were housed in separate databases on the data.medicare.gov site. As explained elsewhere in my post on Big Data, I linked these three data sets together by hospital name. Basically, I federated the necessary metrics from their respective databases and combined them into a single data set.

Descriptive statistics for each variable are located in Table 1. The correlations of each of the PX measures with each of the Health Outcome and Process of Care measures are located in Table 2. As you can see, the correlations of PX with the other hospital metrics are very low, suggesting that the PX measures are assessing something quite different from the Health Outcome and Process of Care measures.
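A sketch of that linking and correlation step in pandas follows; the file names and column names are hypothetical placeholders rather than the actual data.medicare.gov schemas.

import pandas as pd

# Hypothetical extracts, one row per hospital.
px = pd.read_csv("hcahps_patient_experience.csv")
outcomes = pd.read_csv("health_outcomes.csv")
care = pd.read_csv("process_of_care.csv")

# Link the three data sets by hospital name, as described above.
merged = (px.merge(outcomes, on="hospital_name", how="inner")
            .merge(care, on="hospital_name", how="inner"))

px_cols = ["nurse_communication", "pain_control", "patient_advocacy"]            # example PX metrics
other_cols = ["mortality_heart_attack", "readmission_pneumonia", "process_of_care_overall"]

# Pearson correlations of each PX metric with each outcome/process metric.
print(merged[px_cols + other_cols].corr().loc[px_cols, other_cols].round(2))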

Patient Loyalty and Health Outcomes and Process of Care

Patient loyalty/advocacy (as measured by the Patient Advocacy Index) is correlated in the expected direction with the other measures (except for Death Rate from Heart Failure). Hospitals that have higher patient loyalty ratings have lower death rates and readmission rates and higher process of care scores. The degree of relationship, however, is quite small (the percent of variance explained by patient advocacy is only 3%).

Patient Experience and Health Outcomes and Process of Care

Patient experience (PX) shows a complex relationship with the health outcome and process of care measures. It appears that hospitals that have higher PX ratings also report higher death rates. However, as expected, hospitals that have higher PX ratings report lower readmission rates. Although statistically significant, all of the correlations of PX metrics with the other hospital metrics are low.

The PX dimension that had the highest correlation with readmission rates and process of care measures was “Given Information about my Recovery upon discharge“. Hospitals that received high scores on this dimension also experienced lower readmission rates and higher process of care scores.

Summary

Hospitals track different types of quality metrics that are used to evaluate their performance. Three such metrics for US hospitals were examined to understand how well they relate to each other (there are many other metrics on which hospitals can be compared). Results show that patient experience and patient loyalty are only weakly related to other hospital metrics, suggesting that improving the patient experience will have little impact on the other hospital measures (health outcomes, process of care).

 

Source: Evaluating Hospital Quality using Patient Experience, Health Outcomes and Process of Care Measures by bobehayes

The Modern Day Software Engineer: Less Coding And More Creating

Last week, I asked the CEO of a startup company in Toronto, “How do you define a software engineer?”

She replied, “Someone who makes sh*t work.”

This used to be all you needed. If your online web app starts to crash, hire a software engineer to fix the problem.

If your app needs a new feature, hire a software engineer to build it (AKA weave together lines of code to make sh*t work).

We need to stop referring to an engineer as an ‘engineer’. CEOs of startups need to stop saying ‘we need more engineers’.

The modern day ‘engineer’ cannot simply be an engineer. They need to be a renaissance person; a person who is well versed in multiple aspects of life.

Your job as a software engineer cannot be to simply ‘write code’. That’s like saying a Canadian lawyer’s job is to speak English.

English and code are means of doing the real job: Produce value that society wants.

So, pumping out code to produce a new feature simply because it’s on the ‘new features list’ is mindless. You can’t treat code as an end in itself.

The modern day engineer (MDE) needs to understand the modern day world. The MDE cannot simply sit in a room alone and write code.

The MDE needs to understand the social and business consequences of creating and releasing a product.

The MDE cannot leave it up to the CEOs and marketers and business buffs to come up with the ‘why’ for a new product.

Everyone should be involved in the ‘why’, as long as they are in the ‘now’.

New frameworks that emphasize less code and more productivity are being released almost every day.

We are slowly moving towards a future where writing code will be so easy that it would be unimpressive to be someone who only writes code.

In the future Google Translate will probably add JavaScript and Python (and other programming languages) to their list of languages. Now all you have to do is type in English and get a JavaScript translation. In fact, who needs a programming language like JavaScript or Python when you can now use English to directly tell a computer what to do?

Consequently, code becomes a language that can be spoken by all. So, to write good code, you need to be more than an ‘engineer’. You need to be a renaissance person and a person who understands the wishes, wants, emotions and needs of the modern day world.

Today (October 22nd, 2015), I was at a TD Canada Trust networking event designed for ‘tech professionals’ in Waterloo, ON, Canada. The purpose of this event was to demo new ‘tech’ (the word has so many meanings nowadays) products to young students and professionals. The banking industry is in the process of a full makeover, if you didn’t know. One of the TD guys, let’s call him Julio, gave me a little summary of what TD was (and is) trying to do with its recruitment process.

Let me give you the gist of what he said:

“We have business professionals (business analysts, etc) whose job is to understand the 5 W’s of the product. Also, we have engineers/developers/programmers who just write code. What we are now looking for is someone who can engage with others as well as do the technical stuff.”

His words were wise, but I was not sure if he fully understood the implications of what he was talking about. This is the direction we have been heading for quite some time now, but it’s about time we kick things up a notch.

Expect more of this to come.
Expect hybrid roles.
Expect it to become easier and easier to write code.
Expect to be valued for your social awareness paired with your ability to make sh*t work.

Perhaps software tech is at the beginning of a new Renaissance era.

*View the original post here*

Twitter: @nikhil_says

Email: nikhil38@gmail.com

Source by nbhaskar

Nov 01, 18: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Productivity (Source)

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ AnalyticsWeek BYTES]

>> Adoption of Analytics in Business Increasing but ROI Remains Elusive [INFOGRAPHIC] by bobehayes

>> Perfecting Sensor Data Analytics with Cyberforaging by jelaniharper

>> CMS Predictive Readmission Models ‘Not Very Good’ by analyticsweekpick

Wanna write? Click Here

[ NEWS BYTES]

>> Second year of ARI Mentor Program builds on success of inaugural challenge – Australasian Leisure Management (press release) Under Sales Analytics

>> Sonja Quale | Confidio – Maryland Daily Record Under Sales Analytics

>> CWH investigates augmented reality, Internet of Things – Australian Journal of Pharmacy (blog) Under Internet Of Things

More NEWS ? Click Here

[ FEATURED COURSE]

Lean Analytics Workshop – Alistair Croll and Ben Yoskovitz


Use data to build a better startup faster in partnership with Geckoboard… more

[ FEATURED READ]

On Intelligence


Jeff Hawkins, the man who created the PalmPilot, Treo smart phone, and other handheld devices, has reshaped our relationship to computers. Now he stands ready to revolutionize both neuroscience and computing in one strok… more

[ TIPS & TRICKS OF THE WEEK]

Fix the Culture, spread awareness to get awareness
Adoption of analytics tools and capabilities has not yet caught up to industry standards. Talent has always been the bottleneck to achieving comparable enterprise adoption. One of the primary reasons is a lack of understanding and knowledge among stakeholders. To facilitate wider adoption, data analytics leaders, users, and community members need to step up and create awareness within the organization. An aware organization goes a long way toward quick buy-ins and better funding, which ultimately leads to faster adoption. So be the voice that you want to hear from leadership.

[ DATA SCIENCE Q&A]

Q:Explain likely differences between administrative datasets and datasets gathered from experimental studies. What are likely problems encountered with administrative data? How do experimental methods help alleviate these problems? What problem do they bring?
A: Advantages:
– Cost
– Large coverage of population
– Captures individuals who may not respond to surveys
– Regularly updated, allowing consistent time series to be built up

Disadvantages:
– Restricted to data collected for administrative purposes (limited to administrative definitions; for instance, income may be recorded for a married couple rather than for individuals, which would be more useful)
– Lack of researcher control over content
– Missing or erroneous entries
– Quality issues (addresses may not be updated or a postal code is provided only)
– Data privacy issues
– Underdeveloped theories and methods (sampling methods…)

Experimental methods help because randomization gives the researcher control over treatment assignment and measurement, removing the selection and confounding problems above. The problems they bring: experiments are costly, typically use small (and possibly unrepresentative) samples, and may have limited external validity.

Source

[ VIDEO OF THE WEEK]

#BigData #BigOpportunity in Big #HR by @MarcRind #JobsOfFuture #Podcast


Subscribe to  Youtube

[ QUOTE OF THE WEEK]

Numbers have an important story to tell. They rely on you to give them a voice. – Stephen Few

[ PODCAST OF THE WEEK]

Scott Harrison (@SRHarrisonJD) on leading the learning organization #JobsOfFuture #Podcast


Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

Data volumes are exploding: more data has been created in the past two years than in the entire previous history of the human race.

Sourced from: Analytics.CLUB #WEB Newsletter

7 Things to Look before Picking Your Data Discovery Vendor


Data discovery tools, also called data visualization tools and sometimes referred to as data analytics tools, are the talk of the town and reasonably hot today. With all the hype about big data, companies are busy planning their big-data strategies and are on the lookout for great tools to recruit into them. One thing to note here is that we don’t change our data discovery tool vendors every day. Once a tool gets into our system, it eventually becomes part of our big-data DNA. So, we should put much thought into what goes into picking a data discovery/visualization/analysis tool.

So, what would you do, and what would you consider important, when picking your data discovery tool vendor? I interviewed a couple of data scientists and data managers at a bunch of companies and prioritized the findings into the top 7 things to consider before you go out picking your big-data discovery tool vendor.

Here are my 7 thoughts, in no particular order:

1. Not a major jump from what I already have: Yes, learning a new system takes time, effort, resources and cycles. So, the faster the ramp-up, or the shorter the learning curve, the better. Sure, many tools will be eons apart from what you are used to, but that should not deter you from evaluating them as well. Just score tools with a minimal learning curve a bit higher. One thing to check here is that you should be able to do routine things with the new tool almost the same way you were doing them without it.

2. Helps me do more with my data: There will be several moments when you realize the tool can do a lot more than what you are used to or what you are capable of. This is another check to include in your equation: the breadth of features and capabilities within the discovery tool. The more it lets you do with your data, the better. You should be able to investigate your data more closely and along various dimensions, which ultimately helps you understand the data better.

3. Integrates well with my big data: Yes, first things first, you need a tool that at least has the capability to talk to your data. It should mingle well with your data layouts and structures without requiring too many time-consuming steps. A good tool will make integrating your data almost seamless. If you have to jump through hoops or cut corners to make data integration happen, maybe you are looking at the wrong tool. So, get your data integration team to work and make sure data integration is a non-issue with the tool you are evaluating.

4. Friendly with outside data I might include as well: Many times, it is not only about your data. Sometimes you need to access and evaluate external data and find its relationship with your data. Those use cases must be checked as well: how easy is it to include external structured and unstructured data? The bigger the vendor’s product integration roadmap, the easier it will be for the tool to connect with external resources. Your preferred tool should integrate seamlessly with the data sets common in your industry; social data, industry data and other third-party application data are some examples. So, ask your vendor how their tool mingles with outside data sets.

5. Scalability of the platform: Sure, the tool you are evaluating could do wonders with data and has a sweet feature set, but will it scale well as you grow? This is an important consideration, just like any other good corporate tool consideration. As your business grows, so will its data and associated dependencies, but will your discovery tool grow with it? This must be part of your evaluation score for any tool you are planning to recruit for your big-data discovery needs. So, get on a call with the vendor’s technical team and grill them to understand how their tool will grow with growing data. You don’t want to partner with a tool that will break in the future as your business grows.

6. Vendor’s vision is in line with our vision: The above 5 measures are pretty much standard and define the basic functionality a good tool should entail. It’s also no big surprise that most tools will have their own interpretation of those 5 points. One key thing to look at on the strategic front is the vendor’s vision for the company and the tool. A tool can do you good today: it has a boatload of features and it is friendly with your data and with outside data. But will it grow with a strategy consistent with yours? No matter how odd it sounds, this is one of the realities you should consider. A vendor handling only health care will have some impact on companies using the tool in the insurance sector. A tool that handles only the clever visualization piece might affect companies expecting some automation as part of the core tool’s evolution. So, it is important to understand the product vision of the tool company; that will help you judge whether it will still deliver business value tomorrow, the day after, or in the foreseeable future.

7. Awesome import/export tools to keep my data and analysis free: Another important thing to note is stickiness with the product. A good product design should not keep customers sticky by holding their data hostage. A good tool should bank on its features, usability and data-driven design. So, data and the knowledge derived from it should be easily importable and exportable in the most common standards (CSV, XML, etc.). This will keep the tool open to integration with third-party services that might emerge as the market evolves. This should be a consideration because it will play an instrumental role in moving your data around as you start dealing with new formats and new reporting tools that leverage your data discovery findings.

I am certain that by the end of these 7 points you will have thought of several more things one could keep in mind before picking a good data discovery tool. Feel free to email me your findings and I will keep adding them to the list.

Source: 7 Things to Look before Picking Your Data Discovery Vendor by v1shal

Oct 25, 18: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Ethics (Source)

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ AnalyticsWeek BYTES]

>> Betting the Enterprise on Data with Cloud-Based Disaster Recovery and Backups by jelaniharper

>> Free Comparison of 5 Leading Product Analytics Platforms by analyticsweek

>> What does the 5-point/star mobile app rating tell us about user loyalty? by bobehayes

Wanna write? Click Here

[ NEWS BYTES]

>> Global Digital Media Market report forecasts revenue growth at the global, regional, and country levels and provides … – County Telegram Under Sales Analytics

>> House Passes Slew Of Homeland, Cyber Security Bills – Defense Daily Network Under cyber security

>> Supply Chain Leaders Say Top Priority is Responding to Customers Faster – Material Handling & Logistics Under Prescriptive Analytics

More NEWS ? Click Here

[ FEATURED COURSE]

Tackle Real Data Challenges


Learn scalable data management, evaluate big data technologies, and design effective visualizations…. more

[ FEATURED READ]

The Misbehavior of Markets: A Fractal View of Financial Turbulence


Mathematical superstar and inventor of fractal geometry, Benoit Mandelbrot, has spent the past forty years studying the underlying mathematics of space and natural patterns. What many of his followers don’t realize is th… more

[ TIPS & TRICKS OF THE WEEK]

Data aids, not replaces, judgment
Data is a tool and a means to help build consensus and facilitate human decision-making, not replace it. Analysis converts data into information; information, via context, leads to insight. Insights lead to decisions, which ultimately lead to outcomes that bring value. So, data is just the start; context and intuition also play a role.

[ DATA SCIENCE Q&A]

Q:You are compiling a report for user content uploaded every month and notice a spike in uploads in October. In particular, a spike in picture uploads. What might you think is the cause of this, and how would you test it?
A: * Halloween pictures?
* Look at uploads in countries that don’t observe Halloween as a sort of counter-factual analysis
* Compare mean uploads in October with mean uploads in September: hypothesis testing (see the sketch below)

Source
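A minimal sketch of the suggested comparison, assuming daily picture-upload counts for September and October are available as arrays (the numbers below are placeholders, not real data). Welch's t-test is used here; a non-parametric alternative such as Mann-Whitney would also work.

import numpy as np
from scipy import stats

# Hypothetical daily upload counts for a sample of days in each month.
sept_uploads = np.array([1180, 1250, 1302, 1275, 1220, 1310, 1290])
oct_uploads = np.array([1480, 1523, 1601, 1575, 1490, 1610, 1660])

# Welch's t-test: does mean daily uploads differ between the two months?
t_stat, p_value = stats.ttest_ind(oct_uploads, sept_uploads, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Counterfactual check: repeat the comparison using only countries that do not
# observe Halloween; a similar spike there would argue against the Halloween explanation.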

[ VIDEO OF THE WEEK]

@EdwardBoudrot / @Optum on #DesignThinking & #DataDriven Products #FutureOfData #Podcast


Subscribe to  Youtube

[ QUOTE OF THE WEEK]

Hiding within those mounds of data is knowledge that could change the life of a patient, or change the world. – Atul Butte, Stanford

[ PODCAST OF THE WEEK]

@chrisbishop on futurist's lens on #JobsOfFuture #FutureofWork #JobsOfFuture #Podcast


Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

Every second we create new data. For example, we perform 40,000 search queries every second (on Google alone), which works out to about 3.5 billion searches per day and 1.2 trillion searches per year. In Aug 2015, over 1 billion people used Facebook in a single day.

Sourced from: Analytics.CLUB #WEB Newsletter