Hadoop demand falls as other big data tech rises


Hadoop makes all the big data noise. Too bad it’s not also getting the big data deployments.

Indeed, though Hadoop has often served as shorthand for big data, this increasingly seems like a mistake. According to a new Gartner report, “despite continuing enthusiasm for the big data phenomenon, demand for Hadoop specifically is not accelerating.”

According to the survey, most enterprises have “no plans at this time” to invest in Hadoop and a mere 26 percent have either deployed or are piloting Hadoop. They are, however, actively embracing other big data technologies.

‘Fairly anemic’ interest in Hadoop

For a variety of reasons, with a lack of Hadoop skills as the biggest challenge (57 percent), enterprises aren’t falling in love with Hadoop.

Indeed, as Gartner analyst Merv Adrian suggests in a new Gartner report (“Survey Analysis: Hadoop Adoption Drivers and Challenges“):

With such large incidence of organizations with no plans or already on their Hadoop journey, future demand for Hadoop looks fairly anemic over at least the next 24 months. Moreover, the lack of near-term plans for Hadoop adoption suggest that, despite continuing enthusiasm for the big data phenomenon, demand for Hadoop specifically is not accelerating.

How anemic? Think 54 percent with zero plans to use Hadoop, plus another 20 percent that at best will get to experimenting with Hadoop in the next year:

[Chart: Hadoop adoption plans. Source: Gartner]


This doesn’t bode well for Hadoop’s biggest vendors. After all, as Gartner analyst Nick Heudecker posits, “Hadoop [is] overkill for the problems the business[es surveyed] face, implying the opportunity costs of implementing Hadoop [are] too high relative to the expected benefit.”

Selling the future of Hadoop

By some measures, this shortfall of interest hasn’t yet caught up with the top two Hadoop vendors, Cloudera and Hortonworks.

Cloudera, after all, will reportedly clear nearly $200 million in revenue in 2015, with a valuation of $5 billion, according to Manhattan Venture Partners. While the company is nowhere near profitability, it’s not struggling to grow and will roughly double revenue this year.

Hortonworks, for its part, just nailed a strong quarter. Annual billings grew 99 percent to $28.1 million, even as revenue exploded 167 percent to $22.8 million. To reach these numbers, Hortonworks added 105 new customers, up from 99 new customers in the previous quarter.

Still, there are signs that the hype is fading.

Hortonworks, despite beating analyst expectations handily last quarter, continues to fall short of the $1 billion-plus valuation it held at its last round of private funding. As I’ve argued, the company will struggle to justify a billion-dollar price tag due to its pure-play open source business model.

But according to the Gartner data, it may also struggle due to “fairly anemic” demand for Hadoop.

There’s a big mitigating factor. Hadoop vendors will almost surely languish — unless they’re willing to embrace adjacent big data technologies that complement Hadoop. As it happens, both leaders already have.

For example, even as Apache Spark has eaten into MapReduce interest, both companies have climbed aboard the Spark train.

But more is needed, because big data is much more than Hadoop and its ecosystem.

For example, though the media has equated big data with Hadoop for years, data scientists have not. As Silicon Angle uncovered back in 2012 from its analysis of Twitter conversations, when data professionals talked about big data, they talked about NoSQL technologies like MongoDB as much as or more than Hadoop:

[Chart: MongoDB vs. Hadoop mentions on Twitter. Source: Silicon Angle]

Today those same data professionals are likely to be using MongoDB and Cassandra, both among the world’s top 10 most popular databases, rather than HBase, which is the database of choice for Cloudera and Hortonworks but ranks a distant #15 in overall popularity, according to DB-Engines.

Buying an ecosystem

Let’s look at Gartner’s data again, this time comparing big data adoption and Hadoop adoption:

[Chart: Big data adoption vs. Hadoop adoption. Source: Gartner]
A significant percentage of the delta between these two almost certainly derives from other, highly popular big data technologies such as MongoDB, Cassandra, Apache Storm, etc. They don’t fit into the current Hadoop ecosystem, but Cloudera and Hortonworks need to find ways to embrace them, or risk running out of Hadoop runway.

Nor is that the only risk.

As Aerospike executive and former Wall Street analyst Peter Goldmacher told me, a major problem for Hortonworks and Cloudera is that both are spending too much money to court customers. (As strong as Hortonworks’ billings growth was, it doubled its loss on the way to that growth as it spent heavily to grow sales.)

While these companies currently have a lead in terms of distribution, Goldmacher warns that Oracle or another incumbent could acquire one of them and thereby largely lobotomize the other, because of the incumbent’s superior claim on CIO wallets and broad-based suite offerings.

Neither Cloudera nor Hortonworks can offer that suite.

But what they can do, Goldmacher goes on, is expand their own big data footprint. For example, if Cloudera were to use its $4-to-5 billion valuation to acquire a NoSQL vendor, “All of a sudden other NoSQL vendors and Hortonworks are screwed because Cloudera would have the makings of a complete architecture.”

In other words, to survive long term, Hadoop’s dominant vendors need to move beyond Hadoop — and fast.

Originally posted by Matt Asay at: http://www.infoworld.com/article/2922720/big-data/hadoop-demand-falls-as-other-big-data-tech-rises.html

Source: Hadoop demand falls as other big data tech rises

3 Big Data Stocks Worth Considering

Big data is a trend that I’ve followed for some time now, and even though it’s still in its early stages, I expect it to continue to be a game changer as we move further into the future.

As our Internet footprint has grown, all the data we create — from credit cards to passwords and pictures uploaded on Instagram — has to be managed somehow.

This data is too vast to be entered into traditional relational databases, so more powerful tools are needed for companies to utilize the information to analyze customers’ behavior and predict what they may do in the future.

Big data makes it all possible, and as a result is one of the dominant themes for technology growth investing. We’ve invested in several of these types of companies in my GameChangers service over the years, one of which we’ll talk more about in just a moment.

First, let’s start with two of the biggest and best big data names out there. They’re among the best pure plays, and while I’m not sure the time is quite right to invest in either right now, they are both garnering some buzz in the tech world.

Big Data Stocks: Splunk (SPLK)

The first is Splunk (SPLK). Splunk’s flagship product is Splunk Enterprise, which at its core is a proprietary machine data engine that enables dynamic schema creation on the fly. Users can then run queries on data without having to understand the structure of the information prior to collection and indexing.

Faster, streamlined processes mean more efficient (and more profitable) businesses.

While Splunk is very small in terms of revenues, with January 2015 fiscal year sales of just $451 million, it is growing rapidly, and I’m keeping an eye on the name as it may present a strong opportunity down the road.

However, I do not want to overpay for it. Splunk brings effective technology to the table that is gaining market acceptance, and has strong security software partners with its recent entry into security analytics. At the right price, the stock could also be a takeover candidate for a larger IT company looking to enhance its Big Data presence.

Big Data Stocks: Tableau Software (DATA)

Another name on my radar is Tableau Software (DATA), which performs functions similar to Splunk’s. Its primary product, VizQL, translates drag-and-drop actions into data queries. In this way, the company puts data directly in the hands of decision makers, without first having to go through technical specialists.

In fact, the company believes all employees, no matter their rank, can use its product, leading to the democratization of data.

DATA is also growing rapidly, even faster than Splunk. Revenues were up 78% in 2014 and 75% in the first quarter of 2015, including license revenue growth of more than 70%. That rate is expected to slow somewhat, with revenue growth for all of 2015 estimated at a still-strong 50%.

Tableau stock is also very expensive, trading at 12X expected 2015 revenues of $618 million and close to 300X projected EPS of 40 cents for the year. DATA is a little risky to buy at current levels, but it is a name to keep an eye on in any pullback.
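As a quick back-of-the-envelope check on those multiples, here is a minimal sketch in Python using only the figures quoted above; no share count is given, so it yields an implied market cap and an implied per-share price from EPS, not a price target:

    # Implied valuation from the multiples quoted above (all figures from the article).
    est_2015_revenue = 618e6      # expected 2015 revenue, in dollars
    revenue_multiple = 12         # "trading at 12X expected 2015 revenues"
    projected_eps = 0.40          # projected 2015 EPS, in dollars
    pe_multiple = 300             # "close to 300X projected EPS"

    implied_market_cap = revenue_multiple * est_2015_revenue      # ~ $7.4 billion
    implied_price_from_eps = pe_multiple * projected_eps          # ~ $120 per share

    print(f"Implied market cap at 12x revenue: ${implied_market_cap / 1e9:.1f}B")
    print(f"Implied share price at 300x of $0.40 EPS: ~${implied_price_from_eps:.0f}")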

Big Data Stocks: Red Hat (RHT)

The company we made money on earlier this year in my GameChangers service is Red Hat (RHT). We booked a 15% profit in just a few months after it popped 11% on fourth-quarter earnings.

Red Hat is the world’s leading provider of open-source solutions, providing software to 90% of Fortune 500 companies. Some of RHT’s customers include well-known names like Sprint (S), Adobe Systems (ADBE) and Cigna Corporation (CI).

Management’s goal is to become the undisputed leader of enterprise cloud computing, and it sees its popular Linux operating system as a way to the top. If RHT is successful — as I expect it will be — Red Hat should have a lengthy period of expanded growth as corporations increasingly move into the cloud.

Red Hat’s operating results had always clearly demonstrated that its solutions are gaining greater acceptance in IT departments, as revenues more than doubled in the five years between 2009 and 2014, from $748 million to $1.53 billion. I had expected the strong sales growth to continue throughout 2015, and it did. As I mentioned, impressive fiscal fourth-quarter results sent the shares 11% higher.

I recommended my subscribers sell their stake in the company at the end of March because I believed any further near-term upside was limited. Since then, shares have traded mostly between $75 and $80. It is now at the very top of that range and may be on the verge of breaking above it after the company reported fiscal first-quarter results last night.

Although orders were a little slow, RHT beat estimates on both the top and bottom lines in the first quarter. Earnings of 44 cents per share were up 29% year-over-year, besting the Street’s estimates of 41 cents. Revenue climbed 14% to $481 million, while analysts had been expecting $472.6 million.

At this point, RHT is now back in uncharted territory, climbing to a new 52-week high earlier today. This is a company with plenty of growth opportunities ahead, and while growth may slow a bit in the near term following the stock’s impressive climb so far this year, RHT stands to gain as corporations continue to adopt additional cloud technologies.

To read the original article on InvestorPlace, click here.

Originally Posted at: 3 Big Data Stocks Worth Considering by analyticsweekpick

Evaluating Hospital Quality using Patient Experience, Health Outcomes and Process of Care Measures

Patient experience (PX) has become an important topic for US hospitals. The Centers for Medicare & Medicaid Services (CMS) will be using patient feedback about their care as part of their reimbursement plan for acute care hospitals (see Hospital Value-Based Purchasing Program). Not surprisingly, hospitals are focusing on improving the patient experience to ensure they receive the maximum of their incentive payments. Additionally, US hospitals track other types of metrics (e.g., process of care and mortality rates) as measures of quality of care.

Given that hospitals have a variety of metrics at their disposal, it would be interesting to understand how these different metrics are related to each other. Do hospitals that receive higher PX ratings (e.g., more satisfied patients) also have better scores on other metrics (lower mortality rates, better process of care measures) than hospitals with lower PX ratings? In this week’s post, I will use the following hospital quality metrics:

  1. Patient Experience
  2. Health Outcomes (mortality rates, re-admission rates)
  3. Process of Care

I will briefly cover each of these metrics below.

Table 1. Descriptive Statistics for PX, Health Outcomes and Process of Care Metrics for US Hospitals (acute care hospitals only)

1. Patient Experience

Patient experience (PX) reflects patients’ perceptions of their recent inpatient experience. PX is collected by a survey known as HCAHPS (Hospital Consumer Assessment of Healthcare Providers and Systems). HCAHPS (pronounced “H-caps”) is a national, standardized survey of hospital patients, developed by a partnership of public and private organizations to publicly report the patient’s perspective of hospital care.

The survey asks a random sample of recently discharged patients about important aspects of their hospital experience. The data set includes patient survey results for more than 3,800 US hospitals on ten measures of patients’ perspectives of care (e.g., nurse communication, pain well controlled). I combined two general questions (overall hospital rating and recommend) to create a patient advocacy metric. Thus, a total of 9 PX metrics were used. Across all 9 metrics, hospital scores can range from 0 (bad) to 100 (good). You can see the PX measures for different US hospitals here.
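For illustration, here is a minimal sketch of how the two general HCAHPS items could be rolled into a single advocacy score. The simple average is my assumption, since the post does not spell out the exact formula, and the hospital names, column names and values below are hypothetical:

    import pandas as pd

    # Hypothetical per-hospital HCAHPS scores on the 0-100 scale described above.
    hcahps = pd.DataFrame({
        "hospital": ["Hospital A", "Hospital B", "Hospital C"],
        "overall_rating": [72, 65, 81],    # "Overall hospital rating" item
        "would_recommend": [70, 60, 85],   # "Recommend the hospital" item
    })

    # Combine the two general questions into one Patient Advocacy metric
    # (a simple average; assumed, not stated in the post).
    hcahps["patient_advocacy"] = hcahps[["overall_rating", "would_recommend"]].mean(axis=1)
    print(hcahps)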

2. Process of Care

Process of care measures show, in percentage form or as a rate, how often a health care provider gives recommended care; that is, the treatment known to give the best results for most patients with a particular condition. The process of care metric is based on medical information from patient records and reflects the rate or percentage across 12 procedures related to surgical care. Some of these procedures are related to antibiotics being given/stopped at the right times and treatments to prevent blood clots. These percentages were translated into scores that range from 0 (worst) to 100 (best). Higher scores indicate that the hospital has a higher rate of following best practices in surgical care.

I calculated an overall process of care metric by averaging the 12 process of care scores. This composite was used because it has good measurement properties (internal consistency was .75) and thus reflects a good overall measure of process of care. You can see the process of care measures for different US hospitals here.
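A minimal sketch of how that composite and its internal consistency could be computed, assuming the .75 figure refers to Cronbach’s alpha over the 12 per-procedure scores (the data below is randomly generated, for illustration only):

    import numpy as np
    import pandas as pd

    def cronbach_alpha(items: pd.DataFrame) -> float:
        """Cronbach's alpha for a set of item columns (one row per hospital)."""
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1).sum()
        total_variance = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances / total_variance)

    # poc: one row per hospital, 12 columns of 0-100 surgical process-of-care scores.
    rng = np.random.default_rng(0)
    poc = pd.DataFrame(rng.uniform(60, 100, size=(200, 12)),
                       columns=[f"procedure_{i+1}" for i in range(12)])

    overall_process_of_care = poc.mean(axis=1)   # average the 12 scores per hospital
    print("Internal consistency (alpha):", round(cronbach_alpha(poc), 2))
    print(overall_process_of_care.head())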

3. Health Outcomes

Measures that tell what happened after patients with certain conditions received hospital care are called “Outcome Measures.” We use two general types of outcome measures: 1) 30-day Mortality Rate and 2) 30-day Readmission Rate. The 30-day risk-standardized mortality and 30-day risk-standardized readmission measures for heart attack, heart failure, and pneumonia are produced from Medicare claims and enrollment data using sophisticated statistical modeling techniques that adjust for patient-level risk factors and account for the clustering of patients within hospitals.

The death rates focus on whether patients died within 30 days of their hospitalization. The readmission rates focus on whether patients were hospitalized again within 30 days.

Three mortality rate and readmission rate measures were included in the healthcare dataset for each hospital. These were:

  1. 30-Day Mortality Rate / Readmission Rate from Heart Attack
  2. 30-Day Mortality Rate / Readmission Rate from Heart Failure
  3. 30-Day Mortality Rate / Readmission Rate from Pneumonia

Mortality/readmission rates are measured per 1,000 patients. So, if a hospital has a heart attack mortality rate of 15, that means that for every 1,000 heart attack patients, 15 of them die (or, for the readmission measure, are readmitted) within 30 days. You can see the health outcome measures for different US hospitals here.

Table 2. Correlations of PX metrics with Health Outcome and Process of Care Metrics for US Hospitals (acute care hospitals only).

Results

The three types of metrics (PX, Health Outcomes, Process of Care) were housed in separate databases on the data.medicare.gov site. As explained elsewhere in my post on Big Data, I linked these three data sets together by hospital name. Basically, I federated the necessary metrics from their respective databases and combined them into a single data set.
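Here is a minimal sketch of that linkage step, assuming each data set has been pulled from data.medicare.gov as a CSV with a common hospital-name column; the file and column names below are hypothetical, not the actual extract names:

    import pandas as pd

    # Hypothetical file and column names; the real data.medicare.gov extracts differ.
    px = pd.read_csv("hcahps_patient_experience.csv")   # 9 PX metrics per hospital
    outcomes = pd.read_csv("health_outcomes.csv")       # 30-day mortality / readmission rates
    process = pd.read_csv("process_of_care.csv")        # surgical process-of-care scores

    # Link the three data sets on hospital name; an inner join keeps only
    # hospitals that appear in all three sources.
    combined = (
        px.merge(outcomes, on="hospital_name", how="inner")
          .merge(process, on="hospital_name", how="inner")
    )
    print(combined.shape)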

Descriptive statistics for each variable are located in Table 1. The correlations of each of the PX measures with each of the Health Outcome and Process of Care measures are located in Table 2. As you can see, the correlations of PX with the other hospital metrics are very low, suggesting that the PX measures are assessing something quite different from the Health Outcome and Process of Care measures.
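Once the combined data set exists, the correlation table can be produced in one step; a minimal sketch, continuing the hypothetical column names from the merge above:

    # Correlate each PX metric with each health outcome / process of care metric.
    px_cols = ["nurse_communication", "pain_control", "patient_advocacy"]        # hypothetical
    other_cols = ["mortality_heart_attack", "readmission_pneumonia",
                  "overall_process_of_care"]                                     # hypothetical

    corr_table = combined[px_cols + other_cols].corr().loc[px_cols, other_cols]
    print(corr_table.round(2))

    # Variance explained is the squared correlation: r = 0.17 implies r**2 of about 3%.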

Patient Loyalty and Health Outcomes and Process of Care

Patient loyalty/advocacy (as measured by the Patient Advocacy Index) is correlated in the expected direction with the other measures (except for death rate from heart failure). Hospitals that have higher patient loyalty ratings have lower death rates, lower readmission rates and higher levels of process of care. The degree of relationship, however, is quite small: the percent of variance explained by patient advocacy is only 3%, which corresponds to a correlation of roughly 0.17.

Patient Experience and Health Outcomes and Process of Care

Patient experience (PX) shows a complex relationship with the health outcome and process of care measures. It appears that hospitals with higher PX ratings also report higher death rates. However, as expected, hospitals with higher PX ratings report lower readmission rates. Although statistically significant, all of the correlations of the PX metrics with the other hospital metrics are low.

The PX dimension that had the highest correlation with readmission rates and process of care measures was “Given information about my recovery upon discharge.” Hospitals that received high scores on this dimension also experienced lower readmission rates and higher process of care scores.

Summary

Hospitals track different types of quality metrics, which are used to evaluate each hospital’s performance. Three different types of metrics for US hospitals were examined to understand how well they are related to each other (there are many other metrics on which hospitals can be compared). Results show that patient experience and patient loyalty are only weakly related to the other hospital metrics, suggesting that improving the patient experience will have little impact on the other hospital measures (health outcomes, process of care).

 

Source: Evaluating Hospital Quality using Patient Experience, Health Outcomes and Process of Care Measures by bobehayes

The Modern Day Software Engineer: Less Coding And More Creating

Last week, I asked the CEO of a startup company in Toronto, “How do you define a software engineer?”

She replied, “Someone who makes sh*t work.”

This used to be all you needed. If your online web app starts to crash, hire a software engineer to fix the problem.

If your app needs a new feature, hire a software engineer to build it (AKA weave together lines of code to make sh*t work).

We need to stop referring to an engineer as an ‘engineer’. CEOs of startups need to stop saying ‘we need more engineers’.

The modern day ‘engineer’ cannot simply be an engineer. They need to be a renaissance person; a person who is well versed in multiple aspects of life.

Your job as a software engineer cannot be to simply ‘write code’. That’s like saying a Canadian lawyer’s job is to speak English.

English and code are means of doing the real job: Produce value that society wants.

So, to start pumping out code to produce a new feature simply because it’s on the ‘new features list’ is mindless. You can’t treat code as an end in itself.

The modern day engineer (MDE) needs to understand the modern day world. The MDE cannot simply sit in a room alone and write code.

The MDE needs to understand the social and business consequences of creating and releasing a product.

The MDE cannot leave it up to the CEOs and marketers and business buffs to come up with the ‘why’ for a new product.

Everyone should be involved in the ‘why’, as long as they are in the ‘now’.

New frameworks that emphasize less code and more productivity are being released almost every day.

We are slowly moving towards a future where writing code will be so easy that it would be unimpressive to be someone who only writes code.

In the future Google Translate will probably add JavaScript and Python (and other programming languages) to their list of languages. Now all you have to do is type in English and get a JavaScript translation. In fact, who needs a programming language like JavaScript or Python when you can now use English to directly tell a computer what to do?

Consequently, code becomes a language that can be spoken by all. So, to write good code, you need to be more than an ‘engineer’. You need to be a renaissance person and a person who understands the wishes, wants, emotions and needs of the modern day world.

Today (October 22nd, 2015), I was at a TD Canada Trust networking event designed for ‘tech professionals’ in Waterloo, ON, Canada. The purpose of this event was to demo new ‘tech’ (the word has so many meanings nowadays) products to young students and professionals. The banking industry is in the process of a full makeover, if you didn’t know. One of the TD guys, let’s call him Julio, gave me a quick summary of what TD was (and is) trying to do with its recruitment process.

Let me give you the gist of what he said:

“We have business professionals (business analysts, etc) whose job is to understand the 5 W’s of the product. Also, we have engineers/developers/programmers who just write code. What we are now looking for is someone who can engage with others as well as do the technical stuff.”

His words were wise, but I was not sure if he fully understood the implications of what he was talking about. This is the direction we have been heading for quite some time now, but it’s about time we kick things up a notch.

Expect more of this to come.
Expect hybrid roles.
Expect it to become easier and easier to write code.
Expect to be valued for your social awareness paired with your ability to make sh*t work.

Perhaps software tech is at the beginning of a new Renaissance era.

*View the original post here*

Twitter: @nikhil_says

Email: nikhil38@gmail.com

Source by nbhaskar

7 Things to Look before Picking Your Data Discovery Vendor


Data discovery tools, also called data visualization tools and sometimes referred to as data analytics tools, are the talk of the town and reasonably hot today. With all the hype about big data, companies are going all out planning their big data strategies and are on the lookout for great tools to recruit into them. One thing to note here is that we don’t change our data discovery tool vendors every day. Once a tool gets into our systems, it eventually becomes part of our big data DNA. So, we should put much thought into what goes into picking a data discovery/visualization/analysis tool.

So, what would you do, and what would you consider important, while picking your data discovery tool vendor? I interviewed a couple of data scientists and data managers at a bunch of companies and prioritized the findings into the top 7 things to consider before you go out picking your big data discovery tool vendor.

Here are my 7 thoughts, in no particular order:

1. Not a major jump from what I already have: Yes, learning a new system takes time, effort, resources and cycles. So, the faster the ramp-up, or the shorter the learning curve, the better. Sure, many tools will be eons apart from what you are used to, but that should not deter you from evaluating them as well. Just score the tools with the minimum learning curve a bit higher. One thing to check here is that you should be able to do routine things with the new tool almost the same way you are used to doing them without it.

2. Helps me do more with my data: There will be several moments where you realize the tool could do a lot more than what you are used to or what you are capable of. This is another check to include in your equation: more features and capabilities within the discovery tool. The more it lets you do with your data, the better. You should be able to investigate your data more closely and along more dimensions, which will ultimately help you understand the data better.

3. Integrates well with my big data: Yes, first things first, you need a tool that at least has the capability to talk to your data. It should be able to mingle well with your data layouts and structures without too many time-consuming steps. A good tool will always make it almost seamless to integrate your data. If you have to jump through hoops or cut corners to make data integration happen, maybe you are looking at the wrong tool. So, get your data integration team to work and make sure data integration is a non-issue with the tool you are evaluating.

4. Friendly with outside data I might include as well: Many times, it is not only about your data. Sometimes you need to access and evaluate external data and find its relationship with your data. Those use cases must be checked as well: how easy is it to include external structured and unstructured data? The bigger the vendor’s product integration roadmap, the easier it will be for the tool to connect with external resources. Your preferred tool should integrate seamlessly with the data sets involved in your industry; social data, industry data and other third-party application data are some examples. So, ask your vendor how their tool mingles with outside data sets.

5. Scalability of the platform: Sure, the tool you are evaluating could do wonders with data and has a sweet feature set, but will it scale well as you grow? This is an important consideration, just like any good corporate tool consideration. As your business grows, so will its data and associated dependencies, but will your discovery tool grow with it? This finding must be part of your evaluation score for any tool you are planning to recruit for your big data discovery needs. So, get on a call with the vendor’s technical teams and grill them to understand how the tool will handle growing data. You don’t want to partner with a tool that will break in the future as your business grows.

6. Vendor’s vision is in line with our vision: The above five measures are pretty much standard and define the basic functionality a good tool should entail. It’s also no big surprise that most tools will have their own interpretation of those five points. One key thing to notice on the strategic front is the vendor’s vision for the company and the tool. A tool can do you good today: it has a boatload of features and is friendly with your data and outside data. But will it evolve with a strategy consistent with yours? Yes, no matter how odd it sounds, this is one of the realities you should consider. A vendor handling only health care will have some impact on companies using its tools in the insurance sector. A tool that handles only the clever visualization piece might have an impact on companies expecting some automation as part of the core tool’s evolution. So, it is important to understand the product vision of the tool company; that will help you understand whether it will align with your business needs tomorrow, the day after, or in the foreseeable future.

7. Awesome import/export tools to keep my data and analysis free: Another important thing to note is stickiness with the product. A good product design should not keep customers sticky by holding their data hostage; a good tool should bank on its features, usability and data-driven design. So, data and its derived knowledge should be easily importable/exportable to the most common standards (CSV, XML, etc.). This keeps the tool open to integration with third-party services that might emerge as the market evolves. It should be a consideration because it will play an instrumental role in moving your data around as you start dealing with new formats and new reporting tools that leverage your data discovery findings. One way to sanity-check this during an evaluation is sketched below.
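As promised, a minimal, generic sketch of that portability check: load a sample data set into the tool, export it back out through the tool’s export feature, and verify that nothing was lost. The file names here are hypothetical, and the export itself comes from the vendor’s tool:

    import pandas as pd

    # source.csv: the data set originally loaded into the tool under evaluation.
    # vendor_export.csv: the same data exported back out through the tool's export feature.
    source = pd.read_csv("source.csv")
    export = pd.read_csv("vendor_export.csv")

    # Basic portability checks: no columns renamed or dropped, no rows lost,
    # and identical values once both frames are put in the same order.
    assert set(source.columns) == set(export.columns), "Columns lost or renamed on export"
    assert len(source) == len(export), "Rows lost on export"

    cols = sorted(source.columns)
    same_values = (
        source[cols].sort_values(cols).reset_index(drop=True)
        .equals(export[cols].sort_values(cols).reset_index(drop=True))
    )
    print("Values preserved through export/import round trip:", same_values)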

I am certain that by the end of these 7 points you have thought of several more criteria one could keep in mind before picking a good data discovery tool. Feel free to email me your findings and I will keep adding them to the list.

Source: 7 Things to Look before Picking Your Data Discovery Vendor by v1shal

How are hybrid clouds taking over the hosting environment in 2018?

The days of choosing between public clouds and private clouds are over. Now is the time to choose hybrid clouds that offer the best features of both worlds with neat little price tags and lucrative perks. In the last couple of years, SMBs and start-ups have chosen hybrid cloud technology over private hosting for its cost-effective and resourceful nature. The new generation clouds come with flexible infrastructure and customizable security solutions.

Hybrid clouds aim for the perfect blend of private cloud security and public cloud costs. This enables client websites to remain in secured environments while enjoying SaaS and IaaS facilities. Hybrid cloud solutions have the power and the resources to support the data explosion. In a generation of big data, most IT companies and web solution companies are looking for platforms that can provide them holistic data backup and management facilities. Check out what Upper Saddle River NJ Digital Marketing Agency has to say about the pros of hybrid cloud hosting.

Automation of load balancing

Storage planning for big data is a tremendous driving force behind the increasing demand for hybrid clouds. Most service providers offer scalable storage solutions with complete security for their client websites. This, in turn, helps new businesses accommodate flexible workloads. Automation, analytics and data backup can all run on a demand-driven basis with most hybrid cloud hosting providers. This type of hosting provides responsive, software-based load balancing; therefore, capacity can instantly increase or decrease as per demand.

Increased competitive edge and utility-based costs

Enterprises that choose hybrid clouds have reported a decrease in operational and hosting costs over time. The resources these platforms provide often help these companies to expand their horizons and explore new markets. They enjoy better speed and connectivity during peak hours. Automation of cloud resources has helped speed up operations and has given websites a competitive edge over their contemporaries on shared clouds.

Most companies that opt for hybrid hosting solutions report a sharp decrease in operating costs and an increase in customer satisfaction. Almost 56% of all entrepreneurs who currently use cloud technology services for hosting report seeing a competitive advantage as a result of their choice. They also report a much higher ROI compared to private cloud users and shared hosting users.

Reliable services

New businesses need to garner a trustworthy image among their target customers and clients. For this, they need a reliable hosting solution that can offer them next to nil downtime. This should hold true even during a disaster. Traditional website hosting relied on HDD and SSD backups, which were susceptible to natural disasters or human-made crises. Hybrid hosting solutions offer complete cloud integration. The implementation of SaaS and IaaS in your current business operations will allow you to replicate all critical data in a different location. This kind of complete backup solution provides data insurance against all kinds of disasters.

Physical security and cloud security

Security concerns are always present, and currently they are on the rise thanks to the very real and recent threats ransomware has posed to numerous websites, customers, and regular internet users. Cloud hosting services from hybrid clouds provide enhanced security since the providers store the physical servers within physical data centers. Clients enjoy the protection the facility implements to prevent hackers from accessing the files on-site.

In the case of business websites using shared clouds, experts can often hold a legitimate website guilty simply because it shares a platform with a nefarious site. This is a classic case of guilt by association that is sadly still prominent on the web. Google can penalize websites operating via the same cloud for sharing a platform with another site that indulges in severe black hat techniques.

Provides the latest technologies

From this perspective, it is true that private hosting solutions are the safest, since they provide the highest quality of security and managed hosting. Nonetheless, hybrid cloud solutions have also been improving over the years, and currently you will find more than one that promises high-end security measures for all its client websites.

However, when we think about the latest innovative technologies and infrastructures, hybrid cloud systems always take the champion’s trophy. Private systems have their pros, but with hybrid systems, the packages are more flexible. The latter is known to offer the biggest number of opportunities regarding infrastructure for entrepreneurs and website owners.

All business owners, who want the safety of a private cloud, but want to pay the prices of a public cloud, should opt for hybrid infrastructure for their business model.

Source: How are hybrid clouds taking over the hosting environment in 2018? by thomassujain

Sears’ big-data strategy? Just a service call away

If you’d like to see less of your Sears (SHLD) repairman, rest assured, the feeling is mutual.

The venerable (but unprofitable) department store, which is the single largest seller of home appliances in the U.S. and installed 4.5 million of them last year, recently opened a new technology center in Seattle. One of its mandates? Mine data gleaned from the tens of millions of visits that Sears technicians have made to American homes over decades to more effectively diagnose a problem that an air-conditioning unit or dishwasher is having—well before a service call is made.


That’s right: The Sears repairman, clad in his royal-blue shirt, is as valuable a data vehicle as a cookie stored in your web browser. With 7,000 technicians, Sears is the biggest repair service in the country, visiting 8 million homes a year. Its technicians have catalogued hundreds of millions of records, taking note of the location, model, and make—Sears services a wide array of brands, not just its own 102-year-old Kenmore line—on each visit, so its diagnostic technology can calculate the complexity of a repair as well as a cost and time estimate.
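Purely as an illustration of the kind of model such a diagnostic system could rest on (a hypothetical sketch, not Sears’ actual implementation), here is a toy example that predicts repair time from a few service-record fields:

    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import OneHotEncoder

    # Hypothetical service-record fields; a real system would draw on far richer history.
    records = pd.DataFrame({
        "brand":        ["Kenmore", "Whirlpool", "Kenmore", "GE", "Kenmore", "LG"],
        "appliance":    ["dishwasher", "washer", "ac_unit", "dishwasher", "washer", "ac_unit"],
        "symptom":      ["not_draining", "no_spin", "no_cooling", "leaking", "no_spin", "no_cooling"],
        "age_years":    [3, 7, 5, 2, 9, 4],
        "repair_hours": [1.0, 2.5, 3.0, 1.5, 2.8, 2.6],   # observed on past visits
    })

    features = ["brand", "appliance", "symptom", "age_years"]
    model = Pipeline([
        ("encode", ColumnTransformer(
            [("categories", OneHotEncoder(handle_unknown="ignore"),
              ["brand", "appliance", "symptom"])],
            remainder="passthrough")),
        ("regress", GradientBoostingRegressor(random_state=0)),
    ])
    model.fit(records[features], records["repair_hours"])

    # Estimate repair time (a proxy for complexity and cost) before dispatching a technician.
    new_call = pd.DataFrame([{"brand": "Kenmore", "appliance": "dishwasher",
                              "symptom": "leaking", "age_years": 4}])
    print("Estimated repair hours:", model.predict(new_call)[0])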


The upside of that data crunching? A reduction in the number of times Sears must dispatch technicians, saving the retailer a nice chunk of change at a time when its sales are flagging, sparing customers a lot of aggravation, and helping it snatch away business from competing repair services. Industrywide, service calls fix the problem on the first visit 75% of the time; Sears’ lofty goal is to get that to a 95% resolution rate. (The company won’t disclose its current rate, saying only that it is above average.)

“How do we leverage the data we have and our digital experience to disrupt a pretty sleepy industry?” asks Arun Arora, a former Staples (SPLS) and Groupon (GRPN) executive who now oversees home appliances and services for Sears. “We’re going to use the digital backbone and data we have that we have not uniquely taken advantage of.”

Its new facility also gives Sears a plum spot in the emerging market for smart home tech and services, something that fits well into CEO Eddie Lampert’s strategy to revive the retailer and reinvent it by turning it into—what else?—more of a technology company.

To read the original article on Fortune, click here.

Originally Posted at: Sears’ big-data strategy? Just a service call away by analyticsweekpick

A Check-Up for Artificial Intelligence in the Enterprise

As organizations get ready to invest (or further invest) in AI, some recent research efforts offer insight into what the status quo is around AI in the enterprise and the barriers that could impede adoption. 

According to a recent Teradata study, 80% of IT and business decision-makers have already implemented some form of artificial intelligence (AI) in their business.

The study also found that companies have a desire to increase AI spending. Forty-two percent of respondents to the Teradata study said they thought there was more room for AI implementation across the business, and 30% said their organizations weren’t investing enough in AI.

Forrester recently released its 2018 predictions and also found that firms have an interest in investing in AI. Fifty-one percent of its 2017 respondents said their firms were investing in AI, up from 40% in 2016, and 70% of respondents said their firms will have implemented AI within the next 12 months.

While the interest to invest in and grow AI implementation is there, 91% of respondents to the Teradata survey said they expect to see barriers get in the way of investing in and implementing AI.

Forty percent of respondents to the Teradata study said a lack of IT infrastructure was preventing AI implementation, making it the number one barrier to AI. The second most cited challenge, noted by 30% of Teradata respondents, was lack of access to talent and understanding.

“A lot of the survey results were in alignment with what we’ve experienced with our customers and what we’re seeing across all industries – talent continues to be a challenge in an emerging space,” says Atif Kureishy, Global Vice President of Emerging Practices at Think Big Analytics, a Teradata company.

When it comes to barriers to AI, Kureishy thinks that the greatest obstacles to AI are actually found much farther down the list noted by respondents.

“The biggest challenge [organizations] need to overcome is getting access to data. It’s the seventh barrier [on the list], but it’s the one they need to overcome the most,” says Kureishy.

Kureishy believes that because AI has the eye of the C-suite, organizations are going to find the money and infrastructure and talent. “But you need access to high-quality data, that drives training of these [AI] models,” he says.

Michele Goetz, principal analyst at Forrester and co-author of the Forrester report, “Predictions 2018: The Honeymoon For AI Is Over,” also says that data could be the greatest barrier to AI adoption.

“It all comes down to, how do you make sure you have the right data and you’ve prepared it for your AI algorithm to digest,” she says.

How will companies derive value out of AI? Goetz says in this data and insights-driven business world, companies are looking to use insights to improve experiences with customers. “AI is really recognized by companies as a way to create better relationships and better experiences with their customers,” says Goetz.

One of the most significant findings that came out of the Forrester AI research, says Goetz, is that AI will have a major impact on the way companies think about their business models.

“It is very resource intensive to adopt [AI] without a clear understanding of what [it] is going to do,” says Goetz, “So, you’re seeing there’s more thought going into [the question of] how will this change my business process.”

The Forrester Predictions research also showed that 20% of firms will use AI to make business decisions and prescriptive recommendations for employees and customers. In other words, “machines will get bossy,” says Goetz.

Goetz also says that AI isn’t about replacing employees, it’s about getting more value out of them. “Instead of focusing on drudge work or answering questions that a virtual agent can answer, you can allow those employees to be more creative and think more strategically in the way that they approach tasks.”

And in terms of how you can get a piece of the AI pie? Focus your growth on data engineering skills. Forrester predicts that the data engineer will be the new hot job in 2018.

A Udacity blog post describes data engineers as, “responsible for compiling and installing database systems, writing complex queries, scaling to multiple machines, and putting disaster recovery systems into place.” In essence, they set the data up for data scientists to do the analysis. They also often have a background in software engineering. And according to data gathered in June of 2017 and noted in the Forrester Predictions report, 13% of data-related job postings on Indeed.com were for data engineers, while fewer than 1% were for data scientists.

The post A Check-Up for Artificial Intelligence in the Enterprise appeared first on Think Big.

Originally Posted at: A Check-Up for Artificial Intelligence in the Enterprise by analyticsweekpick

#FutureOfData Podcast: Peter Morgan, CEO, Deep Learning Partnership – Playcast – Data Analytics Leadership Playbook Podcast

* ERRATA (as reported by Peter): “The book Peter mentioned (at 46:20) by Stuart Russell, ‘Do the Right Thing,’ was published in 2003, and not recently.”

In this session Peter Morgan, CEO of Deep Learning Partnership, sat with Vishal Kumar, CEO of AnalyticsWeek, and shared his thoughts on deep learning, machine learning and artificial intelligence. They discussed some of the best practices when it comes to picking the right solution and the right vendor, and what some of the key terms mean.

Here’s Peter’s Bio:
Peter Morgan is a scientist-entrepreneur who started out in high energy physics, enrolled in the PhD program at the University of Massachusetts at Amherst. After leaving UMass and founding his own company, Peter moved into computer networks, designing, implementing and troubleshooting global IP networks for companies such as Cisco, IBM and BT Labs. After getting an MBA and dabbling in financial trading algorithms, Peter worked for three years on an experiment led by Stanford University to measure the mass of the neutrino. Since 2012, he has been working in data science and deep learning, and he founded an AI solutions company in January 2016.

As an entrepreneur, Peter has founded companies in the AI, social media and music industries. He has also served on the advisory boards of technology startups. Peter is a popular speaker at conferences, meetups and webinars. He has cofounded and currently organizes meetups in the deep learning space. Peter has business experience in the USA, UK and Europe.

Today, as CEO of Deep Learning Partnership, he leads the strategic direction and business development across products and services. This includes sales and marketing, lead generation, client engagement, recruitment, content creation and platform development. Deep learning technologies used include computer vision and natural language processing, with frameworks like TensorFlow, Keras and MXNet. Deep Learning Partnership designs and implements AI solutions for its clients across all business domains.

Podcast is sponsored by:
TAO.ai(https://tao.ai), Artificial Intelligence Driven Career Coach

Originally Posted at: #FutureOfData Podcast: Peter Morgan, CEO, Deep Learning Partnership – Playcast – Data Analytics Leadership Playbook Podcast by v1shal