6 things that you should know about VMware vSphere 6.5

vSphere 6.5 offers a resilient, highly available, on-demand infrastructure that is the perfect groundwork for any cloud environment. It delivers innovation that supports the digital transformation of the business and makes the IT administrator's job simpler, freeing up time for innovation instead of maintaining the status quo. Furthermore, vSphere is the foundation of VMware's hybrid cloud strategy and is necessary for cross-cloud architectures. Here are six essential features of the new and updated vSphere.

vCenter Server appliance

vCenter is the essential back-end tool that controls VMware's virtual infrastructure. vCenter 6.5 has many innovative, upgraded features, including a migration tool that aids in moving from vSphere 5.5 or 6.0 to vSphere 6.5. The vCenter Server Appliance also embeds VMware Update Manager (VUM), eliminating the need to run an external Windows VM for update tasks or to use pesky plugins.

vSphere client

In the past, the front-end client used to access the vCenter Server was old-fashioned and clunky. The vSphere client has now undergone the necessary HTML5 overhaul. Aside from the expected performance gains, the change makes the tool cross-browser compatible and more mobile-friendly. Plugins are no longer needed, and the UI has been switched to a more modern aesthetic based on the VMware Clarity UI.

Backup and restore

The backup and restore capability of vSphere 6.5 is an excellent addition that enables clients to back up the vCenter Server or any Platform Services Controller appliance directly from the Application Programming Interface (API) or the Virtual Appliance Management Interface (VAMI). It can also back up both VUM and Auto Deploy when they are embedded within the appliance. The backup consists of a set of files that are streamed to a storage device of your choice over the SCP, FTP(S), or HTTP(S) protocols.
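As a rough illustration, a file-based backup request might be assembled like this. Note this is a minimal sketch: the payload field names below are assumptions modeled loosely on the vSphere 6.5 appliance REST API, so verify them against your vCenter's own API documentation before use.

```python
# Sketch: building a file-based backup request for the vCenter appliance.
# Field names are assumptions modeled on the vSphere 6.5 appliance REST API;
# check your vCenter's API documentation before relying on them.

def build_backup_request(protocol, location, username, password, parts=("common",)):
    """Assemble the JSON body for a hypothetical backup-job request."""
    if protocol not in ("SCP", "FTP", "FTPS", "HTTP", "HTTPS"):
        raise ValueError("unsupported transfer protocol: %s" % protocol)
    return {
        "piece": {
            "location_type": protocol,
            "location": location,
            "location_user": username,
            "location_password": password,
            # "common" covers configuration/inventory; other parts add
            # historical data such as stats, events, and tasks
            "parts": list(parts),
        }
    }

req = build_backup_request("SCP", "backup.example.com/vcsa", "backup", "secret")
```

An actual run would POST this body, authenticated, to the appliance's backup-job endpoint and then poll the job for completion.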

Superior automation capabilities

With regard to automation, VMware vSphere 6.5 shines because of its new upgrades. The reworked PowerCLI is an excellent addition to the VMware toolset because it is now fully module-based, and the REST APIs are in high demand. These features let IT administrators fully automate tasks down to the virtual machine level.
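To give a flavor of REST-driven automation, here is a minimal sketch of building the request URLs for a VM inventory query. The endpoint paths and the `vcenter.example.com` host are assumptions based on the vSphere 6.5 Automation API; verify them against your own vCenter's API explorer.

```python
# Sketch: automating VM inventory queries against the vSphere REST API.
# Endpoint paths are assumptions based on the vSphere 6.5 Automation API
# (/rest/com/vmware/cis/session, /rest/vcenter/vm); verify before use.

def session_url(host):
    """URL used to open an authenticated API session."""
    return "https://%s/rest/com/vmware/cis/session" % host

def vm_list_url(host, power_state=None):
    """URL listing VMs, optionally filtered by power state."""
    url = "https://%s/rest/vcenter/vm" % host
    if power_state:
        url += "?filter.power_states=%s" % power_state
    return url

# An actual run would POST to session_url() with basic auth, capture the
# returned token, then GET vm_list_url() with that token passed in the
# "vmware-api-session-id" header.
login = session_url("vcenter.example.com")
powered_on = vm_list_url("vcenter.example.com", "POWERED_ON")
```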

Secure boot

vSphere 6.5 adds support for secure boot-enabled virtual machines. The feature is available for both Linux and Windows VMs, and secure boot is enabled by ticking a single checkbox in the VM's properties. Once enabled, only properly signed components can be used to boot the virtual machine.

Improved auditing

vSphere 6.5 offers clients improved audit-quality logging. This provides more forensic detail about user actions, making it easier to determine what was done, when, and by whom, and whether any anomalies or security threats warrant investigation.
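The "who did what, when" question boils down to parsing and aggregating log records. The sketch below uses an invented log-line format purely for illustration; real vCenter events come from the events API or syslog in their own schema.

```python
# Sketch: answering "who did what, when" from audit-quality logs.
# The log line format here is hypothetical; real vCenter events arrive
# via the events API or syslog in their own schema.
import re
from collections import Counter

LOG_RE = re.compile(
    r"(?P<when>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}Z) "
    r"user=(?P<who>\S+) action=(?P<what>\S+)"
)

def audit_summary(lines):
    """Count actions per user from audit log lines, skipping malformed ones."""
    per_user = Counter()
    for line in lines:
        m = LOG_RE.match(line)
        if m:
            per_user[m.group("who")] += 1
    return per_user

logs = [
    "2019-09-01T10:00:00Z user=alice action=vm.poweron",
    "2019-09-01T10:05:00Z user=bob action=vm.delete",
    "2019-09-01T10:06:00Z user=bob action=vm.delete",
]
summary = audit_summary(logs)
```

A spike in destructive actions by one account, for example, is exactly the kind of anomaly the improved logging makes easy to spot.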

VMware’s vSphere developed out of the complexity and necessity of the expanding virtualization market. The earlier server products were not robust enough to handle the increasing demands of IT departments. As businesses invested in virtualization, they had to consolidate and simplify their physical server farms into virtualized ones, and this triggered the need for virtual infrastructure. With these vSphere 6.5 features in mind, you can unleash its full potential. Make the switch today to the new and innovative VMware vSphere 6.5.

 

Originally Posted at: 6 things that you should know about VMware vSphere 6.5 by thomassujain

Sep 26, 19: #AnalyticsClub #Newsletter (Events, Tips, News & more..)


[  COVER OF THE WEEK ]

Data shortage

[ AnalyticsWeek BYTES]

>> Rethinking classical approaches to analysis and predictive modeling by v1shal

>> A Practical Way to Approach Predictive Analytics Accuracy by analyticsweek

>> The Unexpected Connections Between Bitcoin and The Dow by analyticsweek

Wanna write? Click Here

[ FEATURED COURSE]

Data Mining


Data that has relevance for managerial decisions is accumulating at an incredible rate due to a host of technological advances. Electronic data capture has become inexpensive and ubiquitous as a by-product of innovations… more

[ FEATURED READ]

Python for Data Analysis: Data Wrangling with Pandas, NumPy, and IPython


Python for Data Analysis is concerned with the nuts and bolts of manipulating, processing, cleaning, and crunching data in Python. It is also a practical, modern introduction to scientific computing in Python, tailored f… more

[ TIPS & TRICKS OF THE WEEK]

Data Analytics Success Starts with Empowerment
Being data driven is not as much a tech challenge as an adoption challenge. Adoption has its roots in the cultural DNA of any organization. Great data driven organizations weave the data driven culture into the corporate DNA. A culture of connection, interaction, sharing and collaboration is what it takes to be data driven. It's about being empowered more than it's about being educated.

[ DATA SCIENCE Q&A]

Q: When would you use random forests vs SVM, and why?
A: * For a multi-class classification problem: SVM requires a one-against-all approach (memory intensive)
* If one needs to know variable importance (random forests can compute it)
* If one needs a model fast (SVM is slow to tune; you need to choose an appropriate kernel and its parameters, for instance sigma and epsilon)
* In a semi-supervised learning context (random forest with a dissimilarity measure): SVM works only in a supervised learning mode
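The rules of thumb above can be encoded as a tiny helper. This is purely an illustration of the listed criteria, not a substitute for benchmarking both model families on your own data:

```python
# Sketch: the decision criteria above encoded as a helper function.
# Any single criterion favoring random forests is enough to pick them;
# otherwise an SVM (binary, supervised, with time to tune a kernel) is fine.

def pick_model(multi_class=False, need_importance=False,
               need_fast=False, semi_supervised=False):
    """Return 'random forest' or 'SVM' based on the criteria above."""
    if multi_class or need_importance or need_fast or semi_supervised:
        return "random forest"
    return "SVM"

assert pick_model(multi_class=True) == "random forest"
```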

Source

[ VIDEO OF THE WEEK]

@AnalyticsWeek Panel Discussion: Big Data Analytics


Subscribe to  Youtube

[ QUOTE OF THE WEEK]

Everybody gets so much information all day long that they lose their common sense. – Gertrude Stein

[ PODCAST OF THE WEEK]

#FutureOfData Podcast: Conversation With Sean Naismith, Enova Decisions


Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

By 2020, our accumulated digital universe of data will grow from 4.4 zettabytes today to around 44 zettabytes, or 44 trillion gigabytes.

Sourced from: Analytics.CLUB #WEB Newsletter

How the NFL is Using Big Data

Your fantasy football team just went high tech.

Like many businesses, the National Football League is experimenting with big data to help players, fans, and teams alike.

The NFL recently announced a deal with tech firm Zebra to install RFID data sensors in players’ shoulder pads and in all of the NFL’s arenas. The chips collect detailed location data on each player, and from that data, things like player acceleration and speed can be analyzed.
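From timestamped location samples like those the chips produce, speed falls out of simple geometry. The coordinates and numbers below are invented for illustration:

```python
# Sketch: deriving player speed from RFID location samples.
# Each sample is (time in seconds, x in meters, y in meters); data is made up.
import math

def speeds(samples):
    """Compute speed (m/s) between consecutive (t, x, y) samples."""
    out = []
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        dist = math.hypot(x1 - x0, y1 - y0)  # straight-line distance moved
        out.append(dist / (t1 - t0))
    return out

track = [(0.0, 0.0, 0.0), (1.0, 3.0, 4.0), (2.0, 9.0, 12.0)]
# 5 m covered in the first second, 10 m in the next
```

Acceleration is just the change in these speeds over time, which is how broadcast-ready stats like "top speed on that breakaway run" get computed.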

The NFL plans to make the data available to fans and teams, though not during game play. The thought is that statistics-mad fans will jump at the chance to consume more data about their favorite players and teams.

In the future, the data collection might be expanded. In last year’s Pro Bowl, sensors were installed in the footballs to show exactly how far they were thrown.

Big data on the gridiron
Of course, this isn’t the NFL’s first foray into big data. In fact, like other statistics-dependent sports leagues, the NFL was crunching big data before the term even existed.

However, in the last few years, the business has embraced the technology side, hiring its first chief information officer, and developing its own platform available to all 32 teams. Individual teams can create their own applications to mine the data to improve scouting, education, and preparation for meeting an opposing team.

It’s also hoped that the data will help coaches make better decisions. They can review real statistics about an opposing team’s plays or how often one of their own plays worked rather than relying solely on instinct. They will also, in the future, be able to use the data on an individual player to determine if he is improving.

Diehard fans can, for a fee, access this same database to build their perfect fantasy football team. Because, at heart, the NFL believes that the best fans are engaged fans. They want to encourage the kind of obsessive statistics-keeping that many sport fans are known for.


Will big data change the game?

It’s hard to predict how this flood of new data will impact the game. Last year, only 14 stadiums and a few teams were outfitted with the sensors. And this year, the NFL decided against installing sensors in all footballs after the politics of last year’s “deflate gate,” when the Patriots were accused of underinflating footballs for an advantage.

Still, it seems fairly easy to predict that the new data will quickly make its way into TV broadcast booths and instant replays. Broadcasters love to have additional data points to examine between plays and between games.

And armchair quarterbacks will now have yet another insight into the game, with access (for a fee) to the same information the coaches have. Which will, of course, mean they can make better calls than the coaches. Right?

Bernard Marr is a best-selling author, keynote speaker and business consultant in big data, analytics and enterprise performance. His new books are ‘Big Data’ ‘Key Business Analytics’

Source by analyticsweekpick

Sep 19, 19: #AnalyticsClub #Newsletter (Events, Tips, News & more..)


[  COVER OF THE WEEK ]

Insights

[ AnalyticsWeek BYTES]

>> Future of Public Sector and Jobs in #BigData World #FutureOfData #Podcast by v1shal

>> Black Friday is Becoming Irrelevant, How Retailers Should Survive by v1shal

>> Pascal Marmier (@pmarmier) @SwissRe discuss running data driven innovation catalyst by v1shal

Wanna write? Click Here

[ FEATURED COURSE]

Statistical Thinking and Data Analysis


This course is an introduction to statistical data analysis. Topics are chosen from applied probability, sampling, estimation, hypothesis testing, linear regression, analysis of variance, categorical data analysis, and n… more

[ FEATURED READ]

Rise of the Robots: Technology and the Threat of a Jobless Future


What are the jobs of the future? How many will there be? And who will have them? As technology continues to accelerate and machines begin taking care of themselves, fewer people will be necessary. Artificial intelligence… more

[ TIPS & TRICKS OF THE WEEK]

Analytics Strategy that is Startup Compliant
With the right tools, capturing data is easy, but not being able to handle that data can lead to chaos. One of the most reliable startup strategies for adopting data analytics is TUM, or The Ultimate Metric. This is the metric that matters the most to your startup. Some advantages of TUM: it answers the most important business question, it cleans up your goals, it inspires innovation and helps you understand the entire quantified business.

[ DATA SCIENCE Q&A]

Q: Provide examples of machine-to-machine communications.
A: Telemedicine
– Heart patients wear specialized monitors which gather information about the state of the heart
– The collected data is sent to an implanted electronic device which delivers electric shocks to the patient to correct irregular rhythms

Product restocking
– Vending machines can message the distributor whenever an item is running low

Source

[ VIDEO OF THE WEEK]

@AnalyticsWeek Keynote: The CMO isn't satisfied: Judah Phillips


Subscribe to  Youtube

[ QUOTE OF THE WEEK]

The goal is to turn data into information, and information into insight. – Carly Fiorina

[ PODCAST OF THE WEEK]

Solving #FutureOfOrgs with #Detonate mindset (by @steven_goldbach & @geofftuff) #FutureOfData #Podcast


Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

More than 5 billion people are calling, texting, tweeting and browsing on mobile phones worldwide.

Sourced from: Analytics.CLUB #WEB Newsletter

Mastering Deep Learning with Self-Service Data Science for Business Users

The deployment of deep learning is frequently accompanied by a singular paradox which has traditionally proved difficult to redress. Its evolving algorithms are intelligent enough to solve business problems, but utilizing those algorithms is based on data science particularities business users don’t necessarily understand.

The paucity of data scientists exacerbates this situation, which traditionally results in one of two outcomes: either deep learning is limited in the number of use cases for which it's deployed throughout the enterprise, or the quality of its results is compromised. Both outcomes fail to actualize the full potential of deep learning or data science.

According to Mitesh Shah, MapR Senior Technologist, Industry Solutions: “The promise of AI is about injecting intelligence into operations so you are actively making customer engagement more intelligent.” Doing so productively implicitly necessitates business user involvement with these technologies.

In response to this realization, a number of different solutions have arisen to provision self-service data science so laymen business users understand how to create deep learning models, monitor and adjust them accordingly, and even explain their results while solving some of their more intractable domain problems.

Most convincingly, there are a plethora of use cases in which deep learning facilitates these boons for “folks who are not data scientists by education or training, but work with data throughout their day and want to extract more value from data,” noted indico CEO Tom Wilde.

Labeled Training Data
The training data required for building deep learning’s predictive models pose two major difficulties for data science. They require labeled output data and massive data quantities to suitably train models for useful levels of accuracy. Typically, the first of these issues was addressed when “the data scientists would say to the subject matter experts or the business line, give us example data labeled in a way you hope the outcome will be predicted,” Wilde maintained. “And the SME [would] say I don’t know what you mean; what are you even asking for?” Labeled output data is necessary for models to use as targets or goals for their predictions. Today, self-service platforms for AI make this data science requisite easy by enabling users to leverage intuitive means of labeling training data for this very purpose. With simple browser-based interfaces “you can use something you’re familiar with, like Microsoft Word or Google Docs,” Wilde said. “The training example pops up in your screen, you underline a few sentences, and you click on a tag that represents the classification you’re trying to do with that clause.”

For instance, when ensuring contracts are compliant with the General Data Protection Regulation, users can highlight clauses for personally identifiable data with examples that both adhere to, and fail to adhere to, this regulation. “You do about a few dozen of each of those, and once you’ve done it you’ve built your model,” Wilde mentioned. The efficiency of this process is indicative of the effect of directly involving business users with AI. According to Shah, such involvement makes “production more efficient to reduce costs. This requires not only AI but the surrounding data logistics and availability to enable this…in a time-frame that enables the business impact.”

Feature Engineering and Transfer Learning
In the foregoing GDPR example, users labeled output training data to build what Wilde referred to as a “customized model” for their particular use case. They are only able to do so this quickly, however, by leveraging a general model and the power of transfer learning to focus the former’s relevant attributes for the business user’s task—which ultimately affects the model’s feature detection and accuracy. As previously indicated, a common data science problem for advanced machine learning is the inordinate amounts of training data required. Wilde commented that a large part of this data is required for “featurization: that’s generally why with deep learning you need so much training data, because until you get to this critical mass of featurization, it doesn’t perform very robustly.” However, users can build accurate custom models with only negligible amounts of training data because of transfer learning. Certain solutions facilitate this process with “a massive generalized model with half a billion labeled records in it, which in turn created hundreds and hundreds of millions of features and vectors that basically creates a vectorization of language,” Wilde remarked. Even better, such generalized models are constructed “across hundreds of domains, hundreds of verticals, and hundreds of use cases” Wilde said, which is why they are readily applicable to the custom models of self-service business needs via transfer learning. This approach allows the business to quickly implement process automation for use cases with unstructured data such as reviewing contracts, dealing with customer support tickets, or evaluating resumes.
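Transfer learning can be shown in miniature with nothing but the standard library. In this sketch a fixed "pretrained" featurizer stands in for the generalized model, and only a small linear head is trained on a handful of labeled clauses; the vocabulary and example sentences are invented:

```python
# Sketch: transfer learning in miniature. The featurizer is frozen (standing
# in for a pretrained generalized model); only the perceptron head is trained
# on a few labeled examples. Vocabulary and data are invented.

VOCAB = ["email", "name", "address", "phone", "total", "fee", "term", "party"]

def featurize(text):
    """Frozen feature extractor: bag-of-words counts over a fixed vocabulary."""
    words = text.lower().split()
    return [words.count(w) for w in VOCAB]

def train_head(examples, epochs=20):
    """Train a perceptron head on (text, label) pairs; the featurizer stays frozen."""
    w, b = [0.0] * len(VOCAB), 0.0
    for _ in range(epochs):
        for text, label in examples:
            x = featurize(text)
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = label - pred
            if err:
                w = [wi + err * xi for wi, xi in zip(w, x)]
                b += err
    return w, b

def predict(model, text):
    w, b = model
    x = featurize(text)
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

labeled = [  # 1 = clause contains personal data, 0 = it does not
    ("customer name and email on file", 1),
    ("billing address and phone number", 1),
    ("the total fee for the term", 0),
    ("each party agrees to the term", 0),
]
model = train_head(labeled)
```

Only the tiny head needed training data; all the representational work sits in the frozen featurizer, which is exactly the leverage transfer learning gives business users with only "a few dozen" labeled examples.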

Explainability
Another common data science issue circumscribing deep learning deployments is the notion of explainability, which can even hinder the aforementioned process automation use cases. As Shah observed, “AI automates tasks that normally require human intelligence, but does not remove the need for humans entirely. Business users in particular are still an integral part of the AI revolution.” This statement applies to explainability in particular, since it’s critical for people to understand and explain the results of deep learning models in order to gauge their effectiveness. The concept of explainability alludes to the fact that most machine learning models simply generate a numerical output—usually a score—indicative of how likely specific input data will achieve the model’s desired output. With deep learning models in particular, those scores can be confounding because deep learning often does its own feature detection. Thus, it’s difficult for users to understand how models arrive at their particular scores for specific data.

Self-service AI options, however, address this dilemma in two ways. Firstly, they incorporate interactive dashboards so users can monitor the performance of their models with numerical data. Additionally, by clicking on various metrics reflected on the dashboard “it opens up the examples used to make that prediction,” Wilde explained. “So, you actually can track back and see what precisely was used as the training data for that particular prediction. So now you’ve opened up the black box and get to see what’s inside the black box [and] what it’s relying on to make your prediction, not just the number.”
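The "track back to the training data" idea amounts to a nearest-neighbor lookup in the model's feature space. The sketch below uses invented vectors and labels; a real system would use the model's own learned features:

```python
# Sketch: "opening the black box" by returning the training examples most
# similar to the input being scored. Vectors and labels are invented.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den if den else 0.0

def explain(query, training, k=2):
    """Return labels of the k training items most similar to the query."""
    ranked = sorted(training, key=lambda item: cosine(query, item[0]), reverse=True)
    return [label for _, label in ranked[:k]]

training = [
    ([1.0, 0.0, 0.0], "clause A (PII)"),
    ([0.9, 0.1, 0.0], "clause B (PII)"),
    ([0.0, 0.0, 1.0], "clause C (boilerplate)"),
]
nearest = explain([1.0, 0.05, 0.0], training)
```

Surfacing these neighbors beside the score is what lets a user see "what it's relying on to make your prediction, not just the number."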

Business Accessible Data Science
Explainability, feature engineering, transfer learning, and labeled output data are crucial data science prerequisites for deploying deep learning. The fact that there are contemporary options for business users to facilitate all of these intricacies suggests how essential the acceptance, and possibly even mastery, of this technology is for the enterprise today. It’s no longer sufficient for a few scarce data scientists to leverage deep learning; its greater virtue is in its democratization for all users, both technical and business ones. This trend is reinforced by training designed to educate users—business and otherwise—about fundamental aspects of analytics. “The MapR Academy on-demand Essentials category offers use case-driven, short, non-lab courses that provide technical topic introductions as well as business context,” Shah added. “These courses are intended to provide insight for a wide variety of learners, and to function as stepping off points to further reading and exploration.”

Ideally, options for self-service data science targeting business users could actually bridge the divide between the technically proficient and those who are less so. “There are two types of people in the market right now,” Wilde said. “You have one persona that is very familiar with AI, deep learning and machine learning, and has a very technical understanding of how do we attack this problem. But then there’s another set of folks for whom their first thought is not how does AI work; their first thought is I have a business problem, how can I solve it?”

Increasingly, the answers to those inquiries will involve self-service data science.

Source by jelaniharper

Sep 12, 19: #AnalyticsClub #Newsletter (Events, Tips, News & more..)


[  COVER OF THE WEEK ]

Productivity

[ AnalyticsWeek BYTES]

>> Validating a Lostness Measure by analyticsweek

>> 6 Factors to Consider Before Building a Predictive Model for Life Insurance by analyticsweek

>> 7 Deadly Sins of Total Customer Experience  by v1shal

Wanna write? Click Here

[ FEATURED COURSE]

Intro to Machine Learning


Machine Learning is a first-class ticket to the most exciting careers in data analysis today. As data sources proliferate along with the computing power to process them, going straight to the data is one of the most stra… more

[ FEATURED READ]

Data Science from Scratch: First Principles with Python


Data science libraries, frameworks, modules, and toolkits are great for doing data science, but they’re also a good way to dive into the discipline without actually understanding data science. In this book, you’ll learn … more

[ TIPS & TRICKS OF THE WEEK]

Fix the Culture, spread awareness to get awareness
Adoption of analytics tools and capabilities has not yet caught up to industry standards. Talent has always been the bottleneck to achieving comparable enterprise adoption. One of the primary reasons is a lack of understanding and knowledge among stakeholders. To facilitate wider adoption, data analytics leaders, users, and community members need to step up and create awareness within the organization. An aware organization goes a long way in securing quick buy-ins and better funding, which ultimately leads to faster adoption. So be the voice that you want to hear from leadership.


[ VIDEO OF THE WEEK]

@Schmarzo @DellEMC on Ingredients of healthy #DataScience practice #FutureOfData #Podcast


Subscribe to  Youtube

[ QUOTE OF THE WEEK]

It’s easy to lie with statistics. It’s hard to tell the truth without statistics. – Andrejs Dunkels

[ PODCAST OF THE WEEK]

Understanding Data Analytics in Information Security with @JayJarome, @BitSight


Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

According to estimates, the volume of business data worldwide, across all companies, doubles every 1.2 years.

Sourced from: Analytics.CLUB #WEB Newsletter

How big data is driving smarter cyber security tools


As big data changes and develops, it’s being used to create better, smarter cyber security tools. There is real value to using the big data approach to cyber security – especially when it can be used to identify dangerous malware and more persistent threats to the IT security of big companies that handle a lot of data. The number of data breaches in the news seems to grow all the time, and big data may play a big role in preventing much of that.

Data Storage

One of the ways in which big data can help with cyber security is through the storage of data. Because so much data is collected and stored easily, analytic techniques can be used to find and destroy malware. Smaller segments of data can be analyzed, of course, and were analyzed before big data got started in the cyber security area, but the more data that can be looked at all together, the easier it is to ensure that appropriate steps are taken to neutralize any threats. More data gets screened, and it gets analyzed faster, making big data a surprisingly good choice in the cyber security arena.

Malware Behaviors

In the past, malware was usually identified by signatures. Now that big data is involved, that’s not realistic: the signature identification concept doesn’t scale, so new ways of handling cyber security were needed as soon as big data appeared on the scene. Instead of signatures, the big data approach looks at behaviors. How malware or any other type of virus behaves is a very important consideration, and something to focus on when it comes to keeping data safe.

When something is flagged as having a unique or different behavior, the associated data can be isolated so it can be determined whether it is safe. Piggybacking malware onto seemingly innocuous programs and data is common, because it lets attackers slip things through before the problem is noticed. When behavior is properly tracked, though, the rate at which these viruses get through is greatly reduced. There are no guarantees, because malware is always changing and new variants are constantly being developed, but the protection offered by big data is significant.
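At its simplest, behavior-based flagging means building a baseline from known-good activity and flagging anything that deviates too far from it. The feature names and numbers in this sketch are invented:

```python
# Sketch: behavior-based flagging. Each sample is a vector of behavior
# features (e.g., files touched per minute, outbound connections); anything
# far from the baseline mean gets flagged. All numbers are invented.
import statistics

def build_baseline(samples):
    """Per-feature (mean, stdev) from known-good behavior samples."""
    cols = list(zip(*samples))
    return [(statistics.mean(c), statistics.stdev(c)) for c in cols]

def is_anomalous(baseline, sample, threshold=3.0):
    """Flag if any feature lies more than `threshold` stdevs from its mean."""
    for (mu, sigma), value in zip(baseline, sample):
        if sigma and abs(value - mu) / sigma > threshold:
            return True
    return False

good = [[10, 2], [12, 3], [11, 2], [9, 3], [10, 2]]
baseline = build_baseline(good)
```

Production systems use far richer models than a z-score, but the shape is the same: learn normal behavior from lots of data, then isolate the outliers for inspection.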

Computing Power

The computing power offered by big data platforms is possibly the most significant reason they are so valuable when it comes to detecting and stopping malware. Fast, powerful computers can process data and information much faster than slower ones that cannot harness that level of power. Because of that, more sophisticated malware-detection techniques become possible when big data is used. The models that can be built for the identification of malware are significant, and big data is the place to build them.

With the power available, it is becoming easier than ever before to find problems before they get started, so malware can be stopped before it advances through a computer system or set of data. That protects the information contained there, and also the system itself from attack and infection. Those who produce malware continually try to change the game so they won’t be detected, but as computer power advances the chances of malware avoiding detection continue to shrink.

The original article appeared on IT Learning Center.

Source: How big data is driving smarter cyber security tools

Sep 05, 19: #AnalyticsClub #Newsletter (Events, Tips, News & more..)


[  COVER OF THE WEEK ]

Fake data

[ AnalyticsWeek BYTES]

>> Big Data Insights in Healthcare, Part I. Great Ideas Transcend Time by froliol

>> Top 10 ways in which Google Analytics help online businesses by thomassujain

>> Office Depot Stitches Together the Customer Journey Across Multiple Touchpoints by analyticsweek

Wanna write? Click Here

[ FEATURED COURSE]

Tackle Real Data Challenges


Learn scalable data management, evaluate big data technologies, and design effective visualizations…. more

[ FEATURED READ]

The Future of the Professions: How Technology Will Transform the Work of Human Experts


This book predicts the decline of today’s professions and describes the people and systems that will replace them. In an Internet society, according to Richard Susskind and Daniel Susskind, we will neither need nor want … more

[ TIPS & TRICKS OF THE WEEK]

Data Analytics Success Starts with Empowerment
Being data driven is not as much a tech challenge as it is an adoption challenge, and adoption has its roots in the cultural DNA of any organization. Great data-driven organizations work the data-driven culture into their corporate DNA. A culture of connection, interaction, sharing, and collaboration is what it takes to be data driven. It's about being empowered more than it is about being educated.

[ DATA SCIENCE Q&A]

Q: Explain what a local optimum is and why it is important in a specific context, such as K-means clustering. What are specific ways of determining if you have a local optimum problem? What can be done to avoid local optima?

A: * A solution that is optimal within a neighboring set of candidate solutions
* In contrast with the global optimum: the optimal solution among all candidates

* K-means clustering context:
It can be proven that the objective cost function always decreases until a local optimum is reached.
Results depend on the initial random cluster assignment.

* Determining if you have a local optimum problem:
A tendency toward premature convergence
Different initializations induce different optima

* Avoiding local optima in a K-means context: repeat K-means with different random initializations and take the solution that has the lowest cost
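The restart strategy above can be sketched in plain Python: run K-means several times from different random initializations and keep the run with the lowest cost. This is a toy one-dimensional implementation for illustration, not a production clusterer; the data points are invented for the example.

```python
import random

def kmeans(points, k, iters=50, seed=None):
    """Plain K-means on 1-D data; the result depends on the random initial centroids."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins the cluster of its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: (p - centroids[i]) ** 2)
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its cluster
        # (an empty cluster keeps its old centroid).
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    cost = sum(min((p - c) ** 2 for c in centroids) for p in points)
    return centroids, cost

points = [1.0, 1.1, 1.2, 9.0, 9.1, 9.2]

# Repeat K-means with different initializations and keep the lowest-cost run.
centroids, cost = min((kmeans(points, k=2, seed=s) for s in range(10)),
                      key=lambda result: result[1])
print(sorted(centroids), cost)
```

Libraries such as scikit-learn build this in (the `n_init` parameter of `KMeans` runs multiple initializations and keeps the best by inertia), so in practice you rarely write the restart loop yourself.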

Source

[ VIDEO OF THE WEEK]

Understanding How Fitness Tracker Works via @SciThinkers #STEM #STEAM


Subscribe on YouTube

[ QUOTE OF THE WEEK]

Numbers have an important story to tell. They rely on you to give them a voice. – Stephen Few

[ PODCAST OF THE WEEK]

@JustinBorgman on Running a data science startup, one decision at a time #Futureofdata #Podcast


Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

Market research firm IDC has released a new forecast that shows the big data market is expected to grow from $3.2 billion in 2010 to $16.9 billion in 2015.

Sourced from: Analytics.CLUB #WEB Newsletter

20 Best Practices for Customer Feedback Programs: Strategy and Governance

Below is the next installment of the 20 Best Practices for Customer Feedback Programs. Today’s post covers best practices in Strategy and Governance.

Strategy/Governance Best Practices

Strategy
Strategy reflects a company's overarching, long-term plan for attaining a specific goal. For customer-centric companies, the strategy is directed at improving the customer experience.

A successful customer feedback program is dependent on the support of top management. Any company initiative (including a customer-centric initiative) without the full support of senior executives will likely fail.

The company culture is directly impacted by senior executives. Because loyalty leaders understand that the formal company strategy and accompanying mission statement set the general culture of the company, they embed the importance of the customer into their mission statements. These customer-centric mission statements instill a set of company values and implicit performance standards about addressing customers’ needs. The customer-centric standards shared among the employees act as guidelines with respect to the behaviors that are expected of the employees.

Governance

Figure 2. Customer Feedback Program Governance Components

While strategy is necessary to build a customer-centric culture, companies need to create formal policy around the customer feedback program that supports the strategy. The governance surrounding the customer feedback program helps foster and maintain a customer-centric culture by operationalizing the strategy (See Figure 2).

Three important areas of governance are:

  1. Guidelines and Rules. These guidelines and rules reflect the set of processes, customs, and policies affecting the way the program is directed, administered, or controlled. These policies formalize processes around the customer feedback program and need to be directed at all of the company's constituents, including board members, senior executives, middle managers, and front-line employees. In a customer-centric company, the work-related behaviors of each constituency are aimed at satisfying customers' needs. As such, customer-centric metrics are used to set and monitor company goals, manage employee behavior, and incentivize employees.
  2. Roles and Responsibilities. Roles and responsibilities need to be defined and clearly communicated across the diverse constituencies (e.g., board, executives, managers, individual contributors). This definition needs to include how data are used and by whom. Specifically, program guidelines describe how feedback data from the program are used in different business decision-making processes (resource allocation, employee incentive compensation, account management), each requiring specific employee groups to have access to different types of analytic reports of the customer feedback data.
  3. Change Requests. The process for making changes to the customer feedback program needs to be defined.

The quality of the policies around the use of the customer feedback data will have an impact on the success of the program. Vague policies regarding how the customer feedback program is executed, including analytical methods and goals, dissemination of results, and data usage of the customer feedback data, will ultimately lead to less than optimal effectiveness of the program.

Corporate strategy and governance of the customer feedback program are exhibited in a variety of ways by loyalty leaders, from resource allocation in support of customer initiatives to the use of public forums to communicate the company's vision and mission to its constituents. Executive support and use of customer feedback data, along with company-wide communication of the customer feedback program's goals and results, help embed the customer-centric culture into the company milieu. Loyalty-leading companies' use of customer feedback in setting strategic goals keeps the company customer-focused from the top. Additionally, their use of customer feedback in executive dashboards and in executive compensation ensures the executive team's decisions will be guided by customer-centric issues. A list of best practices in Strategy and Governance is located in Table 2.

Table 2. Best Practices in Strategy/Governance
Best Practices, and the specifics:

1. Incorporate a customer focus in the vision/mission statement: Support the company mission by presenting customer-related information (e.g., customer satisfaction/loyalty goals) in the employee handbook. Use customer feedback metrics to set and monitor company goals.

2. Identify an executive as the champion of the customer feedback program: A senior-level executive "owns" the customer feedback program and reports customer feedback results at executive meetings. Senior executives evangelize the customer feedback program in their communication with employees and customers. Senior executives receive training on the customer feedback program.

3. Incorporate customer feedback as part of the decision-making process: Include customer metrics in the company's balanced scorecard along with other, traditional scorecard metrics. This practice ensures executives and employees understand the importance of these metrics and are aware of current levels of customer satisfaction/loyalty. Present customer feedback results in company meetings and official documents.

4. Use customer feedback metrics in incentive compensation for executives and front-line employees: Use key performance indicators and customer loyalty metrics to measure progress and set performance goals. Ensure these measures can be impacted by employee behavior. Where possible, use objective business metrics that are linked to customer satisfaction as key performance indicators on which to build employee incentive programs (see Applied Research).

5. Build accountability for customer satisfaction/loyalty goals into the company: Incorporate customer feedback metrics into key performance measures for all employees. Include customer-centric goals in the company's performance management system and processes. Employees set customer satisfaction goals as part of their performance objectives.
Copyright © 2011 Business Over Broadway

Take the Customer Feedback Programs Best Practices Survey

You can take the best practices survey to receive free feedback on your company's customer feedback program. This self-assessment survey measures the extent to which your company adopts best practices throughout its program. Go here to take the free survey: http://businessoverbroadway.com/resources/self-assessment-survey.

 

Source: 20 Best Practices for Customer Feedback Programs: Strategy and Governance