Rules-Based Versus Dynamic Algorithms: The Fate of Artificial Intelligence

With so many recent headlines pertaining to advanced machine learning and deep learning, it may be easy to forget that at one point in the lengthy history of Artificial Intelligence, the term largely denoted relatively simple, rules-based algorithms.

According to TopQuadrant CEO Irene Polikoff, “It’s interesting, AI used to be synonymous with rules and expert systems. These days, it seems to be, in people’s minds, synonymous with machine learning.”

In contemporary enterprise settings, machine learning is applauded for its dynamic mutability, while rules-based approaches are derided for a static rigidity that, so the argument goes, is not emblematic of truly intelligent systems. If humans are devising the rules, is it truly AI?

Nonetheless, certain aspects of rules-based, ‘algorithmic AI’ persist, partly because of their applicability to particular use cases and partly because of machine learning’s shortcomings. The most notable of those shortcomings is the ‘black box’ phenomenon (highly prevalent in facets of unsupervised learning and deep learning), in which the results of machine learning models are difficult to explain.

A closer examination of the utility and drawbacks of each approach indicates that in many cases pertaining to automation, the two balance each other for explainable, trustworthy intelligent systems and solutions.

Machine Learning Algorithms
Machine learning algorithms are widely acclaimed for their automation capabilities, which have produced palpable business value for data management and data engineering mainstays for some time now. They deliver similar value for specific facets of data governance. When ensuring that captured data conforms to business glossary definitions for consistent, unambiguous reuse throughout the enterprise, it’s useful to automate the tagging of data in accordance with those normative terms and business concepts, and machine learning is an integral means of automating this process. For example, when using what Polikoff referenced as “controlled vocabularies” to tag documents stemming from content management systems for regulatory compliance or other governance needs, “machine learning is used to find the most right placed term that applies to documents,” Polikoff revealed.
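A minimal sketch of how such tagging might work, assuming scikit-learn is available; the glossary terms, documents, and threshold below are illustrative assumptions, not TopQuadrant's implementation. Each document is scored against the controlled vocabulary by TF-IDF cosine similarity, and terms above a cutoff become suggested tags.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical controlled vocabulary: glossary term -> business definition
vocabulary = {
    "customer churn": "loss of customers who stop doing business with the company",
    "data retention": "policies governing how long records must be stored and archived",
    "personally identifiable information": "data that can identify an individual person",
}

documents = [
    "Quarterly report on customer churn among subscribers who cancelled accounts",
    "Updated schedule for archiving and deleting stored records",
]

# Vectorize term labels plus definitions together with the documents
# so they share one feature space.
vectorizer = TfidfVectorizer(stop_words="english")
corpus = [f"{term} {definition}" for term, definition in vocabulary.items()] + documents
matrix = vectorizer.fit_transform(corpus)
term_vectors, doc_vectors = matrix[: len(vocabulary)], matrix[len(vocabulary):]

THRESHOLD = 0.1  # illustrative cutoff; in practice tuned against reviewed examples
terms = list(vocabulary)
for doc, row in zip(documents, cosine_similarity(doc_vectors, term_vectors)):
    suggested = [terms[i] for i, score in enumerate(row) if score >= THRESHOLD]
    print(doc, "->", suggested)
```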

Human in the Loop and Explainability
There are two critical considerations for this (and other) automation use cases of supervised machine learning. The first is that, although certain machine learning algorithms will eventually be able to incorporate the results of previous runs to increase the accuracy of future ones, the learning is far from autonomous. “There is some training involved; even after you train there’s users-in-the-loop to view the tags and accept them or reject them,” Polikoff mentioned. “That could be an ongoing process or you could decide at some point to let it run by itself.” Those who choose the latter option may encounter the black box phenomenon, in which there’s limited explainability for the results of machine learning algorithms and the models that produced them. “With machine learning, what people are starting to talk about more and more today is how much can we rely on something that’s very black box?” Polikoff said. “Who is at fault if it goes wrong and there are some conclusions where it’s not correct and users don’t understand how this black box operates?”
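The human-in-the-loop workflow described above can be sketched as a simple review queue: tags the model is confident about are applied automatically, while everything else is routed to a reviewer whose accept/reject decisions can later feed back into training. The threshold and data structures here are illustrative assumptions, not any specific product's design.

```python
from dataclasses import dataclass, field

AUTO_ACCEPT = 0.90  # illustrative confidence cutoff


@dataclass
class TagSuggestion:
    document_id: str
    term: str
    confidence: float


@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)
    accepted: list = field(default_factory=list)
    rejected: list = field(default_factory=list)

    def triage(self, suggestion: TagSuggestion) -> None:
        # High-confidence tags are applied automatically; the rest wait for a human.
        if suggestion.confidence >= AUTO_ACCEPT:
            self.accepted.append(suggestion)
        else:
            self.pending.append(suggestion)

    def review(self, suggestion: TagSuggestion, accept: bool) -> None:
        # Reviewer decisions become labeled examples for future retraining.
        self.pending.remove(suggestion)
        (self.accepted if accept else self.rejected).append(suggestion)


queue = ReviewQueue()
queue.triage(TagSuggestion("doc-1", "data retention", 0.95))   # auto-accepted
queue.triage(TagSuggestion("doc-2", "customer churn", 0.62))   # sent to a reviewer
queue.review(queue.pending[0], accept=True)
```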

Algorithmic AI
Conversely, there’s never a lack of explainability associated with rules-based AI, in which humans devise the rules upon which algorithms are based. Transparent understanding of the results of such algorithms is their strength; their immutability is often considered their weakness when compared with dynamic machine learning algorithms. However, when attempting to circumscribe the black box effect, “to some extent rules address them,” Polikoff maintained. “The rule is clearly defined; you can always examine it; you can seek it. Rules are very appropriate. They’re more powerful together [with machine learning].” The efficacy of pairing rules with machine learning is duly demonstrated in the data governance tagging use case, which is substantially enhanced by deploying a standards-based enterprise knowledge graph to represent the documents and their tags in conjunction with vocabularies. According to Polikoff, “you can have from one perspective a controlled vocabulary with some rules in it, and from another perspective you have machine learning. You can combine both.”

In this example, machine learning would be deployed to “find [the] most likely tags in the document, look at the rules about the concepts those tags represent, and add more knowledge based on that,” Polikoff said. Implicit in this process are the business rules for the terms upon which the tags are created, which help define them. Equally valuable is the knowledge graph environment, which can link the knowledge gleaned from the tagging to other data, governance concepts, and policies. The aforementioned rules, in the form of vocabularies or a business glossary, augment machine learning’s automation for more accurate results.
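A rough sketch of how rule-based inference might enrich ML-produced tags; the toy rule set below stands in for a real controlled vocabulary or knowledge graph, so every mapping is an assumption for illustration. Each rule expands a tagged concept into broader governance concepts, and every addition is traceable to the rule that produced it.

```python
# ML output: tags suggested per document (e.g., from a step like the tagging sketch above)
ml_tags = {
    "doc-1": {"personally identifiable information"},
    "doc-2": {"data retention"},
}

# Human-authored rules: if a document carries a concept, infer the related concepts.
# These mappings are illustrative; in practice they would live in the vocabulary itself.
rules = {
    "personally identifiable information": {"privacy policy applies", "GDPR scope"},
    "data retention": {"records management policy applies"},
}


def apply_rules(tags: set) -> set:
    """Expand ML tags with rule-derived concepts; each addition is explainable."""
    inferred = set(tags)
    for tag in tags:
        inferred |= rules.get(tag, set())
    return inferred


for doc_id, tags in ml_tags.items():
    print(doc_id, "->", sorted(apply_rules(tags)))
```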

Synthesized Advantages
The mutable nature of machine learning algorithms doesn’t mean the end of rules or the value extracted from rules-based, algorithmic AI. Both can work simultaneously to enrich each other’s performance, particularly for automation use cases. The addition of rules can increase the explainability for machine learning, resulting in greater understanding of the results of predictive models. When leveraged in linked data settings, there’s the potential for “a combination of machine learning and inferencing working together and ultimately, since both of them are using a knowledge graph for the presentation of the knowledge and the presentation of the data, that makes for clarity,” Polikoff remarked. “It’s quite a smooth and integrated environment where you can combine those processes.”

Originally Posted at: Rules-Based Versus Dynamic Algorithms: The Fate of Artificial Intelligence by jelaniharper

Dec 27, 18: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Correlation-Causation  Source

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ AnalyticsWeek BYTES]

>> Qlik Optimization Techniques by analyticsweek

>> Making sense of unstructured data by turning strings into things by v1shal

>> 2017 Trends in the Internet of Things by jelaniharper

Wanna write? Click Here

[ NEWS BYTES]

>> Why hybrid cloud is here to stay – TechCentral Under Hybrid Cloud

>> Data breaches persuade companies to raise cyber security budgets – Financial Times Under cyber security

>> Big Data Analytics in Banking Market 2025: Global Demand, Key Players, Overview, Supply and Consumption Analysis – Honest Facts Under Big Data Analytics

More NEWS ? Click Here

[ FEATURED COURSE]

CPSC 540 Machine Learning


Machine learning (ML) is one of the fastest growing areas of science. It is largely responsible for the rise of giant data companies such as Google, and it has been central to the development of lucrative products, such … more

[ FEATURED READ]

Research Design: Qualitative, Quantitative, and Mixed Methods Approaches, 4th Edition


The eagerly anticipated Fourth Edition of the title that pioneered the comparison of qualitative, quantitative, and mixed methods research design is here! For all three approaches, Creswell includes a preliminary conside… more

[ TIPS & TRICKS OF THE WEEK]

Grow at the speed of collaboration
Research by Cornerstone On Demand pointed out the need for better collaboration within the workforce, and the data analytics domain is no different. A rapidly changing and growing industry like data analytics is very difficult for an isolated workforce to keep up with. A good collaborative work environment facilitates a better flow of ideas, improved team dynamics, rapid learning, and an increased ability to cut through the noise. So, embrace collaborative team dynamics.

[ DATA SCIENCE Q&A]

Q:What is the Central Limit Theorem? Explain it. Why is it important?
A: The CLT states that the arithmetic mean of a sufficiently large number of iid random variables will be approximately normally distributed regardless of the underlying distribution, i.e., the sampling distribution of the sample mean is approximately normal.
– Used in hypothesis testing
– Used for confidence intervals
– Random variables must be iid: independent and identically distributed
– Finite variance
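A quick simulation sketch (assuming numpy is available) illustrating the answer above: sample means of a heavily skewed exponential distribution still look approximately normal once the sample size is large.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 50, 10_000

# Underlying distribution is exponential (skewed, clearly non-normal)
sample_means = rng.exponential(scale=1.0, size=(trials, n)).mean(axis=1)

# By the CLT the means should center on 1 with standard deviation ~ 1/sqrt(n)
print(sample_means.mean())   # close to 1.0
print(sample_means.std())    # close to 1/np.sqrt(n), i.e. about 0.14
```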

Source

[ VIDEO OF THE WEEK]

#HumansOfSTEAM feat. Hussain Gadwal, Mechanical Designer via @SciThinkers #STEM #STEAM


Subscribe to  Youtube

[ QUOTE OF THE WEEK]

It’s easy to lie with statistics. It’s hard to tell the truth without statistics. – Andrejs Dunkels

[ PODCAST OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData with Jon Gibs(@jonathangibs) @L2_Digital


Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

Retailers who leverage the full power of big data could increase their operating margins by as much as 60%.

Sourced from: Analytics.CLUB #WEB Newsletter

Analytics for government: where big data and Big Brother collide


There is rightfully a lot of hype around e-government. The application of analytics in the private sector has had a significant impact on our lives.

And, at first blush, it seems like a great idea for our governments to be more like Google or Amazon, using data and analytics to deliver improved services more cost effectively, when and where people need them.

However, while many of the benefits found in the private sector can translate directly to application in the public sector, there are hurdles our governments will have to clear that the Googles of the world simply dodge.

A lot is already happening in e-government. The Glasgow Smart City initiative applies a combination of advanced technologies to benefit the people of Glasgow. Traffic management, more efficient policing, optimising green technologies, improving public transportation and many other initiatives are all driven by the application of technology and data analytics.


We also see examples such as Torbay Council and the London Borough of Islington using analytics to drive efficiency in the delivery of services and to increase transparency.

Torbay Council makes available expenditure data on its public website to increase transparency, while using analytics internally to help budget holders run their services more efficiently.

The London Borough of Islington was able to save £800,000 annually by combining CCTV data with operational data to create dashboards that helped them to deploy parking enforcement personnel more effectively, as well as reduce ticket processing time from six months to four days.

On both grand and more pedestrian scales, analytics is improving public services.

The benefits of applying analytics in government are real, but the public sector should be cautious about simply taking the experience of the private sector and trying to apply it directly.

The public sector will need to carefully rethink the often adversarial nature analytics can take in the private sector. Amazon’s recommendation algorithms may be “cool”, but the algorithms are not your friend. They are there to get you to spend more.


Transparency and privacy are the two key concerns in which the public sector will not be able to rely on the private sector for innovation.

Data ownership, as an example, is an area in which companies such as Google and Amazon are not good role models. Amazon owns my purchase history, but should the Government “own” my health data?

Amazon can use my purchase data for any purpose it sees fit without telling me who is accessing it or why. Should government CCTV data be treated the same?

This is not a good model for e-government to follow. In fact, the challenge was highlighted earlier this year by the Government’s surveillance camera commissioner Tony Porter. Algorithms are able to predict behaviour and automatically track individuals.

It is critical that the public understands how data is being used and participates in managing that process. This is where the public sector will need to drive new innovations, educating citizens and empowering them to participate in controlling their data and its usage.

In other words, delivering data-driven government while keeping Big Brother at bay.

Note: This article originally appeared in Telegraph. Click for link here.

Originally Posted at: Analytics for government: where big data and Big Brother collide

Is Big Data The Most Hyped Technology Ever?

I read an article today on the topic of Big Data. In the article, the author claims that the term Big Data is the most hyped technology ever, even compared to such things as cloud computing and Y2K. I thought this was a bold claim and one that is testable. Using Google Trends, I looked at the popularity of three IT terms to understand the relative hype of each (as measured by number of searches on the topic): Web 2.0, cloud computing and big data. The chart from Google Trends appears below.

We can learn a couple of things from this graph. First, interest in Big Data has continued to grow since its first measurable rise in early 2011. Still, the number of searches for the respective terms clearly shows that Web 2.0 and cloud computing received more searches than Big Data. While we don’t know whether interest in Big Data will continue to grow, Google Trends in fact predicts a very slow growth rate for Big Data through the end of 2015.

Second, the growth rates of Web 2.0 and cloud computing were faster than that of Big Data, showing that public interest grew more quickly for those terms. Interest in Web 2.0 reached its maximum a little over two years after its initial ascent, and interest in cloud computing peaked in about 3.5 years, whereas interest in Big Data has been growing steadily for over 3.7 years.

One other point of interest: for these three technology terms, the growth of each later term started at the peak of the previous one. As one technology becomes commonplace, another takes its place.
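The comparison described above can be reproduced, at least approximately, with the unofficial pytrends package; the article used the Google Trends website directly, so the keywords and timeframe below are assumptions matching its description, and Google may rescale results relative to the original chart.

```python
from pytrends.request import TrendReq  # unofficial Google Trends client

pytrends = TrendReq(hl="en-US", tz=0)
pytrends.build_payload(
    kw_list=["Web 2.0", "cloud computing", "big data"],
    timeframe="2004-01-01 2014-12-31",
)
interest = pytrends.interest_over_time()  # pandas DataFrame indexed by date
print(interest[["Web 2.0", "cloud computing", "big data"]].tail())
```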

So, is Big Data the most hyped technology ever? No.

Originally Posted at: Is Big Data The Most Hyped Technology Ever?

Dec 20, 18: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Data Mining  Source

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ AnalyticsWeek BYTES]

>> January 9, 2017 Health and Biotech analytics news roundup by pstein

>> 3 Risks of an Analytics Emergency by analyticsweek

>> The Impact of Big Data in Market Risk Management by chandrakant721

Wanna write? Click Here

[ NEWS BYTES]

>> CSP (CSPI) and Verint Systems (VRNT) Head-To-Head Survey – Fairfield Current Under Social Analytics

>> Global Financial Analytics Market 2017 by complete Analysis by Type (Centralized Controlled VPP, Decentralized … – Industry News Updates (press release) (blog) Under Financial Analytics

>> LinkedIn launches Talent Insights for HR analytics, talent planning – ZDNet Under Talent Analytics

More NEWS ? Click Here

[ FEATURED COURSE]

CS229 – Machine Learning


This course provides a broad introduction to machine learning and statistical pattern recognition. … more

[ FEATURED READ]

On Intelligence


Jeff Hawkins, the man who created the PalmPilot, Treo smart phone, and other handheld devices, has reshaped our relationship to computers. Now he stands ready to revolutionize both neuroscience and computing in one strok… more

[ TIPS & TRICKS OF THE WEEK]

Data Have Meaning
We live in a Big Data world in which everything is quantified. While the emphasis of Big Data has been on distinguishing the three characteristics of data (the infamous three Vs), we need to be cognizant of the fact that data have meaning. That is, the numbers in your data represent something of interest, an outcome that is important to your business. The meaning of those numbers is about the veracity of your data.

[ DATA SCIENCE Q&A]

Q:How do you handle missing data? What imputation techniques do you recommend?
A: * If data are missing at random: deletion introduces no bias, but decreases the power of the analysis by decreasing the effective sample size
* Recommended: Knn imputation, Gaussian mixture imputation
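A minimal sketch of the recommended KNN imputation using scikit-learn; the toy matrix and the choice of k are illustrative.

```python
import numpy as np
from sklearn.impute import KNNImputer

X = np.array([
    [1.0, 2.0, np.nan],
    [3.0, np.nan, 6.0],
    [5.0, 6.0, 9.0],
    [7.0, 8.0, 12.0],
])

# Each missing value is replaced using the k nearest complete rows
imputer = KNNImputer(n_neighbors=2)
print(imputer.fit_transform(X))
```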

Source

[ VIDEO OF THE WEEK]

Using Analytics to build A #BigData #Workforce


Subscribe to  Youtube

[ QUOTE OF THE WEEK]

Data beats emotions. – Sean Rad, founder of Ad.ly

[ PODCAST OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData #Podcast with @Beena_Ammanath, @GE


Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

Poor data across businesses and the government costs the U.S. economy $3.1 trillion dollars a year.

Sourced from: Analytics.CLUB #WEB Newsletter

Dec 13, 18: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Ethics  Source

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ AnalyticsWeek BYTES]

>> Why Entrepreneurship Should Be Compulsory In Schools by v1shal

>> How Google does Rapid Prototyping? Tom Chi’s Perspective  by v1shal

>> Geeks Vs Nerds [Infographics] by v1shal

Wanna write? Click Here

[ NEWS BYTES]

>> Is Your Company Prepared to Handle a Data Security Incident? An Incident Response Plan Is An Essential Element … – Lexology Under Data Security

>> Learn Data Science and Fuel AI Innovations with This Python Bundle – Interesting Engineering Under Data Science

>> Facebook to release first-party cookie option for ads, pull web analytics from Safari – Marketing Land Under Analytics

More NEWS ? Click Here

[ FEATURED COURSE]

A Course in Machine Learning


Machine learning is the study of algorithms that learn from data and experience. It is applied in a vast variety of application areas, from medicine to advertising, from military to pedestrian. Any area in which you need… more

[ FEATURED READ]

The Misbehavior of Markets: A Fractal View of Financial Turbulence


Mathematical superstar and inventor of fractal geometry, Benoit Mandelbrot, has spent the past forty years studying the underlying mathematics of space and natural patterns. What many of his followers don’t realize is th… more

[ TIPS & TRICKS OF THE WEEK]

Winter is coming, warm your Analytics Club
Yes and yes! As we head into winter, what better time to talk about our increasing dependence on data analytics to help with our decision making. Data- and analytics-driven decision making is rapidly working its way into our core corporate DNA, yet we are not building practice grounds to test those models fast enough. Such snug-looking models can hide nails that induce uncharted pain if they go unchecked. This is the right time to start thinking about putting an Analytics Club [Data Analytics CoE] in your workplace to lab out best practices and provide a test environment for those models.

[ DATA SCIENCE Q&A]

Q:Give examples of data that does not have a Gaussian distribution, nor log-normal?
A: * Allocation of wealth among individuals
* Values of oil reserves among oil fields (many small ones, a small number of large ones)
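A short numpy sketch of the kind of heavy-tailed data described above, using a Pareto (power-law) distribution as a stand-in for wealth allocation; the parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
# Heavy-tailed "wealth" values in arbitrary units: many small, a few enormous
wealth = (rng.pareto(a=1.5, size=100_000) + 1) * 1_000

# The mean sits far above the median because a few individuals dominate the total
print(np.median(wealth), wealth.mean(), wealth.max())
```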

Source

[ VIDEO OF THE WEEK]

@AmyGershkoff on building #winning #DataScience #team #FutureOfData #Podcast


Subscribe to  Youtube

[ QUOTE OF THE WEEK]

You can have data without information, but you cannot have information without data. – Daniel Keys Moran

[ PODCAST OF THE WEEK]

Pascal Marmier (@pmarmier) @SwissRe discusses running data driven innovation catalyst


Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

Within five years there will be over 50 billion smart connected devices in the world, all developed to collect, analyze and share data.

Sourced from: Analytics.CLUB #WEB Newsletter

How to Define KPIs for Successful Business Intelligence

Realizing that you can only improve what you measure is a good way to think about KPIs. Often companies want to improve different aspects of their business all at once, but can’t put a finger on what will measure their progress towards overarching company goals. Does it come down to comparing the growth of last year to this year? Or, is it just about the cost of acquiring new customers?

If you’re nervously wondering now, “wait, what is my cost per deal?”, don’t sweat it. Another growing pain of deciding on KPIs is discovering that there is a lot of missing information.

Defining Your KPIs

Choosing the right KPI is crucial to making effective, data-driven decisions. If you choose the right KPI, it will help concentrate the efforts of employees towards a meaningful goal; choose incorrectly, however, and you could waste significant resources chasing vanity metrics.

In order to rally the efforts of your team and achieve your long-term objectives, you have to measure the right things. For example, if the goal is to increase revenue at a SaaS company by 25% over the next two quarters, you couldn’t determine success by focusing on the number of likes your Facebook page got. Instead, we could ask questions like: Are we protecting our ARR by retaining our existing customers? Do we want to look at the outreach efforts of our sales development representatives, and whether that results in increased demos and signups? Should we look at the impact of increased training for the sales team on closed deals?


Similarly, if we want to evaluate the effectiveness of various marketing channels, we need to determine more than an end goal of increasing sales or brand awareness; we’ll need a more precise definition of success. This might include ad impressions, click-through rates, conversion numbers, new email list subscribers, page visits, bounce rates, and much more.

Looking at all these factors will allow us to determine which channels are driving the most traffic and revenue. If we dig a bit deeper, there will be even more insights to discover. In addition to discovering which channels produce traffic most likely to translate into a conversion, we can also learn if other factors such as timing make a difference to reach our target audience.
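As a toy illustration of the channel comparison described above, assuming a simple pandas DataFrame of visits, conversions, and spend per channel (the channel names and figures are invented):

```python
import pandas as pd

channels = pd.DataFrame({
    "channel":     ["paid_search", "email", "social", "organic"],
    "visits":      [12_000, 4_500, 9_000, 20_000],
    "conversions": [360, 270, 180, 500],
    "spend":       [18_000, 1_200, 6_000, 0],
})

channels["conversion_rate"] = channels["conversions"] / channels["visits"]
# Cost per conversion; unpaid channels simply show 0 here
channels["cost_per_conversion"] = (channels["spend"] / channels["conversions"]).round(2)

print(channels.sort_values("conversion_rate", ascending=False))
```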

Of course, every industry and business is different. To establish meaningful KPIs, you’ll need to determine what most clearly correlates with your company’s goals. Here are a few examples:

  • Finance – Working capital, Operating cash flow, Return on equity, Quick ratio, Debt to equity ratio, Inventory turnover, Accounts receivable turnover, Gross profit margin
  • Marketing – Customer acquisition cost, Conversion rate of a particular channel, Percentage of leads generated from a particular channel, Customer Churn, Dormant customers, Average spend per customer
  • Healthcare – Inpatient mortality rate, Bed turnover, Readmission rate, Average length of stay, Patient satisfaction, Total operating margin, Average cost per discharge, Cash receipt to bad debt, Claims denial rate
  • Retail – Gross margin (as a percentage of selling price), Inventory turnover, Sell-through percentage, Average sales per transaction, Percentage of total stock not displayed

If your business is committed to data-driven decision making, establishing the right KPIs is crucial. Although the process of building a performance-driven culture is iterative, clearly defining the desired end result will go a long way towards helping you establish effective KPIs that focus the efforts of your team on that goal, whether it’s to move product off shelves faster, create better patient outcomes, or increase your revenue per customer.

The good news is that in the business intelligence world, measuring performance can be especially precise, quick and easy. Yet, the first hurdle every data analyst faces is the initial struggle to choose and agree on company KPIs & KPI tracking. If you are about to embark on a BI project, here’s a useful guide on how to decide what it is that you want to measure:

Step 1: Isolate Pain Points, Identify Core Business Goals

A lot of companies start by trying to quantify their current performance. But again, as a data analyst, the beauty of your job and the power of business intelligence is that you can drill into an endless amount of very detailed metrics. From clicks, site traffic, and conversion rates, to service call satisfaction and renewals, the list goes on. So ask yourself: What makes the company better at what they do?

You can approach this question by focusing on stage growth, where a startup would focus most on metrics that validate business models, whereas an enterprise company would focus on metrics like customer lifetime value analysis. Or, you can examine this question by industry: a services company (consultancies) would focus more on quality of services rendered, whereas a company that develops products would focus on product usage.

Ready to dive in? Start by going from top-down through each department to elicit requirements and isolate the pain points and health factors for every department. Here are some examples of KPI metrics you may want to look at:

Product

  • Product related tickets
  • Customer satisfaction
  • Usage statistics (SaaS products)

Marketing KPIs

  • Brand awareness
  • Conversion rate
  • Site traffic
  • Social shares

R&D

  • Number of bugs
  • Length of development cycle
  • App usage

Step 2: Break It Down to A Few KPIs

Once you choose a few important KPIs, try to break them down even further. Remember, while there’s no magic number, less is almost always more when it comes to KPIs. If you track too many KPIs, as a data analyst you may start to lose your audience and the focus of the common business user. The top 7-10 KPIs is a good number to aim for, and you can get there by breaking down your core business goals into much more specific metrics.

Remember, the point of a KPI is to gain focus and align goals for measurable improvement. Spend more time choosing the KPIs than simply throwing too many into the mix, which will just push the question of focus further down the road (and require more work!).

Step 3: Carefully Assess Your Data


After you have your main 7-10 elements, you can start digging into the data and begin some data modeling. A good question to ask at this point is: how does the business currently make decisions? Counterintuitively, to answer that question you may want to look at where the company is currently not making its decisions based on data, or not collecting the right data.

This is where you get to flex your muscles as a “data hero” or a good analyst! Take every KPI and present it as a business question. Then break the business questions into facts, dimensions, filters, and order (example).

Not every business question contains all of these elements, but there will always be a fact because you have to measure something. You’ll need to answer the following before moving on:

  • What are the data sources?
  • How complex will your data model be?
  • Which tools will you use to prepare, manage, and analyze the data (your BI solution)?

Do this by breaking each KPI into its data components, asking questions like: what do I need to count, what do I need to aggregate, which filters need to be applied? For each of these questions, you have to know which data sources are being used and where the tables are coming from.

Consider that data will often come from multiple, disparate data sources. For example, for information on a marketing or sales pipeline, you’ll probably need Google Analytics/AdWords data combined with your CRM data. As a data analyst, it’s important to recognize that the most powerful KPIs often come from a combination of multiple data sources. Make sure you are using the right tools, such as a BI tool with built-in data connectors, to prepare and join data accurately and easily.
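A small pandas sketch of the kind of join just described, combining hypothetical web analytics traffic data with CRM deal data on a shared campaign key; the column names and figures are invented for illustration.

```python
import pandas as pd

# Hypothetical export from web analytics: sessions per campaign
ga = pd.DataFrame({
    "campaign": ["spring_promo", "webinar", "retargeting"],
    "sessions": [8_000, 2_500, 4_200],
})

# Hypothetical CRM export: closed deals and revenue per campaign
crm = pd.DataFrame({
    "campaign": ["spring_promo", "webinar", "retargeting"],
    "deals":    [40, 25, 12],
    "revenue":  [120_000, 90_000, 30_000],
})

pipeline = ga.merge(crm, on="campaign", how="left")
pipeline["revenue_per_session"] = pipeline["revenue"] / pipeline["sessions"]
print(pipeline)
```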

Step 4: Represent KPIs in an Accurate and Effective Fashion

Congrats! You’ve connected your KPI data to your business. Now you’ll need to find a way to represent the metrics in the most effective way. Check out some of these different BI dashboard examples for some inspiration.

One tip to keep in mind is that the goal of your dashboard is to put everyone on the same page. Still, users will each have their own questions and areas they want to explore, which is why building interactive, highly visual BI dashboards is important. Your BI solution should offer interactive dashboards that allow users to perform basic analytical tasks, such as filtering views, drilling down, and examining underlying data, all with little training.

See an example:


Profit & Loss - Financial Dashboard

Closing

As a data analyst you should always look for other insights you can draw from the data that the business never thought to ask about. People are often entrenched in their own processes, and as an analyst you offer an “outsider’s perspective” of sorts, since you only see the data while others are clouded by their day-to-day business tasks. Don’t be afraid to ask the hard questions. Start with the most basic ones and you’ll be surprised how often big companies don’t know the answers, and you’ll be a data hero just for asking.


Source: How to Define KPIs for Successful Business Intelligence by analyticsweek

Follow the Money: The Demand for Deep Learning

Numbers don’t lie.

According to CB Insights, 100 of the most promising private startups focused on Artificial Intelligence raised $11.7 billion in equity funding across 367 deals during 2017. Several of those companies focus on deep learning technologies, including the most well-funded, ByteDance, which accounts for over a fourth of 2017’s private startup funding with $3.1 billion raised.

In the first half of last year alone, corporate venture capitalists contributed nearly 2 billion dollars of disclosed equity funding in 88 deals to AI startups, which surpassed the total financing for AI startups for all of 2016. The single largest corporate venture capitalist deal in the early part of 2017 was the $600 million Series D funding awarded to NIO, an organization based in China that specializes in autonomous vehicles (among other types of craft), which relies on aspects of deep learning.

According to Forrester, venture capital funding activity in computer vision increased at a CAGR of 137% from 2015 to 2017. Most aspects of advanced pattern recognition, including speech, image, facial recognition and others, hinge on deep learning. A Forbes post noted, “Google, Baidu, Microsoft, Facebook, Salesforce, Amazon, and all other major players are talking about – and investing heavily in – what has become known as ‘deep learning’.” Indeed, both Microsoft and Google have created specific entities to fund companies specializing in AI.

According to Razorthink CEO Gary Oliver, these developments are indicative of a larger trend in which, “If you look at where the investments are going from the venture community, if you look at some of the recent reports that have come out, the vast majority are focused on companies that are doing deep learning.”

Endless Learning
Deep learning is directly responsible for many of the valuable insights organizations can access via AI, since it can rapidly parse through data at scale to discern patterns that are otherwise too difficult to see or take too long to notice. In particular, deep learning actuates the unsupervised prowess of machine learning by detecting data-driven correlations to business objectives for variables on which it wasn’t specifically trained. “That’s what’s kind of remarkable about deep learning,” maintained Tom Wilde, CEO of indico, which recently announced $4 million in new equity seed funding. “That’s why when we see it in action we’re always like whoa, that’s pretty cool that the math can decipher that.” Deep learning’s capacity for unsupervised learning makes it extremely suitable for analyzing semi-structured and unstructured data. Moreover, when it’s leveraged on the enormous datasets required for speech, image, or even video analysis, it provides these benefits at scale at speeds equal to modern business timeframes.
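As a rough illustration of unsupervised deep learning on unlabeled data (not any specific vendor's approach), a small autoencoder can learn a compressed representation of records without ever seeing labels; the synthetic data, architecture, and sizes below are arbitrary assumptions, and tensorflow/keras is assumed to be installed.

```python
import numpy as np
from tensorflow import keras

# Unlabeled data: 1,000 records with 20 numeric features (synthetic for illustration)
X = np.random.default_rng(0).normal(size=(1000, 20)).astype("float32")

# Encoder compresses to 4 dimensions; decoder reconstructs the original 20
autoencoder = keras.Sequential([
    keras.Input(shape=(20,)),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(4, activation="relu", name="bottleneck"),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(20),
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=5, batch_size=64, verbose=0)  # the target is the input itself

# Records reconstructed poorly are candidates for patterns otherwise hard to see
reconstruction_error = ((autoencoder.predict(X, verbose=0) - X) ** 2).mean(axis=1)
print(reconstruction_error[:5])
```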

Hybridization
Although this unsupervised aspect of deep learning is one of its more renowned, it’s important to realize that deep learning is actually an advanced form of classic machine learning. As such, it was spawned from the latter despite the fact that its learning capabilities vastly exceed those of traditional machine learning. Nonetheless, there are still enterprise tasks which are suitable for traditional machine learning, and others which require deep learning. “People are aware now that there’s a difference between machine learning and deep learning, and they’re excited about the use cases deep learning can help,” Razorthink VP of Marketing Barbara Reichert posited. “We understand the value of hybrid models and how to apply both deep learning and machine learning so you get the right model for whatever problem you’re trying to solve.”

Whereas deep learning is ideal for analyzing big data sets with vast amounts of variables, classic machine learning persists in simpler tasks. A good example of this fact is its utility in data management staples such as data discovery, in which it can determine relationships between data and use cases. “Once the data is sent through those [machine learning algorithms] the relationships are predicted,” commented Io-Tahoe Chief Technology Officer Rohit Mahajan. “That’s where we have to fine-tune a patented data set that will actually predict the right relationships with the right confidence.”

Data Science
An examination of the spending on AI companies and their technologies certainly illustrates a prioritization of deep learning’s worth to contemporary organizations. It directly impacts some of the more sophisticated elements of AI including robotics, computer vision, and user interfaces based on natural language and speech. However, it also provides unequivocally tangible business value in its analysis of unstructured data, sizable data sets, and the conflation of the two. Additionally, by applying these assets of deep learning to common data modeling needs, it can automate and accelerate certain facets of data science that had previously proved perplexing to organizations.

“Applications in the AI space are making it such that you don’t need to be a data science expert,” Wilde said. “It’s helpful if you kind of understand it at a high level, and that’s actually improved a lot. But today, you don’t need to be a data scientist to use these technologies.”

Source by jelaniharper

Dec 06, 18: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Correlation-Causation  Source

[ AnalyticsWeek BYTES]

>> The Last Layer of Cyber Security: Business Continuity and Disaster Recovery with Incremental Backups by jelaniharper

>> Boeing creates data analytics group by analyticsweekpick

>> How the Right Loyalty and Operational Metrics Drive Service Excellence – Webinar by bobehayes

Wanna write? Click Here

[ NEWS BYTES]

>> GSA Releases Customer Experience Playbook – Nextgov Under Customer Experience

>> Global Healthcare Financial Analytics Market expected to reach at an Extensive rate through Growth analysis – RBTE Report Under Financial Analytics

>> TTEC Announces Global Launch of Humanify™ Insights Platform – Directors Club Newswire (press release) (blog) Under Sales Analytics

More NEWS ? Click Here

[ FEATURED COURSE]

Introduction to Apache Spark


Learn the fundamentals and architecture of Apache Spark, the leading cluster-computing framework among professionals…. more

[ FEATURED READ]

The Future of the Professions: How Technology Will Transform the Work of Human Experts


This book predicts the decline of today’s professions and describes the people and systems that will replace them. In an Internet society, according to Richard Susskind and Daniel Susskind, we will neither need nor want … more

[ TIPS & TRICKS OF THE WEEK]

Fix the Culture, spread awareness to get awareness
Adoption of analytics tools and capabilities has not yet caught up to industry standards. Talent has always been the bottleneck to achieving comparable enterprise adoption, and one of the primary reasons is a lack of understanding and knowledge among stakeholders. To facilitate wider adoption, data analytics leaders, users, and community members need to step up and create awareness within the organization. An aware organization goes a long way in helping secure quick buy-ins and better funding, which ultimately leads to faster adoption. So be the voice that you want to hear from leadership.

[ DATA SCIENCE Q&A]

Q:Which kernels do you know? How to choose a kernel?
A: * Gaussian kernel
* Linear kernel
* Polynomial kernel
* Laplace kernel
* Esoteric kernels: string kernels, chi-square kernels
* If number of features is large (relative to number of observations): SVM with linear kernel ; e.g. text classification with lots of words, small training example
* If number of features is small, number of observations is intermediate: Gaussian kernel
* If number of features is small, number of observations is small: linear kernel
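A brief scikit-learn sketch of the heuristics above, using synthetic data; the point is only how the kernel argument is switched for wide versus narrow data, not a benchmark.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Wide data (many features, few observations): linear kernel is the usual first choice
X_wide, y_wide = rng.normal(size=(100, 2000)), rng.integers(0, 2, 100)
print(cross_val_score(SVC(kernel="linear"), X_wide, y_wide, cv=3).mean())

# Narrow data (few features, more observations): Gaussian (RBF) kernel
X_narrow, y_narrow = rng.normal(size=(500, 10)), rng.integers(0, 2, 500)
print(cross_val_score(SVC(kernel="rbf", gamma="scale"), X_narrow, y_narrow, cv=3).mean())
```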

Source

[ VIDEO OF THE WEEK]

@AnalyticsWeek Panel Discussion: Finance and Insurance Analytics


Subscribe to  Youtube

[ QUOTE OF THE WEEK]

In God we trust. All others must bring data. – W. Edwards Deming

[ PODCAST OF THE WEEK]

Understanding #FutureOfData in #Health & #Medicine - @thedataguru / @InovaHealth #FutureOfData #Podcast


Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

Poor data can cost businesses 20%–35% of their operating revenue.

Sourced from: Analytics.CLUB #WEB Newsletter