With so many recent headlines about advanced machine learning and deep learning, it is easy to forget that for much of the long history of Artificial Intelligence, the term largely denoted relatively simple, rules-based algorithms.
According to TopQuadrant CEO Irene Polikoff, "It's interesting, AI used to be synonymous with rules and expert systems. These days, it seems to be, in people's minds, synonymous with machine learning."
In contemporary enterprise settings, the latter is applauded for its dynamic mutability while the former is derided for a static rigidity that, so the argument goes, is not emblematic of truly intelligent systems. If humans are devising the rules, is it truly AI?
Nonetheless, certain aspects of rules-based, "algorithmic AI" persist, partly because of their applicability to different use cases and partly because of machine learning's shortcomings. The most notable of those shortcomings is the "black box" phenomenon (highly prevalent in unsupervised learning and deep learning), in which the results of machine learning models are difficult to explain.
A closer examination of the utility and drawbacks of each approach indicates that in many cases pertaining to automation, the two balance each other for explainable, trustworthy intelligent systems and solutions.
Machine Learning Algorithms
Machine learning algorithms are widely acclaimed for their automation capabilities, which have produced palpable business value for data management and data engineering mainstays for some time now. They deliver similar results for specific facets of data governance. When ensuring that captured data conforms to business glossary definitions for consistent, unambiguous reuse throughout the enterprise, it's useful to automate the tagging of data in accordance with those normative terms and business concepts. Machine learning is an integral means of automating this process. For example, when using what Polikoff referenced as "controlled vocabularies" to tag documents stemming from content management systems for regulatory compliance or other governance needs, "machine learning is used to find the most right placed term that applies to documents," Polikoff revealed.
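To make the tagging idea concrete, here is a minimal, hypothetical sketch of matching controlled-vocabulary terms to a document. The vocabulary, scoring scheme, and function names are illustrative assumptions, not TopQuadrant's implementation; a production system would use a trained model rather than simple token overlap.

```python
# Hypothetical sketch: score controlled-vocabulary terms against a document
# by token overlap, then keep the best-scoring tags. A real system would
# use a trained classifier instead of this simple overlap measure.

def tokenize(text):
    """Lowercase and split text into a set of word tokens."""
    return set(text.lower().split())

def suggest_tags(document, vocabulary, threshold=0.5):
    """Return vocabulary terms whose descriptions best match the document.

    vocabulary: mapping of term -> short description of the concept.
    threshold: minimum fraction of description tokens present in the document.
    """
    doc_tokens = tokenize(document)
    suggestions = []
    for term, description in vocabulary.items():
        desc_tokens = tokenize(description)
        overlap = len(desc_tokens & doc_tokens) / len(desc_tokens)
        if overlap >= threshold:
            suggestions.append((term, round(overlap, 2)))
    return sorted(suggestions, key=lambda pair: pair[1], reverse=True)

vocabulary = {
    "Customer Data": "customer account personal information",
    "Financial Report": "quarterly revenue earnings report",
}
doc = "This quarterly report summarizes revenue and earnings for Q3."
print(suggest_tags(doc, vocabulary))  # the best-matching tags, highest score first
```

The key design point is that the candidate tags come only from the controlled vocabulary, so whatever the model proposes is always expressible in the governed business terms.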
Human in the Loop and Explainability
There are two critical considerations for this (and other) automation use cases of supervised machine learning. The first is that, although certain machine learning algorithms will eventually be able to incorporate previous results to increase the accuracy of future ones, the learning is far from autonomous. "There is some training involved; even after you train there's users-in-the-loop to view the tags and accept them or reject them," Polikoff mentioned. "That could be an ongoing process or you could decide at some point to let it run by itself." Those who choose the latter option may encounter the black box phenomenon, in which there's limited explainability for the results of machine learning algorithms and the models that produced them. "With machine learning, what people are starting to talk about more and more today is how much can we rely on something that's very black box?" Polikoff said. "Who is at fault if it goes wrong and there are some conclusions where it's not correct and users don't understand how this black box operates?"
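The users-in-the-loop step Polikoff describes can be sketched as a simple review gate. This is an illustrative assumption about how such a workflow might be structured, not a description of any particular product; the function names and data shapes are hypothetical.

```python
# Hypothetical sketch of a human-in-the-loop review step: model-suggested
# tags are held for approval, and each decision is recorded so accepted
# and rejected pairs could later feed back into training.

def review_tags(suggestions, decide):
    """Split suggested tags into accepted and rejected lists.

    suggestions: list of (document_id, tag) pairs proposed by the model.
    decide: callable standing in for the human reviewer; returns True to accept.
    """
    accepted, rejected = [], []
    for doc_id, tag in suggestions:
        (accepted if decide(doc_id, tag) else rejected).append((doc_id, tag))
    return accepted, rejected

suggestions = [("doc-1", "Financial Report"), ("doc-1", "Customer Data")]
# A reviewer who accepts only the financial tag:
accepted, rejected = review_tags(
    suggestions, lambda doc_id, tag: tag == "Financial Report"
)
print(accepted)  # confirmed tags
print(rejected)  # rejected suggestions, useful as negative training examples
```

Deciding to "let it run by itself," in these terms, means replacing the human `decide` callable with an automatic acceptance policy, which is exactly where the black-box concern arises.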
Conversely, there's never a lack of explainability associated with rules-based AI, in which humans devise the rules upon which algorithms are based. Transparent understanding of the results of such algorithms is their strength; their immutability is often considered their weakness when compared with dynamic machine learning algorithms. However, when attempting to circumscribe the black box effect, "to some extent rules address them," Polikoff maintained. "The rule is clearly defined; you can always examine it; you can seek it. Rules are very appropriate. They're more powerful together [with machine learning]." The efficacy of the tandem of rules and machine learning is duly demonstrated in the data governance tagging use case, which is substantially enhanced by deploying a standards-based enterprise knowledge graph to represent the documents and their tags in conjunction with vocabularies. According to Polikoff, "you can have from one perspective a controlled vocabulary with some rules in it, and from another perspective you have machine learning. You can combine both."
In this example machine learning would be deployed to "find [the] most likely tags in the document, look at the rules about the concepts those tags represent, and add more knowledge based on that," Polikoff said. Implicit in this process are the business rules for the terms upon which the tags are created, which help define them. Equally valuable is the knowledge graph environment, which can link the knowledge gleaned from the tagging to other data, governance concepts, and policies. The aforementioned rules, in the form of vocabularies or a business glossary, augment machine learning's automation for more accurate results.
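One way this "add more knowledge based on that" step could work is through a concept hierarchy in the vocabulary, in the spirit of SKOS-style broader/narrower relations. The sketch below is a hypothetical illustration with made-up terms, not TopQuadrant's rule engine: rules over the vocabulary infer additional tags from the ones the model proposed.

```python
# Hypothetical sketch: enrich ML-suggested tags with vocabulary rules.
# A simple "broader concept" hierarchy adds inferred tags on top of the
# tags the model proposed. All terms and relations are illustrative.

BROADER = {
    "Financial Report": "Corporate Records",
    "Customer Data": "Personal Information",
    "Personal Information": "Governed Data",
}

def enrich_tags(tags):
    """Add every broader concept reachable from the suggested tags."""
    enriched = set(tags)
    frontier = list(tags)
    while frontier:
        term = frontier.pop()
        broader = BROADER.get(term)
        if broader and broader not in enriched:
            enriched.add(broader)
            frontier.append(broader)
    return enriched

print(sorted(enrich_tags({"Customer Data"})))
```

Because every inferred tag traces back to an explicit rule in the hierarchy, the enrichment is fully explainable, which is precisely the property the rules side contributes to the combination.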
The mutable nature of machine learning algorithms doesn't mean the end of rules or the value extracted from rules-based, algorithmic AI. Both can work simultaneously to enrich each other's performance, particularly for automation use cases. The addition of rules can increase the explainability of machine learning, resulting in greater understanding of the results of predictive models. When leveraged in linked data settings, there's the potential for "a combination of machine learning and inferencing working together and ultimately, since both of them are using a knowledge graph for the presentation of the knowledge and the presentation of the data, that makes for clarity," Polikoff remarked. "It's quite a smooth and integrated environment where you can combine those processes."