Should you store business rules in your Enterprise Knowledge Graph (EKG)? This is a non-trivial question, and how you answer it might change your enterprise knowledge graph strategy.
A few weeks ago, I wrote an article on the incredible new Intel PIUMA chip architecture and how it will change the face of computing by offering a 1,000x improvement in knowledge graph traversal performance. These chips could be manufactured at low cost and integrated into many devices, and they portend many changes to the computing industry and software design. This is not just the end of the relational database era. It could also signal sweeping changes in other large segments of the software industry: enterprise rules engines, workflow systems, validation engines, and many other areas related to the movement of data and the management of data quality.
This article will review what enterprise rules engines are, survey a few taxonomies of rules, and then speculate on the impact that graph hardware will have on these systems. This analysis will help us determine which of our enterprise systems can benefit from storing and executing rules in our enterprise knowledge graphs.
What is an Enterprise Rule?
Every time we write a conditional expression in our code in the form of an IF/THEN/ELSE statement, we are creating a rule. But the scope of our rule may vary from just the context of a single web page all the way up to having an impact on all our customers and our entire organization. For the purpose of our discussion here, we will define an enterprise rule as any rule that has a scope outside a single project. Enterprise rules may have some of the following characteristics:
- Enterprise rules have a scope that can impact two or more projects.
- Enterprise rules should be stored in a format that can be reused in multiple projects in multiple rules engines. One example of this standard format is the Decision Model and Notation (DMN) standard. Any DMN rule can be converted into a decision tree that can be represented in a graph, where each conditional is a vertex and the TRUE and FALSE branches are edges (see the sketch after this list).
- Enterprise rules should be stored in a place that can be searched and reused. Remember, we can’t reuse what we can’t find.
- Enterprise rules should have documentation that has been reviewed and approved by two or more people. This often overlaps with data stewardship functions.
- Enterprise rules should be testable by anyone, so they often come with test cases.
- Enterprise rules may have a name or label that has a registered meaning outside of a small project. These names help everyone discuss these rules with precision.
- Enterprise rules should reference concepts that have shared meaning outside of an individual project. These concept references can be stored as links to a centralized database of concepts that have a low probability of meaning different things in different contexts and whose meaning will not change over time (semantic drift).
- Some enterprise rules are critical enough that you want to know where they are being used (in what applications), how often they are being executed, and what the impact of a rule change might be for your enterprise. The relationship between rules and the places that use them forms a dependency graph.
- The evidence of rule execution is often buried deep within log files. Organizations with good logging standards make it easier to find this data, but it still takes time and resources.
- Enterprise rules might be connected to other enterprise rules in complex structures called ontologies. Rule dependency graphs must include this structure.
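To make the decision-tree-as-graph idea concrete, here is a minimal Python sketch of the DMN-style representation mentioned above: each conditional is a vertex, and the TRUE and FALSE branches are edges that lead either to another vertex or to a leaf decision. The loan-screening rule and its field names are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, Union

@dataclass
class RuleNode:
    condition: Callable[[dict], bool]   # the conditional stored at this vertex
    if_true: Union["RuleNode", str]     # edge followed when the condition holds
    if_false: Union["RuleNode", str]    # edge followed otherwise

def evaluate(node: Union[RuleNode, str], record: dict) -> str:
    """Walk the tree by following TRUE/FALSE edges until a leaf label is reached."""
    while isinstance(node, RuleNode):
        node = node.if_true if node.condition(record) else node.if_false
    return node

# A hypothetical two-vertex loan-screening rule.
tree = RuleNode(
    condition=lambda r: r["credit_score"] >= 650,
    if_true=RuleNode(
        condition=lambda r: r["debt_ratio"] < 0.4,
        if_true="approve",
        if_false="review",
    ),
    if_false="reject",
)

print(evaluate(tree, {"credit_score": 700, "debt_ratio": 0.3}))  # approve
```

On graph hardware, each iteration of that loop is a single pointer hop through memory.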
Anyone who has studied semantics might realize that when enterprise rules are connected to registered concepts, and these concepts are in turn connected to other rules, the result is indistinguishable from a knowledge graph. People who write rules engines don’t claim to be building knowledge graphs, but you can’t really build rules to be shared without connected knowledge. These connections are key factors in our ability to find and reuse related rules.
Note that when we use the phrase “registered concepts,” we are referring to a centralized place where concepts are added to an enterprise database with a focus on deduplication. Humans curate each concept, and multiple references to the same concept are merged together. For a good background on this topic, the reader is encouraged to review the concepts behind the ISO/IEC 11179 Metadata Registry standard.
Bottom line: Rules can be represented as graphs, and enterprise rules engines can be enterprise knowledge graphs. Connecting rules is a big deal.
Are Business Rules Code or Data?
One of my favorite discussion points for business units that are considering the migration to graph databases is to ask them a question that reveals their perspective on rules. I ask them, “Are business rules code or data?”
If they answer that business rules are code, we know that they may have one or more people maintaining procedural code such as Java, C, Python, Ruby, or Kotlin. Their mindset is that software developers create and maintain rules.
But if they answer that their business rules are data, we know they have a more enterprise-scale understanding of the creation and maintenance of business rules. They understand how business rules can be stored in a database, queried, extracted, maintained, and reused as software components. They might also discuss which rules are the most important to their business, how they are curated, how often they are executed, and how they are related to other rules. The key question is whether they see how rules are related to other things.
The more a business unit sees the relationships between rules and the external world, the more readily it will see the benefits of storing rules in an enterprise knowledge graph.
Neither of the code vs. data answers is wrong. But the answer reveals where the business units are in their evolution of business rules as reusable assets and the fitness of EKGs to store complex rule relationships.
Rule Taxonomies
Next, let’s analyze what types of rules should be stored in enterprise knowledge graphs and executed on the next generation of 1,000x graph hardware.
I should preface this discussion with the acknowledgment that rule taxonomies vary widely based on who you are talking to and the context of your discussion. For a great book on this topic, you might enjoy George Lakoff’s book Women, Fire, and Dangerous Things.
Data Validation Rules — These are rules that validate individual data elements or groups of data elements in a semi-structured document such as an XML or JSON file. A classic example is using an XML Schema to validate an XML file. Validation rules can return a Boolean TRUE/FALSE as well as a list of the rules that the document did not satisfy. Schema validation is a mature technology with many easy-to-use GUI tools like the oXygen XML Schema editor, which allows non-programming staff to create and maintain even complex rules.
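As a concrete illustration, here is a minimal sketch of schema validation in Python using the lxml library; the file names are hypothetical. Note how the validator returns both a Boolean result and a log of the failed constraints:

```python
from lxml import etree  # third-party library: pip install lxml

# Hypothetical files: a schema and the document to validate.
schema = etree.XMLSchema(etree.parse("customer.xsd"))
doc = etree.parse("customer.xml")

if schema.validate(doc):            # Boolean TRUE/FALSE result
    print("document is valid")
else:
    for error in schema.error_log:  # the list of failed constraints
        print(error.line, error.message)
```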
There are W3C standards for these rules, making it more difficult for vendors to lock you into their proprietary solutions. Using standards allows your rules to stay portable across many systems and stable over time. Smart IT departments leverage these rule-based standards despite vendors that try to convince you to use their proprietary formats.
Data type validation is also a built-in assumption in strongly typed programming languages. Quickly detecting a type mismatch in a function call allows us to isolate errors quickly and give users precise feedback, and it is part of the “fail fast” philosophy in software engineering.
Validation rules are usually self-contained entities that compare data elements with patterns. They don’t often need to reach into customer data to be executed. So they are not often considered candidates for conversion into decision trees of a knowledge graph.
Graph Quality Rules and SHACL
Graphs also have their own unique quality tests related to how the structure or shape of the data around a vertex can determine its quality. Although XML Schemas and Schematron are good at expressing quality rules as XPath expressions, they often don’t look outside a document to do their work. To express these rules for a modern LPG graph, we can look to the mature RDF-driven Shapes Constraint Language (SHACL). The SHACL standard is a great way to understand the complexities of expressing these rules in an enterprise knowledge graph.
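For a sense of what SHACL validation looks like in practice, here is a minimal sketch using the pySHACL library; the Turtle file names are hypothetical. Like XML Schema validation, it returns a Boolean plus a report of every shape violation:

```python
from rdflib import Graph
from pyshacl import validate  # third-party library: pip install pyshacl

# Hypothetical files: customer data and the SHACL shapes that describe
# what a well-formed Customer vertex must look like.
data_graph = Graph().parse("customers.ttl", format="turtle")
shapes_graph = Graph().parse("customer_shapes.ttl", format="turtle")

conforms, report_graph, report_text = validate(data_graph, shacl_graph=shapes_graph)
print(conforms)     # Boolean: does the data conform to the shapes?
print(report_text)  # human-readable list of shape violations
```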
Calculation Rules — These are rules that calculate numeric values. For example, an e-commerce call center can prioritize inbound phone calls based on the predicted lifetime value (LTV) of a customer. High-LTV customers might get routed to more senior call center staff with expertise in their specific purchasing profile. Calculation rules should be consistent across a line of business and resilient enough to work on multiple types of customer data. For the call-routing example, they need to run quickly enough not to delay the caller.
One variation of a calculation rule is a similarity score rule used in recommendation engines. Given a customer, can we calculate the most similar customer (from a pool of 1 million customers) in 100 milliseconds? FPGA hardware is ideal for these types of calculations.
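As a sketch of what a similarity-score rule can look like in software (before reaching for FPGAs), here is a vectorized cosine-similarity search over a pool of one million hypothetical customer feature vectors using NumPy:

```python
import numpy as np

def most_similar(query: np.ndarray, pool: np.ndarray) -> int:
    """Return the row index in `pool` with the highest cosine similarity to `query`."""
    # Normalize rows so a dot product equals cosine similarity.
    pool_norm = pool / np.linalg.norm(pool, axis=1, keepdims=True)
    query_norm = query / np.linalg.norm(query)
    return int(np.argmax(pool_norm @ query_norm))

rng = np.random.default_rng(0)
pool = rng.random((1_000_000, 16))   # 1M customers, 16 features each
query = rng.random(16)
print(most_similar(query, pool))
```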
Classification and Summarization Rules — These are rules that put items into one or more groups or a classification taxonomy. They can be simple IF/THEN rules or complex rules of nested conditions with many AND/OR statements that return the classification of any item. Classification rules can also be applied to entire documents and return a list of the concepts mentioned within a document (see the sketch below). Machine learning tools like GPT-3 have proved ideal for document classification and summarization.
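Here is a minimal sketch of a rule-based classifier with nested AND/OR conditions; the customer tiers and thresholds are hypothetical:

```python
def classify_customer(r: dict) -> str:
    """Place a customer record into exactly one tier using nested AND/OR conditions."""
    if r["annual_spend"] > 10_000 and (r["tenure_years"] >= 3 or r["referrals"] >= 5):
        return "platinum"
    if r["annual_spend"] > 1_000 or r["tenure_years"] >= 1:
        return "standard"
    return "new"

print(classify_customer({"annual_spend": 12_000, "tenure_years": 4, "referrals": 0}))
# platinum
```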
Transformation Rules — These are rules used to transform one type of data into another. For example, you might want to extract a serialized RDF representation of a graph and convert it into a JSON representation. Transformation rules can be complex, and languages such as XPath can be used to find patterns within documents using path expressions that contain complex wildcards. Databases often export data in one structure that must be transformed into other structures using a process called Extract, Transform, and Load (ETL). Automating the creation of transformations with machine learning is an active area of AI research called schema matching and schema mapping.
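For the RDF-to-JSON example, here is a minimal sketch using the rdflib library (version 6 or later, where JSON-LD support is built in); the input file name is hypothetical:

```python
from rdflib import Graph  # third-party library: pip install rdflib

# Parse an RDF/XML export and re-serialize the same graph as JSON-LD.
g = Graph().parse("export.rdf", format="xml")
print(g.serialize(format="json-ld"))
```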
Workflow Rules — These are rules that form the decision points in long-running transactions that involve humans. They are rules that transition a business event from one state to the next until a task is considered complete. For example, a publishing workflow might require an editor to approve a document by doing a grammar and spelling check. Once they press an “Approve” button, the rules allow the document to appear on a public website. Workflow rules are often created and reused in graphical editors such as BPMN tools and run on workflow engines that execute the rules and update the state of a task.
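A workflow engine's rules can be reduced to a state-transition table: the rule is the (state, event) pair, and its output is the next state of the task. Here is a minimal sketch of the publishing example; the state and event names are hypothetical:

```python
# Each (state, event) pair is a workflow rule; the value is the next state.
TRANSITIONS = {
    ("draft", "submit"): "in_review",
    ("in_review", "approve"): "published",  # the editor pressed "Approve"
    ("in_review", "reject"): "draft",
}

def apply_event(state: str, event: str) -> str:
    """Transition the task to its next state, or fail if the event is not allowed."""
    if (state, event) not in TRANSITIONS:
        raise ValueError(f"event {event!r} not allowed in state {state!r}")
    return TRANSITIONS[(state, event)]

state = apply_event("draft", "submit")   # in_review
state = apply_event(state, "approve")    # published
print(state)
```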
Business Definitions — Not everyone thinks of business definitions as being a type of rule. Most business glossaries just focus on creating a registry of terms for each project and writing an ISO-style definition for each term. But definitions are rules that bind a label with a meaning. They are rules that indicate what each business unit uses as its preferred label for a concept and what alternate labels form the synonym rings of a concept. Business glossaries often start out with a simple structure and then evolve into a concept taxonomy and, eventually, a concept graph or ontology.
I have worked on many projects where a business analyst sends me a spreadsheet of their terms and definitions. These terms are then grouped together and converted into a hierarchy, where they become a formal taxonomy of concepts. Taxonomies can then gain complex relationships that form a graph. That is an ontology.
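The W3C's SKOS vocabulary is one standard way to store exactly these label-binding rules: preferred labels, alternate labels (synonym rings), definitions, and broader/narrower taxonomy links. Here is a minimal sketch using rdflib; the namespace and concepts are hypothetical:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

EX = Namespace("http://example.org/glossary/")  # hypothetical namespace
g = Graph()
g.bind("skos", SKOS)

g.add((EX.Customer, RDF.type, SKOS.Concept))
g.add((EX.Customer, SKOS.prefLabel, Literal("Customer", lang="en")))
g.add((EX.Customer, SKOS.altLabel, Literal("Client", lang="en")))  # synonym ring
g.add((EX.Customer, SKOS.definition,
       Literal("A party that purchases goods or services.", lang="en")))
g.add((EX.Customer, SKOS.broader, EX.Party))  # taxonomy link: Customer under Party

print(g.serialize(format="turtle"))
```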
Inference Rules — These are rules that take an existing graph and look for missing links in the graph. The W3C defines inference as discovering new relationships. Just as there are workflow engines, there are also inference engines that encode their rules in structures such as RDF, RDFS, and OWL. These rules can be stored in simple text files, or they can be stored in a graph.
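Here is a minimal sketch of RDFS inference in Python using rdflib with the owlrl inference engine; the example triples are hypothetical. The engine discovers the missing link that fido is an Animal:

```python
from rdflib import Graph, Namespace, RDF, RDFS
import owlrl  # third-party library: pip install owlrl

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.Dog, RDFS.subClassOf, EX.Animal))  # a terminological (TBox-style) rule
g.add((EX.fido, RDF.type, EX.Dog))           # an instance (ABox-style) fact

# Materialize the RDFS entailments, adding the inferred links to the graph.
owlrl.DeductiveClosure(owlrl.RDFS_Semantics).expand(g)

print((EX.fido, RDF.type, EX.Animal) in g)   # True: a new relationship was discovered
```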
Discriminatory Rules — These are rules that take complex sets of unstructured data, such as images or text, and then discriminate between classes of objects. These rules learn the boundary between classes/labels in a dataset. For example, given a set of photos of cats and dogs, classify which photos are cats and which are dogs. The result is usually a probability score that the item contains a class — for example, there is a 90% probability that a given photo contains a dog. Discriminatory rules can overlap with classification rules, but they are usually associated with machine learning tasks on unstructured data. Most discriminatory rules are implemented in neural networks such as deep neural networks.
Generative Rules — These are rules that understand how to generate the details of an object, even when only a simple summary is given. Think of everything from a form field with “as-you-type” autocomplete, to the automatic suggestions generated by Gmail, to the generative text produced by Transformers such as BERT.
Bottom Line: There are many different types of rules, from simple to complex. There will not be a simple rule to determine which types of rules should be stored in an enterprise knowledge graph.
Deterministic vs. Statistical Rules
There are also two different ways that rules are created, maintained, and executed.
- Deterministic rules: the result is consistent and repeatable, no random functions are used to calculate the output, and the rules are easy to explain. Many areas, such as healthcare and clinical decision support, strongly prefer rules that can be explained.
- Probabilistic rules: where we use random or stochastic functions, statistics, and machine learning to create complex rules that can vary over time.
Most of the IF/THEN/ELSE rules that are created and maintained by humans are called deterministic rules because the results of the rules are clearly determined to be consistent over time. Given the same input to these rules, you should always get the same output.
Probabilistic rules can be much more complex. They are often executed in the form of a neural network that can be extremely large. It is not uncommon to have neural networks with millions or billions of parameters. The GPT-3 system uses 175 billion parameters.
TBox and ABox Rules
Many of you who have studied symbolic AI in the past are familiar with a taxonomy for rules that distinguishes between terminological components (known as TBox rules) and assertion rules, or ABox rules — assertions about instances in a knowledge graph. Although most AI people I work with have never heard of this rule taxonomy, I still think there is wisdom to be gained from separating rules into these categories.
Terminology components tend to be universal rules about the world and how things are related. Most items in glossaries, taxonomies, and ontologies are TBox rules. I always imagine my knowledge graphs having discrete subgraphs for storing TBox rules. TBox rules should have global read and execute permissions in a knowledge graph since they usually don’t need direct access to customer data. Reference data management (codes and their semantics) also fits into TBox rules.
ABox rules, on the other hand, are tightly commingled with customer data. To execute, they need to have access to the vertices and links about every customer. So their access control is tied to customer data access.
ABox rules also have the property that they must conform to all the TBox rules. So TBox rules are often thought of as “rules about rules.” As a consequence, TBox rules need more governance and review because their changes impact other rules. ABox rules, on the other hand, might only impact a small group of customers within a single business segment. They don’t need the same change control procedures.
The problem with the TBox/ABox classification scheme for rules is that there is really no precise definition of where to draw the line. There are many shades of gray. Rules that live higher in an ontology (the upper ontology) need to be very stable and controlled. Rules in the middle ontologies need some controls, but not as many. Rules at the bottom of an ontology are often maintained by administrative staff using tools like spreadsheets to keep the low-level rules up to date. Adding a new drug or medical device to a medical ontology is a good example of these low-level rules.
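One practical way to honor the TBox/ABox split, sketched below with rdflib's named graphs (the graph names and triples are hypothetical), is to keep the terminology in one subgraph with global read permissions and the customer assertions in another with tighter access control:

```python
from rdflib import Dataset, Namespace, URIRef, RDF, RDFS

EX = Namespace("http://example.org/")
ds = Dataset()

# Discrete named subgraphs: one for universal terminology, one for customer data.
tbox = ds.graph(URIRef("urn:example:terminology"))  # global read/execute
abox = ds.graph(URIRef("urn:example:customers"))    # access tied to customer data

tbox.add((EX.GoldCustomer, RDFS.subClassOf, EX.Customer))  # a rule about rules
abox.add((EX.customer42, RDF.type, EX.GoldCustomer))       # a customer assertion

print(len(tbox), len(abox))  # 1 1
```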
Which Rules Should Execute on Graph Hardware?
So now that we see an incredible diversity of enterprise rules and how they are used to solve business problems, let’s ask ourselves which of these would benefit from a 1,000x speedup on graph hardware? This is a complex problem, and there is no single answer for all rules. Here are some considerations:
Rule Containment: How many remote nodes in a graph or remote services need to be accessed by a rule? Simple rules that check against string patterns are very self-contained. Rules that need to validate an ID in a remote system are not contained, and many depend on remote services that have low service levels. These complex rules may need additional timeout rules for when remote systems are down.
Rule Performance: How quickly does the rule need to run? Is there a person waiting for the rule to run? Is there a web page that could render faster if the rule ran 1,000x faster? These are all indications that you should consider migrating the rules to run in an enterprise knowledge graph.
Input Scope: How many inputs are needed to create the output of this rule? How difficult is it to calculate the inputs?
Rule Complexity: Do rules depend on other rules? Could these rules be configured as pointer hopping over a graph?
Conversion to Decision Tree Formats: How easy is it to convert these rules into a standard decision tree format such as Decision Model and Notation (DMN)?
Frequency of Execution: How often does a rule get executed? The more often rules run, the more we should think about the benefits of converting them to efficient pointer-hop execution.
Rule Execution as a Service: Many of these rules can be quickly and efficiently executed as an external service. If the rules require parallel execution, an FPGA might be a low-cost way to execute rules such as similarity calculation rules.
Some rules, such as validation rules, need to run every time data comes into or goes out of a system. They are designed to run using in-memory execution steps, and it may not make sense to put them in a knowledge graph.
When Data Access Blocks Real-Time Rules
I think the key insight is to understand how difficult it has been to build rules engines that have good performance characteristics. Imagine a decision tree with 25, 50, or 100 branches. Each branch required a system to run an SQL query in an RDBMS and wait for the result to be returned. Running simple extract rules on some systems would cause CPUs to max out, network traffic to spike, and disk access to hit its limits.
In knowledge graphs, we don’t have these problems. Everything is in RAM, everything can be accessed with simple pointer hopping, and the time for any decision is just the time for pointers to hop through memory.
Conclusion
I wish there were a simple set of rules I could give you about when to put your rules directly into your enterprise knowledge graph. Unfortunately, the world of enterprise rule management is still a complex place, and I can’t give you a clear decision tree that will work in all contexts. Your mileage may vary.
My suggestion is to be aware that there are many benefits to treating your rules as data: making them searchable, making them reusable, and tracking rule execution in a knowledge graph. I hope you keep an open mind about storing your rules in an EKG, understand the tradeoffs, and seek out the advice of experts when you are unsure of your own knowledge base.
As 1,000x graph hardware becomes a commodity over the next few years, we will see a gradual swing toward storing enterprise rules in an enterprise knowledge graph.