
Rule-Based Expert Systems (1984)

Author: mindcrime

Score: 57

Comments: 19

Date: 2021-12-04 06:19:15

Web Link

________________________________________________________________________________

KineticLensman wrote at 2021-12-04 11:10:35:

I did some work in the early 1990s on a rule-based Expert System Builder funded by the EU's Esprit research programme. It was implemented in Common Lisp, hosted on Lisp machines and then Macintosh systems, and had a frame-based knowledge representation.

The approach had various problems at the time, among them:

* what to do with a chain of reasoning if a previously asserted fact was retracted (i.e. when being questioned by the system, you realise that a previous answer was wrong)

We found it telling that none of the contributing companies continued to use the rule-based approach for very long once the EU funding dried up, and that none of them successfully commercialised it for a third party. However, speaking as an experienced programmer rather than an AI researcher, I found in at least one case that the rule-based approach was a good way of getting domain experts to articulate the heuristics they used, which we then embedded in a conventional application. The GUI we developed for the frame-based component was also repurposed for a very successful in-house modelling tool, so the codebase wasn't a complete loss.

j-pb wrote at 2021-12-04 12:05:28:

> * what to do with a chain of reasoning if a previously asserted fact was retracted (i.e. when being questioned by the system, you realise that a previous answer was wrong)

Mixing up the concepts of "forgetting" (a monotonic operation that maintains a system's consistency at the cost of completeness) and "retraction"/"deletion" (a non-monotonic operation that requires truth maintenance and chooses completeness, potentially at the cost of the correctness of the whole system) is the billion-dollar NULL mistake of the knowledge representation and database world.

PaulHoule wrote at 2021-12-04 15:38:29:

Many symbolic A.I. programs (e.g. MYCIN) were based on production rules engines. Great progress has been made in rules engines since then, but production rules have never gone mainstream, and I've long wondered why.

Retraction is largely solved. Rules engines such as Drools and Clara use the RETE algorithm with hashtable/B-tree indexes and can match against millions of rules. They work particularly well for "complex event processing", where you gather up events (say, from a rocket starting up, or a financial trade) into compound events you can code against.
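To make the retraction idea concrete, here is a minimal sketch in Python. It is hypothetical and deliberately naive: base facts are kept separate from derived facts, and retraction simply rederives everything, whereas Drools and Clara update a RETE match network incrementally instead of rescanning.

    class Engine:
        def __init__(self):
            self.base = set()    # facts asserted from outside
            self.rules = []      # each rule: facts -> iterable of derived facts
            self.facts = set()   # base facts plus everything derived from them

        def add_rule(self, rule):
            self.rules.append(rule)
            self._rederive()

        def assert_fact(self, fact):
            self.base.add(fact)
            self._rederive()

        def retract(self, fact):
            # Retraction as rederivation: discard all conclusions and
            # recompute from the surviving base facts. Correct but slow.
            self.base.discard(fact)
            self._rederive()

        def _rederive(self):
            self.facts = set(self.base)
            changed = True
            while changed:
                changed = False
                for rule in self.rules:
                    for new in rule(self.facts):
                        if new not in self.facts:
                            self.facts.add(new)
                            changed = True

    def overheat_rule(facts):
        # IF a sensor reports overheating THEN raise an alarm for it.
        return {("alarm", s) for (kind, s) in facts if kind == "overheat"}

    e = Engine()
    e.add_rule(overheat_rule)
    e.assert_fact(("overheat", "pump-1"))
    assert ("alarm", "pump-1") in e.facts
    e.retract(("overheat", "pump-1"))   # the derived alarm disappears too
    assert ("alarm", "pump-1") not in e.facts

The hard engineering in real engines is exactly what this sketch skips: doing that rederivation incrementally, at scale.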

People who use this kind of tool

http://www.hets.eu/

would say their tools are much better and that the "logic" of production rules is a pale shadow of what SAT solvers, theorem provers, etc. can do.

For ordinary "Business Rules", production rules engines compete against implementing the rules in ordinary code.

http://inform7.com/book/WI_1_1.html

is a toolkit for writing text adventure games in what looks like English. It just blasts through rules in order, using defaults as its primary mechanism of control.

Production rules and logic compete with databases in the sense that a more expressive system won't scale as well (you want to fit in the L3 cache, if not in RAM). RDFS and OWL implement limited-but-useful subsets that scale better than others.

I've used production rules for data transformation one record at a time, and they work well for this, but today I implement the same organizational principles as a set of Python functions that act on dicts. Production rules do

https://en.wikipedia.org/wiki/Transitive_closure

idiomatically, but in the limited cases where you need it you can look it up in an algorithm book.
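For illustration, a minimal fixpoint loop in Python (a hypothetical example, not code from any particular engine). It is the imperative form of the rule a production system would state declaratively as "IF reachable(X, Y) AND reachable(Y, Z) THEN reachable(X, Z)":

    def transitive_closure(pairs):
        closure = set(pairs)
        changed = True
        while changed:
            changed = False
            for (a, b) in list(closure):
                for (c, d) in list(closure):
                    if b == c and (a, d) not in closure:
                        closure.add((a, d))
                        changed = True
        return closure

    parent = {("alice", "bob"), ("bob", "carol")}
    print(transitive_closure(parent))
    # {('alice', 'bob'), ('bob', 'carol'), ('alice', 'carol')}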

Production rules also are great for highly interactive systems.

I have used production rules in the control plane for a distributed data processing system where the system compiled a description of a job into a set of rules.

They are great for "internet-of-things" applications, both because "IF-THIS-THEN-THAT" feels natural and because of the compound-event capability.

It seems that they'd even be an answer to the asynchronous communication problem in JavaScript.

HerrMonnezza wrote at 2021-12-04 20:47:54:

What literature/books would you recommend for learning more about the subject, with a view to applying it in practical programs?

leeuw01 wrote at 2021-12-05 15:57:39:

+1, I'm currently investigating rule-based systems at my job and I would love to understand the theory behind them better.

blankfrank wrote at 2021-12-04 17:41:34:

For what it’s worth, I worked for an insurance company 88-96 and was expert systems developer 001. Our expert system ran on PCs in our 30 branches, then migrated to a mainframe. If you wanted serious reliability, that was where it was at. Big Blue. Anyway, I loved knowledge engineering, on-call support, and being deep into its development (Aion DS). It captured $4 million in additional revenue, as reported by the underwriting department. It ran for 6+ years total (then I left), and then someone wanted “greater ROI”. They cut 1/3 of the underwriting staff: people I interviewed, took customer support calls from, learned about as people, laughed with, etc. I didn’t feel so proud of my “accomplishments” afterwards. And anyone who thinks that rule-based expert systems, with all their flaws, were experimental-grade stuff: you haven’t taken a support call at 3am because YOUR hand-crafted rule (of about 1000 total) failed and it’s costing your employer money. Now we have deep neural nets that are effectively unexplainable (been there and feeling the pain); consider the DARPA XAI retrospective report to back up my claim. What a long strange inference it’s been.

popcube wrote at 2021-12-05 03:07:50:

This explains why people sometimes don't like to offer their knowledge. He will let all the people in his region starve, then he'll get abusive phone calls at 3 a.m., and finally he will be fired because he is useless.

zyl1n wrote at 2021-12-04 06:47:18:

First sentence of chapter 1: "Artificial Intelligence (AI) is that branch of computer science dealing with symbolic, nonalgorithmic methods of problem solving."

Interesting how it's almost the opposite of what the field is now.

mindcrime wrote at 2021-12-04 08:32:17:

_Interesting how it's almost opposite of what the field is now._

That's exactly what makes this stuff interesting. People sometimes seem to be struck by something you might call "path dependence blindness", where they see the path taken as the only path that might have been taken, and assume that anything off that precise path is useless. However, it is sometimes useful to look back and ponder those forks in the path which were not chosen, or which were abandoned (perhaps prematurely).

Can you take an old idea from something like MYCIN and apply it in a new way, combined with other, newer ideas? Or maybe find something that was a good idea in principle but "ahead of its time" with regard to computing power or some other constraint? Maybe, maybe not. But it's exploring those kinds of things that always leads me back to stuff like this.

theK wrote at 2021-12-04 08:11:24:

Well, this is from the 80s, right? IIRC, the switch from symbolic to ML-oriented solutions in AI really hit the mainstream around the late 00s.

galaxyLogic wrote at 2021-12-04 08:42:17:

Right, but is Machine Learning used to do medical diagnosis?

triska wrote at 2021-12-04 09:50:56:

Very interesting!

Especially in sectors that are heavily regulated, such as medicine, legal professions, and also the public sector, automated rule-based reasoning can be very useful and helpful.

It is particularly interesting that the system can produce _explanations_ for _why_ something was done or recommended. For example, chapter 19 discusses explanations for _dosage selection_:

https://people.dbmi.columbia.edu/~ehs7001/Buchanan-Shortliff...

It is also interesting that the _same_ inference mechanism can be applied also in different contexts. Quoting from page 296:

_"The flexibility needed by MYCIN to extend or modify its knowledgebase was exploited in EMYCIN. Neither the syntax of rules nor the basic ideas underlying the context tree and inference mechanism were changed."_

Sample interaction, using custom rules for reasoning in business-related contexts, shown on page 300:

    20-Oct-79 14:16:48

    -------- COMPANY-1 --------

    1) What company is having a problem?

    ** IBM

    2) Is the problem with payroll or inventory?

    ** PAYROLL

    3) What is the number of employees of ibm?

    ** 10000000

    Conclusions: the tools to use in solving the problem are as follows: a large computer.

There are so many application areas where the inference rules are already _known_ and need not be _guessed_ from data. In such cases, where the challenge is to automate the existing _known_ rules, I hope that similar formalisms can help us build better and more reliable software.
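The flavour of such a consultation is easy to sketch. Here is a toy backward-chaining loop in Python (entirely hypothetical, and nothing like EMYCIN's actual implementation): the system resolves a goal via a rule where one applies and asks the user otherwise, just as in the session above.

    def consult(goal, rules, answers):
        if goal in answers:
            return answers[goal]
        for head, needs, conclude in rules:
            if head == goal:
                values = [consult(n, rules, answers) for n in needs]
                answers[goal] = conclude(*values)
                return answers[goal]
        answers[goal] = input(goal + "? ")   # no rule applies: ask the user
        return answers[goal]

    rules = [
        ("tool", ["problem area", "number of employees"],
         lambda area, n: "a large computer" if int(n) > 1000
                         else "a small computer"),
    ]

    print("Conclusion:", consult("tool", rules, {}))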

denton-scratch wrote at 2021-12-04 15:04:14:

> the system can produce explanations

That is the killer edge that rule-based systems can offer. You can question them about how they reached their conclusions; armed with the response, you can evaluate their reasoning and critique their conclusions.

Explanations aren't beyond the reach of ML systems _in principle_, AFAIAA, but I've yet to hear of any proposal about how one might construct a self-explaining ML system.

[Edit] I guess not being able to explain your reasoning might be considered an advantage, in some circumstances; if you can't show that a certain system is using defective reasoning (because it won't show you its reasoning), you can't show the system is defective, so it becomes harder to sue the manufacturer or relying party.

idealmedtech wrote at 2021-12-04 15:52:54:

We're a medical device startup using a KB/RBR system for glucose control, and our greatest feature from a traceability/regulatory perspective is the ability to produce a report for each and every decision made: why we made that decision based on what we knew, and how that decision affects the output of the system. This can also be applied retroactively: all we need to know is the version of the system employed and the sensor inputs given, and we can fully replay the decision process.

This is something that, as you state, you simply cannot do with a deep learning approach. Regulators don't love black boxes, especially when it comes to human lives.

Is it highly specific and non-transferable? Yes. Did it take a decade of work to refine? Yes. But does it get the best results, even better than classical MPC/PID approaches? So far, yes. In medicine, it's results that matter, not the reusability of the tech employed. Human trials are just around the corner, and then we can find out with certainty whether our approach gives the results we think it will.
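The traceability pattern described here is simple to sketch. The following Python is hypothetical (not this device's actual software): every rule firing is logged with the inputs that triggered it, so each decision comes with a report, and replay is just rerunning the same engine version on the recorded inputs.

    import json

    ENGINE_VERSION = "0.1-sketch"   # hypothetical version identifier

    def decide(inputs, rules):
        # Evaluate every rule against the inputs, logging each firing so
        # the returned report explains the decision and permits an exact
        # replay given the same engine version and recorded inputs.
        trace = {"engine": ENGINE_VERSION, "inputs": inputs, "fired": []}
        decision = None
        for name, condition, action in rules:
            if condition(inputs):
                decision = action(inputs)
                trace["fired"].append({"rule": name, "decision": decision})
        trace["decision"] = decision
        return decision, json.dumps(trace)

    rules = [
        ("low-glucose", lambda d: d["glucose"] < 70,
         lambda d: "suspend insulin"),
        ("high-glucose", lambda d: d["glucose"] > 180,
         lambda d: "correction bolus"),
    ]

    decision, report = decide({"glucose": 65}, rules)
    print(decision)   # -> suspend insulin
    print(report)     # full audit trail; replay = rerun decide() on the same inputs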

YeGoblynQueenne wrote at 2021-12-04 10:25:59:

MYCIN was the first AI system to outperform human experts. From Wikipedia (because I haven't found that information in the posted book yet):

_Research conducted at the Stanford Medical School found MYCIN received an acceptability rating of 65% on treatment plan from a panel of eight independent specialists, which was comparable to the 42.5% to 62.5% rating of five faculty members.[5]_

https://en.wikipedia.org/wiki/Mycin#Results

mamp wrote at 2021-12-04 13:06:43:

Actually I think de Dombal’s work showed better-than-expert performance in 1974 (one of the references in the book):

de Dombal, F. T., Leaper, D. J., Horrocks, J. C., Staniland, J. R., and McCann, A. P. 1974. Human and computer-aided diagnosis of abdominal pain: Further report with emphasis on the performance of clinicians. British Medical Journal 1: 376-380.

YeGoblynQueenne wrote at 2021-12-04 17:13:12:

Thanks! I haven't read that paper, or, to be fair, any of the MYCIN papers. I only knew this from... word of mouth, I think? Let's put it down to "lore".

I'll read the paper and revise my knowledge. Thanks again!

matheusmoreira wrote at 2021-12-04 10:35:14:

> This study is often cited as showing the potential for disagreement about therapeutic decisions, even among experts, when there is no "gold standard" for correct treatment.

Maybe an evaluation of first-line treatments for hypertension and diabetes would have been better.

mamp wrote at 2021-12-04 13:00:26:

Interestingly, MYCIN was used by two of Shortliffe’s students to demonstrate the limitations of rule-based inference, and of trying to handle uncertainty in rule-based systems, in the fantastic paper “The Myth of Modularity” [1], which argued for probabilistic approaches to reasoning.

[1] https://arxiv.org/abs/1304.3090