December 3, 2015

AI memories — expert systems

This is part of a four-post series spanning two blogs.

As I mentioned in my quick AI history overview, I was pretty involved with AI vendors in the 1980s. Here are some notes on what was going on then, specifically in what seemed to be the hottest area at the time — expert systems. Summing up:

First, some basics. 

Anyhow:

All told, the expert system vendors didn’t accomplish much. However, there were a few successes in financial services, famously including credit-decision support at American Express. Airlines adopted the technology fairly vigorously, in areas such as scheduling and aircraft maintenance. There were attempts in manufacturing too, including in materials selection (I forget the use case — something to do with composites) and, again, equipment maintenance. In general, a number of application categories — and this fits with the EMYCIN antecedent — could be characterized as having something to do with diagnosis.

The most remarkable expert system story I recall, however, was of something entirely built in-house. At a small conference in 1984 organized by John Clippinger, a guy from United Airlines said that they had built a system for flight pricing, and were gaining over $100 million/year from it. I just assumed he was misspeaking, but other people thought he was serious. Either way, it was a long time before United allowed the subject to be aired in public again.

In contrast, Teknowledge’s standard demo was stunningly trivial — a Wine Advisor, based on about 40 rules (if I recall correctly), selecting a wine to go with your hypothetical meal. When I suggested they develop a more serious demo, they pled resource constraints. This rang alarm bells for me about the difficulty of using the technology; I should have paid more attention to those alarm bells.

Teknowledge was basically the company that commercialized EMYCIN. In general it was the most hyped-up of the expert system technology companies, with support from the relevant big-name Stanford professors and so on, especially Ed Feigenbaum. They raised a bunch of money (I got my biggest-ever investment banking bonus for helping) and got some visibility, but didn’t do much to overcome the technical problems I highlighted at the start of this post. Jerry Kaplan also got his first commercial experience there.

Intellicorp’s product KEE (Knowledge Engineering Environment, plus the obvious pun that Knowledge is Key) was more in the vein of STEAMER. The canonical KEE demo was what we’d now call a simple real-time BI dashboard — with dials and so on, so the dashboard metaphor could be taken pretty literally.* Intellicorp later pivoted from expert systems to object-oriented programming, and that was frankly a better architectural fit. Ed Feigenbaum’s name is also associated with them, but I remember them more as being folks out of Texas Instruments (which had some AI efforts in the 1970s).

*Even so, KEE wasn’t used for much in the way of database query. I’ve now forgotten why.

Intellicorp also knew how to have fun. COO Tom Kehler led conference after-party sing-alongs with his guitar. Workstations were named after famous disasters — Tacoma Narrows Bridge, Crash of ’29, Apollo 13 and so on. (The last of those was said to have confused their Apollo salesman.) Managers put their desks in hallways, defying anybody who still had an office to complain about cramped quarters.

Inference Corporation marketed its rules engine ART on the strength of allegedly superior performance, because it was written in C and because it relied on the forward-chaining RETE algorithm rather than EMYCIN’s back-chaining. Sometime after they started telling the performance story, it actually became true. 🙂 Even Inference didn’t get much out of the inference engine market, however, and eventually the product pivoted (unsuccessfully) to general object-oriented app development, while the company also pursued an effort in case-based reasoning.
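For anyone who doesn’t remember the jargon: back-chaining (the EMYCIN style) starts from a goal and reasons backward through the rules that could establish it, while forward-chaining (the RETE style) starts from known facts and fires whatever rules match, possibly producing new facts. Below is a minimal sketch of the forward-chaining idea in Python. The rules and facts are invented for illustration, and a real RETE implementation avoids this naive rescan-everything loop by matching rule conditions incrementally.

    # Naive forward-chaining: keep firing any rule whose conditions all
    # hold until no rule can add a new fact. (RETE's contribution was
    # replacing this rescan-everything loop with incremental matching.)
    facts = {"engine-light-on", "engine-warm"}

    # Each rule pairs a set of required facts with a conclusion.
    # These diagnosis-flavored rules are hypothetical.
    rules = [
        ({"engine-light-on", "engine-warm"}, "suspect-sensor-fault"),
        ({"suspect-sensor-fault"}, "schedule-maintenance-check"),
    ]

    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:  # subset test
                facts.add(conclusion)
                changed = True

    print(sorted(facts))
    # ['engine-light-on', 'engine-warm', 'schedule-maintenance-check', 'suspect-sensor-fault']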

The glossary to ART’s documentation is the first place I saw the entry

Recursion. See recursion.

I later stole that joke for my 1990s book on application development tools.

I had little contact with Carnegie Group — I don’t get to Pittsburgh very often — but I think it wound up focusing on the manufacturing sector.

Two other expert system companies are perhaps worth a mention:

And with that, I think I’ll finish this post. If there’s enough interest, I can write up more later.

Comments


  Jerry Leichter on December 3rd, 2015 8:45 pm

    R1 traced back to work done at CMU. DEC funded some research in the late 1970s by John McDermott on a system internally called XCON, for eXpert CONfigurator. It configured VAX-11/780s – a complicated job, because options had various prerequisites, there were space and power constraints that interacted in complicated ways, and there were “silly” but important things like fillers that would pass signals through unused slots – forget to include them in an order and the machine wouldn’t work.

    XCON was written in OPS4, an expert system development language that was itself implemented in LISP. It had many tens, probably hundreds, of rules, and was considered a great success for expert system technology.

    I became involved in transferring the project from a research model to in-house use. I first got the sales pitch – all about what a great idea rule-based systems were and how they allowed you to express expert knowledge in a simple way.

    I later got to see what was behind the curtain. For example, the development cycle looked like this: McDermott would visit DEC, talk to an expert, and learn some new rules. He’d add them to the existing rule base – and the system would go into a loop when run. (This was not supposed to be possible – the algorithms supposedly picked “the best rule” to run at any time, and that rule would change the system so that it would not be required again.) Hours would be spent figuring out – using completely inadequate debugging tools – where the problem was coming from, tweaking to break the loop, and then repeating for the next loop. Eventually the thing settled down – and it would be time for the next update.

    To help control looping, there was a magic “T” attribute that got incremented by rules as they ran, and which they all depended on. T amounted to a clock imposed on the rules, forcing them to run in a pre-determined order. This effectively disabled the great promise of expert systems: That they *automatically* picked the best rules to apply, with all rules potentially available at any time.
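    A toy version of the trick, sketched in Python rather than OPS4 (the rules here are invented, not real XCON rules): once every rule requires the current value of T and bumps T when it fires, “pick the best matching rule” collapses into a fixed sequence.

        # Toy illustration of the "T" hack: each rule matches only at its
        # own tick, and firing advances the tick, so the rules run in a
        # fixed order regardless of conflict-resolution strategy.
        facts = {"order-received"}
        T = 0  # the magic clock attribute

        # (tick, required facts, conclusion) -- hypothetical rules.
        rules = [
            (0, {"order-received"}, "slots-assigned"),
            (1, {"slots-assigned"}, "fillers-added"),
            (2, {"fillers-added"}, "config-complete"),
        ]

        fired = True
        while fired:
            fired = False
            for tick, conditions, conclusion in rules:
                if tick == T and conditions <= facts:  # subset test
                    facts.add(conclusion)
                    T += 1  # bump the clock; only the next rule can fire
                    fired = True

        print(T, sorted(facts))
        # 3 ['config-complete', 'fillers-added', 'order-received', 'slots-assigned']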

    Meanwhile, it turned out that the hardest parts of the problem weren’t even being solved by the expert system. It was possible for rules to escape into LISP – and in fact configuring Unibus adapters escaped into complicated, ad hoc LISP code.

    Later, OPS4 was replaced by OPS5 – a new implementation that ran on VAXes and wasn’t based on LISP. Porting XCON from OPS4 to OPS5 was, needless to say, a much harder job than initially expected.

    Despite these issues, XCON proved quite useful, and DEC developed a sizable internal group doing expert systems work. I don’t know what else came out of that group – I lost contact with them fairly quickly. But I’ll bet they seeded some of the later commercial efforts.

    My own take on expert systems was that they had actually developed a nice formalism and methodology for “knowledge extraction”: Taking the implicit knowledge that experts develop through experience and getting it down on paper. As a way to actually *implement* that knowledge on a computer … that was not nearly as big a success.

    In the mid-1990s, I had an indirect encounter with what may have been the last gasp of rule-based systems: attempts to diagnose the causes of failures in complex IP networks. It’s not that the approach didn’t work at all. But it ran into multiple failings, both in performance (the best of these systems could handle only a couple of updates to network state per second, which was orders of magnitude too slow to be useful in any real network) and in “fragility”: a classic problem with expert systems, which “didn’t know what they didn’t know” and would give completely absurd answers when given problems that went slightly beyond their knowledge base.

    At the time, I worked for a company called Smarts (eventually acquired by EMC) which used an entirely different technology to diagnose failures. Early on, we saw some competition from expert systems – but they crumbled quickly in the face of real networks that we managed with ease.

    — Jerry

