Saturday, April 11, 2015

David McGoveran Interview



DBDebunk readers should know of David McGoveran (see his bibliography under FUNDAMENTALS), whose work on relational theory and practice has appeared or been discussed on the old site and here over the years. On more than one occasion I have mentioned the Principle of Orthogonal Design (POOD), identified by David, who published work on the subject with Chris Date several years ago. The POOD is relevant to updating relations, and views in particular, and led to Date's VIEW UPDATING AND RELATIONAL THEORY book.

I recently mentioned that David's and Date's understandings of the POOD have diverged since their joint effort: Date and Darwen now reject the POOD as formulated then, while David has problems with Date's understanding of it and with their book THE THIRD MANIFESTO (TTM).

David is working on a book tentatively titled LOGIC FOR SERIOUS DATABASE FOLKS, in which he will detail his views on the RDM in general and on the POOD and view updating in particular. In the meantime, I asked him to publish an early draft of a chapter on the latter subject, which he did--Can All Relations Be Updated?--and which he has just revised.

He has asked me to post a clarification on the nature of his differences with Date and Darwen (see below), and I used the opportunity to interview him about his impressive career, which covers much more than database management. David provided written answers to my questions.



My Position on TTM 

by David McGoveran

Of all the research on the RM, I can think of no work I would more firmly require a professional or student seriously interested in the RM (or databases in general) to read than TTM. Inasmuch as Chris Date involved me early on, as the first bits of TTM were being developed, I even feel a small degree of intellectual ownership and certainly of influence.

Over the next year or so, I will finally be publishing my concerns about TTM and other writings by Chris Date (and sometimes Hugh Darwen), mostly in LOGIC FOR SERIOUS DATABASE FOLKS.


I want to make it clear that it is not my intent to demean or detract from TTM. My disagreements with TTM are very specific, few in number, and highly technical, addressing issues that TTM ignores. The flaws they address, however, have unintended and exceedingly grave consequences.

It is and always has been my sole desire to help make TTM "bullet proof" and to provide constructive criticism. The bits I consider flaws are, I firmly believe, correctable in a way that does no damage to TTM and only improves its power and elegance. Sadly, I don't think Chris and Hugh understand my position. Their understanding of my own research is, unfortunately, flawed and has led to misquotes, misstatements, and dead ends. I hope to correct that circumstance before it is too late for me to do so.

I consider Chris and Hugh to be very good friends and colleagues, despite technical disagreements or disagreements on proper procedure in handling contributions or correspondence. Chris, in particular, has been a helpful supporter and absorber of much of my research. I have nothing but respect for both of them and for their expertise in computing language development, among other obvious skills, knowledge, and intellect. Chris' explanatory skills are almost unsurpassed -- though tempered by his firm rejection of his audience's persistent attempts to reduce the subject matter to commercial products <grin>.

Interview with David McGoveran

by Fabian Pascal

I've known and worked with David since the 1980s. He has been consulting on relational databases and applications since early 1983, with clients like AT&T, Blue Cross, Digital Equipment, Goldman Sachs, HP, IBM, Microsoft, MCI-Worldcom, Oracle, and many others. He authored several articles and two books with C. J. Date. In fact, Date's book VIEW UPDATING AND RELATIONAL THEORY is largely about David's research. In the Foreword to that book, Hugh Darwen acknowledges David's influence on THE THIRD MANIFESTO (TTM). David has been a member of the IEEE (since 1978) and is a lifetime member of both the ACM (1983) and the AMS (1996).

FP: David, thank you for doing this. You've provided me with a copy of your CV, and it has prompted a number of questions. I think the answers will be interesting to the readers of dbdebunk.

DM: My pleasure, Fabian.

FP: Can you begin by telling us a little about your early life?

DM: I was born in 1952 about 40 miles east of San Francisco, California, the youngest of four siblings. We lived in a very small town on the San Joaquin River delta. Mine was a poor, blue collar family, but proud of being self-sufficient. My father worked in asbestos (which killed him) and my mother was a homemaker who often took on paying work to help out. Neither had a high school education, but both were intelligent, self-educated people. My father was gifted with mechanical things – it seemed he could repair anything mechanical – and with math. He was also interested in geology, with the unintended consequence that it spurred my interest in science and math. I always wanted to know how things worked and what they were made of, so my interests transitioned from studying rocks to chemistry to physics and then finally to mathematics. I was the first in my family to go to college, and that almost didn't happen. I was identified as gifted very early on as a child, and have been getting less intelligent with every passing year.

FP: You say you almost didn't go to college. Why is that?

DM: Well, the main problem was tuition and living expenses, which my family certainly could not afford. Up until the year I graduated high school, California had a free university system. That changed rather suddenly, and so going to the University of California became impossible. I should have received a National Merit Scholarship, but my application wasn't submitted by my high school counselor, who believed that "people like you don't go to college" – by which he meant poor blue collar people. My applications to several prestigious universities were accepted, but I hadn't known I needed to apply for financial aid until it was too late. So, I ended up going to a junior college for three years and working full-time jobs the entire time. It was there that I began the formal study of both logic and linguistics. I satisfied the associate degree requirements in mathematics.

FP: I see on your CV that you attended the University of Chicago and Stanford University. Can you tell me about that?

DM: I eventually entered a joint program in mathematics and physics at the University of Chicago (UChicago) on a combination of scholarship, loan, and work-assistance. I had already completed most of my general education requirements by then and had placed out of a lot of advanced mathematics by examination. The result was that I was able to take graduate level courses in my junior and senior years, especially in logic, cognition and communication. In fact, I satisfied virtually all the requirements for a degree in theoretical linguistics which I decided not to accept. Even with those advantages, finances became tight. My father became very ill and was forced into early retirement. Those and other problems caused me to drop the double major in my senior year and focus on the degree in physics. I then left the university very late in my senior year.

FP: You say you studied logic while at junior college. Tell me about that.

DM: When I was about fourteen, I read Alfred Korzybski's Science and Sanity, which introduced me to the subjects of logic, semantics, linguistics, and general relativity. When I was at the junior college, advanced project studies were created for me in logic and in linguistics. In logic, I worked on translating one of Plato's works into first order predicate logic in order to analyze the coherence of some of the arguments. In linguistics, I worked on trying to represent both the surface grammar of English and some kind of generative grammar in a formal language. Both my logic and linguistics professors (Robert Neustadt and William Miller, respectively) were very supportive and gave me introductions to professors at UChicago. In fact, I would eventually – around 1981 – develop an English grammar recognition and generation computer system with William Miller called Sentax. The system simulated a hierarchical, non-deterministic state machine.

FP: Did you continue to study logic at the university?

DM: Yes. Shortly after I arrived at UChicago I was introduced to the work of Birkhoff and von Neumann on quantum logic and was fascinated. James McCawley (theoretical linguistics – Chomsky's fourth graduate student) and David McNeill (psycholinguistics and semiotics) were among my professors. Jim McCawley offered a two-semester graduate seminar on "linguistic logic" – now called computational semantics – and was writing his book "Everything that Linguists Have Always Wanted to Know About Logic (But Were Ashamed to Ask)". The courses were a great survey of different systems of logic and their relevance to natural language. With Jim's encouragement, I wrote a paper on why the surface grammar of natural language had to be a kind of quantum logic. Under McNeill's tutelage I developed the notion of a "quantum of meaning" and then applied quantum logic to the generation and recognition of natural languages and wrote a paper on that research. This work would later influence my work on database theory, especially design.

FP: You said you were employed while attending college. What kind of employment did you have?

DM: I did a lot of things while attending junior college. After I went to UChicago, I worked first at LASR - the Laboratory for Astrophysics and Space Research (part of the Enrico Fermi Institute, now a national laboratory) - doing data analysis, radiation detector calibration, and radiation characterization for cosmic ray research under Dr. Peter Meyer. LASR was an amazing place to work. For example, Chandrasekhar, the famous black hole astrophysicist and a colleague of Stephen Hawking's, was just down the hall from me. I actually worked in "the pit" where the first nuclear reactor was built. That job was my introduction to complex data analysis by computer.

FP: What else did you do?

DM: I spent one summer working under Dr. Chuck Savage at Dow Chemical's Western Applied Science and Technology Laboratories in Walnut Creek, California. My job was data collection and data analysis pertaining to packaging agents for kidney dialysis filters. In my last year and a half at UChicago, I worked as the supervisor of medical electronics in the clinical laboratories at the UChicago Hospitals and Clinics under the Director, Dr. Rochman. I was on call 24/7 and sometimes rode my bike through pretty rough south Chicago and Hyde Park neighborhoods in the middle of the night to go in and repair equipment.

FP: Did you do any other jobs?

DM: When I left UChicago, the employment situation in the sciences was pretty bleak. After some time in California, I went to Baltimore, Maryland and found temporary work assembling a cancer research laboratory at Johns Hopkins University. When that ended, I returned to California and got a job as a physics associate at Stanford Research Institute in the fall of 1976. There I joined the team developing field ionization mass spectroscopy under Dr. Michael Anbar.

FP: Isn't 1976 when you started Alternative Technologies?

DM: More or less. Stanford Research Institute, which soon became known as SRI International, eventually transitioned from employer to client of Alternative Technologies. Eventually I had the opportunity to work on several projects there, including the development of the encoding of photographic negatives, toners for copiers, breast x-ray tomography, silicon wafer dislocation detection, and solid state UV fire detection. I also did some classified research work during that time. Throughout, my work involved sophisticated data analysis.

FP: How did Alternative Technologies come into being?

DM: Starting in junior high school, I began doing research that had potential commercial applications. My science and math teachers explained to me that I had to protect intellectual property. With that in mind, I had always planned to create some sort of company to develop and market my independent work. About the time I joined SRI, I had filed my first patent application and officially founded Alternative Technologies.

FP: Junior high school is pretty early to do research. What kind of research were you doing?

DM: The first project was on a quantum mechanical process to explain catalytic reactions, which were poorly understood back then. I had read several texts on quantum mechanics when I was about thirteen or so, including Gerhard Herzberg's Atomic Spectra and Atomic Structure, Max Born's Atomic Physics, and Linus Pauling's Nature of the Chemical Bond. The last made a big impression on me and suggested an explanation of catalysis that I went on to investigate. Then, while in high school, I developed a programmable slide rule. I couldn't afford to apply for a patent, so I obtained a scientific copyright on it with the help of my math and physics teachers. Unfortunately, that one came just before Texas Instruments announced the programmable calculator, which removed any hope of commercialization.

FP: I see that you published papers related to logic while at SRI. Tell me about that.

DM: While at UChicago, I became acquainted with a number of researchers in various fields, including physicists John Wheeler and David Bohm, and neuroscientist Karl Pribram. Karl, a professor at Stanford, had developed a theory that memory was distributed and holographic. Karl and I met on occasion after I moved back to California. About that time, Karl met a physicist by the name of Eddie Oshins who had an interest in quantum logic and was applying it to an understanding of schizophrenia. Karl put the two of us in touch and we wrote some papers together on that and related subjects. Shortly before the first of those papers, I wrote and published a paper on limitations to the application of fuzzy logic.

FP: You've said that you were mentored by H. Dean Brown and Dr. Cuthbert Hurd. How did that happen?

DM: While at SRI, I was fortunate to get to know some pretty impressive people. Among those was Hew Crane, who had worked at IBM early on and then under von Neumann at the Institute for Advanced Study in Princeton. He and Doug Engelbart (inventor of the computer mouse) were colleagues, and he was the inventor of the eye tracker. Anyway, Hew and I co-sponsored a series of multi-disciplinary seminars at SRI. Among the attendees were Dr. H. Dean Brown, SRI's Willis Harman, and Stanford professors Karl Pribram and H. Pierre Noyes. At that time, Dean Brown was a partner with Dr. Cuthbert Hurd in a consulting firm called Picodyne. He invited me to meet with him and Dr. Hurd, and ultimately they took me under their wings and taught me the business of software consulting. Dean and I went up and down Silicon Valley implementing turnkey business computer systems using repurposed Zilog development systems.

FP: You say that Alternative Technologies specialized in the software architecture of distributed systems. What was the first such system you designed?

DM: In 1977-78 I was asked to investigate ways to apply computer technology to facilitate a psychology and global social issues conference at the University of Toronto. To that end, I ultimately designed a conferencing system that would allow participants to quickly report on their impressions of a topic, consolidate that information and provide it as feedback before the end of the day. It was a kind of early interactive wiki with some text processing capabilities. I convinced Digital Equipment Corporation to supply the computers, which unfortunately arrived too late to be used. We had to implement a manual fall-back system. That experience, along with the influence of Dean Brown and Dr. Hurd, really focused Alternative Technologies on the design and development of distributed software systems.

FP: Dr. Brown and Dr. Hurd were pretty famous weren't they?

DM: Yes. Dean Brown was one of the founders of Zilog and a pioneer in educational software. He was a physicist who had been at the Institute for Advanced Study, where he and Einstein used to play the game of Go at lunchtime. Dr. Hurd was a mathematician who was in many ways responsible for establishing computing at IBM back in the late 1940s and 1950s. Among his employees was another mathematician named E. F. Codd. I strongly urge anyone who doesn't know about Dr. Hurd's influence on the history of computing to do a little research. Dr. Hurd's and Dean Brown's mentorship changed my life and career.

FP:  How did you end up at Stanford University?

DM: A technicality had prevented me from getting credit for a mandatory course at UChicago after I left there in 1976. So, with Pierre Noyes' help, I quickly became a non-matriculating student at Stanford, finished the required course work and attended graduate courses.

FP: What sort of graduate courses?

DM: I took graduate seminar and project courses in cognition like Karl Pribram's neuropsychology course. Another student in that class was "Penny" Patterson, the trainer of Koko, the gorilla that learned sign language. Other courses were in philosophy of science like Pat Suppes' course on logic and foundations of physics. Suppes was another person who had a significant influence on my later work.

FP: Even after being a student there, you've had a relationship with Stanford for many years haven't you?

DM: Yes. Pierre Noyes, a theoretical physicist at Stanford Linear Accelerator (SLAC) National Laboratory, invited me to join an international research group at Cambridge University, England called ANPA, which stands for Alternative Natural Philosophy Association. (In the days of Isaac Newton, a physicist was called a natural philosopher.) Eventually I served as newsletter co-editor and secretary-treasurer, and in 1984 co-founded a chapter of the organization at Stanford University called ANPA West. It was through Pierre and that group that I had the opportunity to learn about the combinatorial hierarchy and make contributions to that work in discrete models of physics and on the relevance of paradoxes, decidability, the halting problem and so on.  As a result, Pierre invited me to become a Visiting Scholar in the Theory Group at SLAC (1986-1992) and, many years later, a Visiting Industry Associate (2010-2012).

FP: Why didn't you go ahead and obtain an advanced degree?

DM: Unfortunately, my personal life – especially the health of family members – meant that I had to focus on earning a living. To make matters worse, three separate personal financial disasters struck between 1980 and 1996.

FP: Can you explain those disasters a bit more?

DM: In 1982-1983, my home was almost destroyed by extremely heavy rain and mudslides. The repairs completely consumed all my savings and caused some debt that took years to pay off. In 1989, my home was again seriously damaged in the Loma Prieta earthquake. To make matters worse, in early 1989, a client named Pacific Telesis took advantage of the death of my father to breach a consulting contract and to reverse engineer a trade secret software product I had spent years developing and bringing to market. It took every penny I had to pay my consultants and pursue the client legally. Ultimately, I won the lawsuit but lost the war. The loss of the trade secret meant I couldn't obtain venture capital, and the loss of resources meant I had to spend all my time pursuing income rather than on development and marketing.

FP: What was the product and what did it do?

DM: It was a middleware product called RAM – that stands for Relational Access Manager. It replaced the API of "relational" DBMS products like Britton Lee's database machine and Oracle, making it possible to write an application once and deploy it on whichever DBMS you wanted. In addition, it cleanly separated the host language from the query language, letting you change data representations in one without changing them in the other. On the host language side it supported languages like FORTRAN, COBOL, C, and C++, and on the query language side it supported dialects of Quel and SQL. The developer mapped a program's internal data structures such as linked lists, trees, and stacks to and from a relational database. It also had features like "object to/from relational" capabilities, exception handling, error management, DBMS location independence, and so on. I even had design plans for a degree of fault tolerance. At the time I first developed it, there were no interoperability standards and the SQL standard was just beginning. There was nothing else like RAM on the market.
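
A rough sketch of the kind of separation David describes – one application-facing API, with per-DBMS drivers translating a dialect-neutral query description into a vendor dialect – might look like the following Python. All names are invented for illustration; this is not RAM's actual interface.

```python
# Hypothetical sketch only: not RAM's actual API. The application talks to
# one access-manager object; per-DBMS drivers translate a dialect-neutral
# query description into a vendor dialect (here, simplified SQL and Quel).

from dataclasses import dataclass
from typing import Any


@dataclass
class Query:
    """A dialect-neutral description of a retrieval."""
    relation: str
    columns: list[str]
    restriction: dict[str, Any]  # column -> required value


class SqlDriver:
    def translate(self, q: Query) -> str:
        where = " AND ".join(f"{c} = '{v}'" for c, v in q.restriction.items())
        return f"SELECT {', '.join(q.columns)} FROM {q.relation} WHERE {where}"


class QuelDriver:
    def translate(self, q: Query) -> str:
        where = " and ".join(f't.{c} = "{v}"' for c, v in q.restriction.items())
        targets = ", ".join(f"t.{c}" for c in q.columns)
        return f"range of t is {q.relation}\nretrieve ({targets}) where {where}"


class RelationalAccessManager:
    """One application-facing API; the DBMS dialect is a deployment choice."""

    def __init__(self, driver):
        self.driver = driver

    def fetch_into_list(self, q: Query) -> list[dict]:
        statement = self.driver.translate(q)
        print(statement)  # a real system would submit this to the DBMS
        return []         # ...and map result rows into host-language structures


q = Query("trades", ["ticker", "qty"], {"desk": "equities"})
RelationalAccessManager(SqlDriver()).fetch_into_list(q)   # deploy against a SQL DBMS
RelationalAccessManager(QuelDriver()).fetch_into_list(q)  # or against a Quel DBMS
```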

FP: Sometime between 1980 and 1986, you must have become acquainted with the relational data model. How did that happen?

DM: It was almost a perfect storm. First, my interest in computing, mathematics and logic led me to read Codd's early papers in the late 1970s. Second, from about 1977-1981, my wife was secretary to Prof. Ed Feigenbaum, Chairman of Computer Science at Stanford and the father of expert systems. There I met Terry Winograd and database people like Jeff Ullman, Hector Garcia-Molina, and Gio Wiederhold, and became more familiar with their work. Third, in 1980, I took on a "side job" as Chairman of the Computer Science and Business Management Departments at a small private college (Condie College). While there I taught about data structures, file organizations, and database management systems. Fourth, in 1983, I designed and developed a computer integrated manufacturing system for use in the semiconductor industry called Fasttrack. It was distributed and needed a DBMS. I selected a very early relational DBMS implemented in hardware, the Britton Lee Intelligent Database Machine (IDM).

FP: What did Fasttrack do and how did it use a relational DBMS?

DM: Fasttrack was designed to implement real-time shop floor scheduling, quality management and analytics, process control, workflow, and equipment monitoring and control. Under the covers, it used some pretty sophisticated results from operations research implemented in an expert systems-like approach. For example, engineers could capture rules for detecting and responding to quality control problems, and the system used rules for dynamic routing, scheduling, and anticipatory equipment maintenance. It completely replaced the batch oriented MRP systems then available. For sophisticated analytics, I chose to integrate a statistical analytics and multi-dimensional modeling package from Bolt, Beranek and Newman (BBN) called RS/1. All that functionality represents a lot of computational power and a lot of data management. Given the capabilities of systems then available, that implied a hierarchy of distributed systems was required. Even more challenging, certain business requirements meant the database had to be modifiable during deployment and even operation. Taken together, the only hope was a relational database.

FP: Why did you select a database machine?

DM: Based on the requirements and my research, there was no way a software implementation would be fast enough. I needed a more scalable solution. Britton Lee offered a solution that enabled me to partition the database and federate the design, using more hardware as needed to handle load.  It also had some unique features, like parameterized and compiled "database commands." Just to be clear, the IDM's database commands can be thought of as relational calculus macros. They were transactional but not procedural.

FP: What was the next project you did that involved relational?

DM: In 1984, Citibank CEO John Reed contacted Dave Britton of Britton Lee. Reed asked him to recommend someone to help troubleshoot his pet project, an international funds transfer system which they were designing to use the Britton Lee database machine. Dave Britton recommended me and off I went to New Jersey. As a result of that work, I ended up having clients like Goldman Sachs, Bear Stearns, and Drexel Burnham Lambert on Wall Street for the next seven or eight years, helping with the design and development of their trading systems and databases. In fact, almost all my commercial clients over the next ten to twenty years were building "bleeding edge" software systems, often incorporating relational databases.

FP: What did you do next?

DM: As a result of my relationships with Britton Lee and BBN, I ended up developing a commercial interface between RS/1 and the Britton Lee IDM. That relationship led to learning about and doing a project on BBN's massively parallel processing (MPP) machine, the Butterfly. It served as a great introduction to MPP. Years later I would end up doing projects on other MPP systems such as Oracle nCube, Sybase's Navigation Server, and Informix XP.

FP: At some point, did you become acquainted with Dr. Codd?

DM: In 1983 Codd gave a talk to Britton Lee. After the talk, I approached him, introduced myself, and asked him a very technical question in logic. He spent about fifteen or twenty minutes talking to me about it.

FP: What was the question?

DM: I asked him if the reason he isolated the relational algebra data sublanguage from the computationally complete host language was so that the query language could be declarative. A computationally complete language is not decidable, which means that there isn't a reliable single algorithm for evaluating arbitrary queries. You have to hand code and test an evaluation program for each and every query.
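
The decidability point can be illustrated informally: a restricted relational sublanguage over finite relations can be evaluated by one generic procedure that always terminates, whereas a query written directly in a computationally complete host language is an arbitrary program with no such guarantee. A toy Python sketch, with all names invented:

```python
# Toy illustration with invented names: every query in this restricted
# sublanguage is evaluated by ONE generic procedure that always terminates,
# because each operator only combines finite sets of tuples. A query written
# in a computationally complete host language is an arbitrary program, so no
# single evaluator can be guaranteed to handle every such query.

# A relation is a frozenset of tuples of (attribute, value) pairs.
employees = frozenset({
    (("name", "Ann"), ("dept", "D1")),
    (("name", "Bob"), ("dept", "D2")),
})


def restrict(rel, attr, value):
    return frozenset(t for t in rel if dict(t)[attr] == value)


def project(rel, attrs):
    return frozenset(tuple((a, dict(t)[a]) for a in attrs) for t in rel)


def evaluate(expr):
    """One evaluator for every query expression in the sublanguage."""
    op = expr[0]
    if op == "rel":
        return expr[1]
    if op == "restrict":
        return restrict(evaluate(expr[1]), expr[2], expr[3])
    if op == "project":
        return project(evaluate(expr[1]), expr[2])
    raise ValueError(f"unknown operator: {op}")


# "Names of employees in department D1", stated as data, not hand-coded.
query = ("project", ("restrict", ("rel", employees), "dept", "D1"), ["name"])
print(evaluate(query))  # frozenset({(('name', 'Ann'),)})
```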

FP: And what did he say?

DM: He confirmed my understanding of the technical problem and stated that it was indeed his motivation. I walked away with the impression that Codd had a strong understanding of logic, and that his approach to the theoretical issues involved in the relational model was neither arbitrary nor ill-informed.

FP: Did your relationship with Dr. Codd develop further?

DM: Eventually it did. While on Wall Street, I helped bring Codd & Date into Goldman Sachs and got to know Sharon Codd a bit. During the period 1984-1988 I had a few opportunities to visit the Codd & Date offices, and had a few conversations with Ted. The Pacific Telesis (PacBell) project to implement the first digital telephony (ISDN) provisioning system and database design was initially a joint effort, with Codd & Date's Colin White providing the relational training to PacBell and Alternative Technologies providing the design and development as they pertained to the database. At some point around 1990 I arranged for Dr. Codd and Chris Date to join me for a day of consulting for a startup that was developing one of the first so-called "columnar" implementations of the relational model. Actually, it was what I called "domain oriented." But it wasn't until I was on the conference lecture circuit, and after I had a relationship with Chris Date, that I got to know Ted a little better. He was never more than what I would call a professional, though friendly, acquaintance.

FP: Did your relationship with Colin White develop further?

DM: Yes. During the period immediately after the PacBell disaster, Colin White and I began to work together. Colin was proprietor of C. J. White Consulting, editor/publisher of InfoDB, and new to consulting. He had just started to develop a series of in-depth analyses of DBMS products with the intent of publishing and selling them. As I was more familiar with so-called minicomputer and Unix based systems, he asked me to join him in authoring the reports. He also asked me to become an Associate Editor of InfoDB. Then, in 1990, we co-founded a new organization called Database Associates. The organization would act as a marketing umbrella for Colin White, Rich Finkelstein, Paul Winsberg, and myself. Database Associates was, if anything, too successful, and Colin eventually wanted it to replace our individual companies. Rich Finkelstein and I, who both had long-standing companies of our own, declined and left Database Associates.

FP: Colin introduced you to Chris Date, right?

DM: Yes. Colin had co-authored A Guide to DB2 with Chris. I had the idea of doing the same kind of thing with Chris, except the book would be A Guide to Sybase and SQL Server. I had a long standing consulting relationship with Sybase, had done the first port of the SQL Server client to the PC, and had argued forcefully with Sybase's Bob Epstein for the porting of SQL Server to the PC. As a result of that relationship, I knew the product's internals very well. At my request, Colin set up a meeting with Chris and I made my pitch, to which Chris agreed.

FP: How did your relationship with Chris evolve?

DM: As we worked together, I think Chris came to understand some of my strengths, especially product knowledge, real world database and application design and development, and – on the theoretical side – logic and certain aspects of mathematics. I started being one of his "go to" guys for these subjects. The result was that we not only collaborated on A Guide to Sybase and SQL Server, but I became a reviewer and sounding board for him. We then collaborated on other work, especially database design and view updating.

FP: You also worked with Date on POOD – the Principle of Orthogonal Design. Tell me about that.

DM: After years of work with large-scale, distributed, and commercial relational applications, I became (and still am) very interested in making database design more of a science than an art. When Chris contacted me about certain anomalies that resulted when he tried to apply his view updating rules, I realized that a certain set of three design principles I had created and followed in practice were not known to him, and that they explained the anomalies he was seeing. It was that discussion in late 1993 that introduced the concept of a formal, DBMS-"understandable" or computable, relation predicate. Chris helped me sharpen my thinking about one of the design principles, and it came to be called POOD. We then collaborated on the paper that introduced it. POOD and its two sibling principles were based on work I had done in semantics and data modeling. That work was applied to developing a technique for reversibly merging databases with distinct schemas, an extension of a problem known in the literature as schema migration. I had solved that problem for specific applications on Wall Street and found the principles served me and my clients well over the years.
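
For readers unfamiliar with the idea, here is a small, invented illustration of the kind of design problem the POOD addresses; it is a sketch only, not McGoveran's formal statement of the principle.

```python
# Invented illustration of the POOD's motivation, not its formal statement.
# Each relation has a predicate that a tuple must satisfy to belong in it.
# If two relation predicates can be satisfied by the same tuple, the design
# is not orthogonal, and an update through a view spanning both relations
# becomes ambiguous.

ann = {"name": "Ann", "salary": 55_000}


# Non-orthogonal design: the two predicates overlap for 50K <= salary < 60K.
def emps_50k_up(t):      # predicate of relation EMPS_50K_UP
    return t["salary"] >= 50_000


def emps_under_60k(t):   # predicate of relation EMPS_UNDER_60K
    return t["salary"] < 60_000


print(emps_50k_up(ann) and emps_under_60k(ann))  # True: which relation should hold Ann?


# Orthogonal design: mutually exclusive predicates, so every tuple has
# exactly one "home" relation and view updates can be mapped unambiguously.
def emps_below_60k(t):
    return t["salary"] < 60_000


def emps_60k_and_up(t):
    return t["salary"] >= 60_000


print(emps_below_60k(ann), emps_60k_and_up(ann))  # True False
```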

FP: What other database design problems have you tackled?

DM: The most important is the so-called missing information problem. I consider "missing information" to be a particularly bad misnomer. It is an umbrella term for several poorly analyzed problems, including operators that produce nulls, bad modeling of and support for types and subtypes, and confusing data and metadata. The series of papers Nothing from Nothing set out to explain my approach to missing information, but it appears the approach was not well understood. So I guess I have some more writing to do on the subject.
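
To give a flavor of the analysis, here is a minimal sketch of one commonly cited null-free tactic – decomposing so that each relation's predicate states exactly what is known. It uses an invented schema and is not necessarily McGoveran's own approach.

```python
# A commonly cited null-free tactic (not necessarily McGoveran's own
# approach, and with an invented schema): instead of a nullable salary
# column, decompose so that each relation's predicate states exactly what
# is known about an employee.

employees = {("E1", "Ann"), ("E2", "Bob"), ("E3", "Cid")}  # (emp_id, name)
salary_known = {("E1", 55_000), ("E2", 48_000)}            # (emp_id, salary)
salary_unrecorded = {("E3",)}                              # predicate: salary exists but is not yet recorded


def recorded_salaries(emp_id):
    """Zero or one facts; absence is plain set membership, not a NULL."""
    return [sal for eid, sal in salary_known if eid == emp_id]


print(recorded_salaries("E1"))  # [55000]
print(recorded_salaries("E3"))  # []  -- no fact recorded, no third truth value needed
```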

FP: In looking at your CV, I see something called the "Ordering Operator Calculus." What is that?

DM: Back in the 1970s I became interested in solutions to a number of problems that involve complex discrete systems. The more I looked, the more such systems I found. Examples include everything from complex computing systems to discrete manufacturing to quantum mechanics. While discrete mathematics certainly exists and is useful in studying these systems, in reality the foundations of traditional discrete mathematics borrow from or lean heavily on those of continuous mathematics. So I set out to find or develop a system of mathematics that was purely discrete. The more I studied the problem, the more I became convinced that systems normally studied with continuum mathematics – from linguistics to classical mechanics to general relativity and quantum mechanics – should be discrete. Early on, I found evidence that meaning itself, and not just information, is best understood as being discrete or quantized. Additionally, I found that a crucial concept that needed to be captured in a more useful way was ordering. I published the first abstract paper on the ordering operator calculus in 1982 and its initial applications to physics over the period 1984-1989. My work on view updating and adaptive transaction management has roots in the ordering operator calculus, but I've not published those details. Some of the work on database design – especially the Principle of Orthogonal Design and work on missing information – was also related to the ordering operator calculus and its relationship to formal systems of logic.

FP:  Speaking of the transaction management model you call "adaptive transaction management," can you provide a brief explanation of that work?

DM: My work with OLTP began about 1981. My experiences with the complexity and processing cost of guaranteeing consistency in distributed OLTP motivated me to find an alternative solution. Adaptive transaction management turns the traditional concept of a transaction on its head. Instead of forcing transactions in a distributed system to satisfy certain constraints in a rigid manner, you identify the constraints the transaction in fact satisfies and then make certain that the only data permitted to be combined is that produced by mutually consistent transactions. This permits some very interesting transaction processing and distributed database optimizations without throwing away the ACID requirements altogether. Among other possibilities, certain classes of concurrent transactions can share intermediate states in very controllable ways and without causing any loss of consistency.
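
A toy sketch of that idea, as described above, might look like the following. The names are invented and this is not the mechanism in the patent: each committed transaction records which constraints its writes actually satisfied, and data from two transactions may be combined only when both satisfied every constraint the combination requires.

```python
# Toy sketch of the idea described above (invented names; not the patented
# mechanism). Each committed transaction records which constraints its
# writes actually satisfied; data from two transactions may be combined only
# when both satisfied every constraint the combination requires.

from dataclasses import dataclass, field


@dataclass
class CommittedTxn:
    txn_id: str
    writes: dict                                              # e.g. {"acct_A": 90}
    satisfied: frozenset = field(default_factory=frozenset)   # constraint names


def combinable(t1: CommittedTxn, t2: CommittedTxn, required: frozenset) -> bool:
    """True only if both transactions satisfied every required constraint."""
    return required <= t1.satisfied and required <= t2.satisfied


t1 = CommittedTxn("T1", {"acct_A": 90},
                  frozenset({"balances_non_negative", "books_balanced"}))
t2 = CommittedTxn("T2", {"acct_B": 110},
                  frozenset({"balances_non_negative"}))

audit_report = frozenset({"balances_non_negative", "books_balanced"})
print(combinable(t1, t2, audit_report))    # False: T2 never checked books_balanced

live_dashboard = frozenset({"balances_non_negative"})
print(combinable(t1, t2, live_dashboard))  # True: safe to combine for this use
```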

FP: Has adaptive transaction management been implemented?

DM: Yes and no. It's been implemented in dedicated applications, but the kind of general mechanism described in my adaptive transaction management patent has yet to be implemented.

FP: What was your involvement with business process management systems or BPMS?

DM: Throughout the early 1990s I consulted on numerous data integration and application integration projects. Those projects were all plagued by process issues – issues that involve much more than just the transition constraints on a database – that were either overlooked or handled manually. I began to realize that data management and process management had – or should have – a lot in common. Business processes were changing more and more rapidly. We needed the ability to separate a logical or business process model from the technology of its physical implementation, managed by a "process engine." And this should be driven from the design of business process models. In addition, we would need process measurement and analytics in this proposed process management system to achieve a degree of real-time closed-loop control. So I introduced a set of requirements and a canonical architecture for what ultimately became known as business process management systems. The conceptual architecture of a BPMS is actually very similar to that of the Fasttrack system I designed back in the early 1980s.
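
A minimal sketch of that separation – a declarative process model interpreted by a generic engine, with each step emitting measurements for closed-loop analytics – could look like the following Python; all names are invented for illustration.

```python
# Minimal sketch, with invented names: the business process is declarative
# data (steps and transitions), a generic engine interprets it, and every
# step emits measurements that feed process analytics for closed-loop control.

import time

process_model = {
    "start":   {"next": "approve"},
    "approve": {"next": "fulfill"},
    "fulfill": {"next": None},
}

metrics = []  # (step, elapsed_seconds) pairs: raw material for analytics


def run(process, handlers, first_step="start"):
    """Generic process engine: the model changes; the engine does not."""
    step = first_step
    while step is not None:
        began = time.monotonic()
        handlers[step]()  # the physical implementation of this logical step
        metrics.append((step, time.monotonic() - began))
        step = process[step]["next"]


handlers = {
    "start":   lambda: print("order received"),
    "approve": lambda: print("order approved"),
    "fulfill": lambda: print("order fulfilled"),
}

run(process_model, handlers)
print(metrics)
```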

FP: Is this the work you did with HP and IBM on business process?

DM: Yes. In 1997 I worked with HP to help them repurpose an overly sophisticated workflow management engine and change it into one of the first BPMS products. To promote the concept and create a new category, several things were done. In 1999 I was approached by Bob Thomas, and with Tony Brown as editor and myself as technical editor we started the eAI Journal, which later became the Business Integration Journal. At the same time, I was approached by DCI to chair a new conference series on application integration. Using these opportunities, I wrote and published some of the first articles defining the subject of BPMS, and chaired conferences at which business process management was highlighted as one of three ways to achieve integration. We also founded a 60-member industry council called the Enterprise Integration Council. Starting in 2000, IBM contacted me to work with them, this time repurposing their Websphere MQ Workflow product and technology, later renamed Websphere Process Manager. I helped both HP and IBM push Gartner and other analyst groups to track a new BPM category, and ended up working with companies like Vitria, Candle, Fuego, Savvion and others to help shape both the BPM market and category.

FP: What are you working on now?

DM: Well, I'm finally getting a chance to do some of the writing I've had to put on hold for many years. It's been very frustrating over the last twenty to thirty years to introduce concepts in database theory, only to have to depend on other people to write about and try to explain them. And it's been frustrating for my colleagues as well. Over the next couple of years, I expect to finally produce a book on logic for databases, and to publish detailed papers on my research and approaches to database design, view updating, and missing information. My writing is a little rusty, but I'm hoping these works will have a positive influence on TTM. And I will continue to work on discrete foundations of physics and mathematics in the background. Having worked most of my career on large-scale distributed relational database applications, including Web and mobile, I'm still interested in helping turn database design into a science and transforming relational theory and practice. There is a lot of work to do helping people understand how and when relational theory pertains to hot topics like analytics, NoSQL, and so-called "big data."

FP: Thanks, David, and I look forward to your writings. You have a longstanding invitation to contribute to dbdebunk.




