bhaskar's blog

PIP - coming soon

FIS PIP on GT.M on Linux on x86 hardware is a complete FOSS stack with superb transaction processing throughput and unique functionality for extreme levels of business continuity. PIP will soon be released as free / open source software (GPL v3).

PIP is the infrastructure on which FIS Profile's financial applications are built. Until now, that infrastructure has been joined at the hip with the financial applications and has not been separately available. It is now being separated and will soon be available as FOSS. PIP includes a SQL engine, PSL (Profile Scripting Language, a lightweight object-oriented scripting language), a JDBC driver, and two IDEs - one built on Eclipse and another browser-based, driven by Tomcat on the server. Applications built with PIP can use either FIS GT.M or Oracle as the database engine, and in the future will be able to use other engines as well. Under the covers, PIP compiles PSL and SQL into M in two flavors - M code accessing an M database and M code calling the API of a relational database.
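
As a rough sketch of how an application might talk to PIP through its JDBC driver (the driver class name, connection URL, port, and ACCOUNT table below are illustrative assumptions, not the actual PIP interface), the standard java.sql API would apply:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class PipJdbcSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical driver class and URL; placeholders only --
            // the released PIP JDBC driver will document its own.
            Class.forName("com.fis.pip.jdbc.Driver");
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:pip://localhost:19200/profile", "user", "password");
                 Statement stmt = conn.createStatement();
                 // Hypothetical table. PIP's SQL engine compiles the query
                 // into M code, which either accesses an M database (GT.M)
                 // directly or calls the API of a relational engine such
                 // as Oracle.
                 ResultSet rs = stmt.executeQuery(
                         "SELECT ACCT_NO, BALANCE FROM ACCOUNT"
                         + " WHERE BALANCE > 1000")) {
                while (rs.next()) {
                    System.out.println(
                            rs.getString("ACCT_NO") + " "
                            + rs.getBigDecimal("BALANCE"));
                }
            }
        }
    }

Either way, the application sees standard SQL over JDBC; the choice between M storage and a relational engine stays under the covers.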

A core processing system is the legal system of record for a bank, and its single most mission-critical application. Profile, built on PIP, runs the largest real-time core processing system that we know to be live at any bank anywhere in the world, and has recently been successfully benchmarked on an x86_64 Linux platform with a database three times the size of that largest live system. GT.M is the database engine on which Profile is most widely deployed around the world as the legal system of record for many tens of millions of accounts.

PIP will be released on February 10, 2008, at the Southern California Linux Expo (http://www.socallinuxexpo.org/scale6x/conference-info/speakers/Bhaskar/) and will be downloadable from SourceForge (http://sourceforge.net/projects/pip).

-- Bhaskar

Software Systems Compared to Cities

[Earlier versions of this essay have previously appeared online.]

Why should a system that is working well need to be replaced? Successful software systems are like cities. There are cities in Europe that have been continuously inhabited for thousands of years. London or Rome today would be unrecognizable to Julius Caesar, yet the old cities were never abandoned and replaced - they were continuously redeveloped over the centuries. Although we like to discuss replacing major software systems because their quirks annoy us, perhaps we should think of those quirks as limitations to be lived with, just as, in these days of the automobile, we still deal with streets in Boston that were engineered for horses and wagons, but have no intention of razing and rebuilding downtown.

The prospect of "big-bang" conversions of large mission critical software systems gives CIOs ulcers just the way that the prospect of razing and rebuilding downtowns gives city fathers ulcers. [The exceptions are when cities are destroyed by war or natural disasters.]

In the not too distant future, a school of thought will likely develop to the effect that large software systems not only will not be replaced, but should not even be planned for replacement. Just as we tear down individual buildings and build new highways without razing the whole city, perhaps we should think not in terms of replacing large software systems, but in terms of a process of continuous modernization and renewal (with the software equivalent of urban decay resulting if money is not spent on upkeep when it is needed).

Instead of asking what the life expectancy of a mission-critical software system is, perhaps the first lesson is that it is more meaningful to ask what it takes to keep it healthy and contemporary on an ongoing basis.

Cities evolve. With a few notable exceptions, such as planned seats of government, cities are not greenfield creations. They start small, grow, and what they are good at changes over time.

So the second lesson for software is probably that mega projects that try to do everything are almost certainly doomed from the start. The history of large software projects is not encouraging, and the growing popularity of agile methods speaks to the higher success rate of evolving software in a series of many small steps versus giant seven-league strides.

Rome was not built in a day.

A third lesson from the analogy is that, since large applications evolve, they will always have aspects that are obsolete and awkward. If we love them, we must love them warts and all.

So, while this brave new always-on networked world does not make our software systems obsolete, it does present them with a choice: evolve, or stagnate into obsolescence.

Open standards

Software monoliths are no longer in vogue. Although an application may well be monolithic when it is first built, no application remains an island over time, and even the most complete, most monolithic application will sooner or later need to be part of a larger software ecosystem.

Any large software system consists of many parts that need to work together. For the parts to work together, the interfaces can be ad hoc (or point-to-point), or they can conform to standards. While there is a place for ad hoc interfaces in specialized situations, standards make the world run. It has been argued that the industrial revolution took off not when the steam engine was invented, but when standards emerged for mundane things like screw threads, so that we can go to a hardware store and purchase screws confident that they will fit when we get home.

There are two types of standards: proprietary standards and open standards.

Proprietary standards and the applications built around them are like walled gardens. They can be functional. They can be elegant. They can look beautiful. They can work well. But ultimately, proprietary standards are not owned by the community of those who use them. They are standards imposed by those who own them, and since the owners are profit-maximizing entities, once usage of a standard starts to proliferate, the owning organization can charge monopolistic rents to those who use it. For those who are hooked, a proprietary standard can be restrictive, addictive, expensive, and difficult to break away from.

In contrast, open standards are built by a broad consensus of those who use them and those who provide products and services, by technical experts and by those who hope to make a profit. They can be messy. The process of standardization can be protracted. But they work. They can be as restrictive and as difficult to break away from as proprietary standards. They can be expensive, but they cannot be overpriced because, as long as there are no barriers to competitive entry, prices are market prices rather than monopolistic rents. A good example of an open standard is TCP/IP. That a major vendor of proprietary standards recently attempted to hijack an open standards-setting process shows how much the market values open standards.

Standards can be formal, like the ISO-standard OpenDocument Format, or they can be informal and cultural - such as shaking hands with the right hand (unless you are a Boy Scout in uniform).

GT.M is an open architecture implementation of MUMPS, based on the philosophy of excelling at what it does while leveraging the underlying operating system for capabilities that the environment already provides well, through formal and cultural standards.
