There are many instances, both in nature and business, of the virtues of distributed systems as compared to monolithic systems. One of the most obvious is the rise of open-source software, as argued persuasively by Eric Raymond in The Cathedral and the Bazaar (available online).
He argues that “cathedrals” (hierarchical, well-organised companies that are the Western norm, e.g., IBM, Microsoft) will in the long run be defeated by “bazaars” (loosely federated groups of workers).
In the context of operating systems (the software that controls devices), and specifically of UNIX and Linux (which are what Eric was focusing on), this prophecy has largely come true. Microsoft, so dominant in the last century, has now lost its monopoly.
If you count smartphones and tablets as well, the newcomer operating system Android (from Google) now runs on as much as 60 per cent of the world’s devices, with Apple’s iOS running another 10 per cent; Android is built on Linux, and iOS descends from UNIX. Microsoft Windows’ share, once 90 per cent, is around 30 per cent.
A major difference is that UNIX/Linux are, fundamentally, distributed systems. The original design of UNIX was radically different from that of operating systems that existed before it, such as IBM’s monolithic OS/360. UNIX (and its open-source reimplementation, Linux) depends on many small pieces of code, each of which does one thing well, and which are loosely connected (technically, through “pipes”).
This structure means that arbitrarily complex systems can be built independently by distributed teams: each team has a “contract” with the others about sharing their functionality through well-defined interfaces. Furthermore, by putting the (reusable) pieces together in different ways, you are able to create different configurations: say, one optimised for number crunching, another for video editing, yet another for natural language processing.
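To make the idea concrete, here is a minimal sketch of a pipeline, written in Python for readability; the log file name and the ERROR filter are hypothetical, chosen only for illustration. Three small, single-purpose programs are chained together, each consuming the previous one’s output.

```python
import subprocess

# Shell equivalent of this pipeline: grep ERROR app.log | sort | uniq -c
# Each program does one small thing well; the pipe glues them together.
grep = subprocess.Popen(["grep", "ERROR", "app.log"],
                        stdout=subprocess.PIPE)
sort = subprocess.Popen(["sort"], stdin=grep.stdout,
                        stdout=subprocess.PIPE)
uniq = subprocess.Popen(["uniq", "-c"], stdin=sort.stdout,
                        stdout=subprocess.PIPE)
grep.stdout.close()  # let grep see a broken pipe if sort exits early
sort.stdout.close()
print(uniq.communicate()[0].decode())  # counts of each distinct error line
```

Swapping out any one stage (a different sort order, say) changes the pipeline’s behaviour without touching the other programs: that is the “contract” through well-defined interfaces at work.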
The rise of these systems was an invention, then, of the greatest relevance, but the story of how they were thwarted in their rise is an object lesson in the transition from invention to innovation. It is not obvious that if you build a better mousetrap, the world will come and buy it from you. Potentially radical inventions do not necessarily become disruptive innovations unless you work at it.
You have to actively market your invention; moreover, you need friends, an ecosystem that builds co-specialised assets along with you, before you are able to appropriate the value in your invention and turn it into an innovation, that is to say, something that has succeeded in the marketplace.
But that is a story for another time; let us look at some other instances of distribution. The Economist magazine recently noted a curious fact: both Linux and Amazon Web Services were born on the same day, 25 August, the former 25 years ago and the latter ten years ago. They are closely linked, in that it would not have been possible to create and support AWS had open-source software such as Linux not existed. However, Linux, as The Economist noted drily, will continue to plod along, while AWS is a $10 billion business, and very profitable.
What have AWS, and other cloud services such as Microsoft’s Azure and Google’s own offering, done? Cloud computing means you relinquish ownership of your computing infrastructure: you treat computing as a utility that arrives as per your demand, instead of owning a large computer centre. The “cloud” is not a single large monolithic data centre with arrays and arrays of computers; it is a constellation of such data centres all over the world.
You, as a user, view computing as an on-demand service, like electricity. You only pay for what you consume: computing power (CPU), storage or other resources. If you are a small company, you start off by consuming a small amount, and as you grow, you keep adding more: thus, capital costs for computing need no longer be a barrier to entry. Of course, in the rental model, you will probably pay more in the long run, but then it is an operating expense, not capital cost, and you get preferential tax treatment. Therefore, from the point of view of the user, distributed computing is efficient and cost-effective. And you don’t have to employ armies of system administrators, nor worry too much about the latest upgrades to hardware and software: someone else takes care of it.
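A back-of-envelope comparison makes the trade-off visible. The figures below are invented for illustration, not actual cloud prices:

```python
# Hypothetical numbers: buying a server outright vs. renting equivalent
# capacity from a cloud provider. Renting usually costs more over time,
# but it is pay-as-you-go and an operating expense rather than capital.
purchase_price = 10_000      # one-time capital cost (assumed)
monthly_rent = 350           # cloud bill for comparable capacity (assumed)

for months in (12, 36, 60):
    rent_total = monthly_rent * months
    cheaper = "rent" if rent_total < purchase_price else "buy"
    print(f"{months:>2} months: rent ${rent_total:>6,} "
          f"vs buy ${purchase_price:,} -> {cheaper} is cheaper")
```

On these made-up numbers, renting wins in the first year and buying wins over five years; but the renter never had to find $10,000 up front, which is precisely why capital cost ceases to be a barrier to entry.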
The associated software-as-a-service (SaaS) model is also helpful: instead of buying expensive software, you simply rent it. The provider will take care of the chores of keeping it up to date and fixing bugs centrally.
Of course, there are losers in this transition, too: of particular note are IT incumbents, including service providers, who have found demand for their services diminishing; hardware providers, who find their standard offerings undercut by cheap commodity products from no-name makers; and enterprise software providers, whose products are in competition with free/open-source products.
Thus, the evolution from centralised to decentralised systems in computing has hurt some of the giants of yesteryear, such as IBM, Microsoft, and Oracle, and enriched newcomers such as Google, Amazon and Facebook. Doubtless, these worthies will also be, in turn, shouldered aside by others with good ideas.
Now comes yet another field where the monolith is being replaced by the distributed: space exploration. In its “Technology Quarterly” of August 27, The Economist considers some new developments that are giving the space business a second wind: after the race to the moon, and the giant International Space Station, there has been a lull for a while. Perhaps the most exciting thing that has happened recently is India’s frugal Mangalyaan mission.
But there are two new things that are making the headlines: one is the success of private companies entering the launch business and promising space tourism as well; the other is the discovery that the nearest star, Proxima Centauri, has a planetary companion that’s roughly Earth-sized and might even be capable of hosting life. There is a spring in the step of space cadets, all of a sudden.
Space exploration is, in a sense, the epitome of centralised projects in its complexity (although, upon visiting Angkor Wat in Cambodia, it occurred to me that the management task of building that astonishing monument, without modern tools, would have approached the same level of complexity). A giant spaceship ascending to the skies in solitary splendour is evocative, as is that most iconic of monoliths, the black slab from the science-fiction classic 2001: A Space Odyssey.
But, despite the romance and glory associated with manned space flights, it has been distributed systems of unmanned satellites that have brought the greatest benefits to us. In some cases, they have become indispensable: for instance, the global positioning satellites that let us pinpoint our locations so precisely. In other cases, though, they have been spectacular failures: Motorola’s ambitious Iridium project, for instance, which envisaged 77 satellites offering instantaneous global communications.
The utility of satellites is well known: in fact, there is a virtual traffic jam up there (let us remember there are many military spy satellites circling us too). But what is new is the emergence of very small satellites known as “cubesats”. A conventional communication satellite may cost $100 million and is typically a standard structure: a cylinder several metres long and a metre in diameter (as in the systems built by SSL of Palo Alto, California).
But cubesats are much smaller: the standard definition of a 1U cubesat is a box 10 cm × 10 cm × 11.5 cm, with standard interfaces that allow them to be stacked like Lego bricks. Most of their innards are not custom electronics but standard components scavenged from, say, smartphones. Given the rapid rate of advance of these components, cubesats are becoming quite capable, and they cost around $100,000. Most of them will end up in low-earth orbit (LEO), a few hundred to roughly 2,000 km up, as compared with geostationary satellites at about 36,000 km.
What is interesting about these cubesats is twofold: they are versatile, and they can form a resilient network. Depending on the sensors on board, they can map the earth below; they can listen for ships’ and planes’ beacons (remember MH370?); they can collect scientific data. And the loss of one or two of them does not lead to a dramatic erosion in total system capability (always a plus for distributed systems), very much unlike the loss of a big satellite.
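The resilience argument can be quantified with a toy model. The failure probability below is made up; the arithmetic, not the number, is the point:

```python
from math import comb

p = 0.05   # assumed probability that any single craft fails (illustrative)
n = 100    # a swarm of 100 cubesats, each carrying 1/100th of the mission

# One big satellite: losing more than half the capability happens whenever
# the satellite itself fails, i.e. with probability p.
print(f"Monolith: P(lose > 50% capability) = {p:.4f}")

# Swarm: the same loss requires over 50 independent failures out of 100,
# which the binomial distribution makes vanishingly unlikely.
p_catastrophe = sum(comb(n, k) * p**k * (1 - p)**(n - k)
                    for k in range(51, n + 1))
print(f"Swarm:    P(lose > 50% capability) = {p_catastrophe:.2e}")
```

The expected capability of the two systems is the same, but the monolith carries a five per cent chance of losing everything at once, while the swarm degrades gracefully, one per cent at a time.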
Besides, there’s also the possibility of even smaller space probes. “Space dust”, they call it: tiny boards “3.5 cm on a side and weighing four grammes, each holding sensors, a processor, solar cells, a radio and a pair of coiled 10 cm whiskers”, says The Economist. These tiny little devices can study various phenomena and transmit the data back to earth.
Intrepid dreamers go further: by putting mirrors on them, such tiny craft could be “pushed” gently, using powerful lasers on the ground, and accelerated to speeds that are a notable fraction of the speed of light; who knows, some of them may one day reach Proxima Centauri b, the candidate exoplanet four light years away.
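The arithmetic of such a journey is simple enough. The cruise speed below is an assumption (of the order proposed by laser-sail concepts such as Breakthrough Starshot), not a figure from The Economist:

```python
# How long would a laser-pushed "space dust" probe take to reach
# Proxima Centauri? The distance is ~4.24 light years; the cruise
# velocity of 20% of light speed is an assumed figure for illustration.
distance_ly = 4.24
speed_fraction_of_c = 0.20

travel_years = distance_ly / speed_fraction_of_c
print(f"Travel time at {speed_fraction_of_c:.0%} of c: ~{travel_years:.0f} years")
print(f"Plus {distance_ly:.1f} more years for its radio signal to reach earth")
```

A round twenty-odd years to get there, and another four-plus for the news to come home: within a single career, which is what makes the dreamers dream.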
Thus, from projects up in the heavens, all the way to mundane computing tasks, the principle that distributed systems are often better than a single monolithic design seems to hold true. Extrapolating, it could well be the case that mono-anything is less efficient: monoculture crops, monopolies, monoglot Anglophones, monotheistic certainties, even monosodium glutamate, that which fools your brain into thinking something is tasty.