September 11, 2001

User Friendly?

An essay on the concept of User Friendliness

The growth in the use of the personal computer has brought into prominence a very important concept, user-friendliness, a term that was little used in the eras preceding this period. In the succeeding period there has hardly been a term more used and abused for various means and ends. Here we shall try to get a perspective on user-friendliness and ask some strange questions.

Also note that a major focus of this paper is the application of this term to the environments of Linux and Windows.

The early beginnings of the term user-friendly are shrouded in mystery. But what is known is that this word alone forms one of the biggest and most successful mantras for some of the biggest software companies around. So what is user-friendliness?

There is a lot of talk about making software more user friendly. Some pieces of software are user friendly, while others are not, right? Well, that seems to be the opinion most people hold. A common view seems to be that a program is user friendly if it has a graphical user interface, and otherwise it is not. But this is a very simplistic and short-sighted way of defining user-friendliness. All that graphical systems allow is friendliness, not user-friendliness.

What is the difference? The term "user-friendliness" contains the word user, which explicitly binds any definition of the term to another variable: the user. It means something friendly to the intended audience, and that is where the buck stops. User-friendly software is software that does not get in the way of its user, not software that can be used without reading a single line from the manual. All user-friendliness requires is fitness for purpose in the most unobtrusive manner possible.

It is not incumbent upon user friendly software to do the user's work for him. Nor is it incumbent upon it to provide animated paper clips that tear across your screen. User friendliness does not mean blandness of design, or lack of options.

One additional point must be taken care of before we accept this definition: the type of the audience. Obviously no user comes into this world with all the information about a piece of software built into him. He faces a learning curve for every piece of software he uses, just as for anything else he has learned. The user-friendliness of software can therefore be further split into friendliness towards the newbie user and friendliness towards the seasoned user. We shall accordingly describe the user-friendliness of software, with respect to the state of its audience, as being "newbie friendly" or "seasoned friendly".

We have in place a set of rules and guidelines, and of course a formal definition of the term user-friendliness. We shall use these tools to examine and understand some of the present concepts of user-friendliness that are prevalent in the community. We shall then proceed from that analysis to examine more important questions.

A few examples before we go on. Emacs is just another text editor, for a newbie. But any user of Emacs who has learnt the keystrokes will swear by it till death. Emacs is therefore user-friendly, as it does indeed make itself fit for its purpose and does it well; it is also more seasoned friendly. Take the MS-Paint program as another example. Almost every user has used it while taking his first faltering moves with the mouse. The program is simple and intuitive to use: a straightforward application for a straightforward purpose. We may argue about the purpose for which it claims fitness, but the point is that this program is classic newbie friendly. I am aware of the bias in choosing these examples, but they are only for illustration, and no inferences are to be drawn just yet.

The present concept of user-friendliness

There are a number of misconceptions about user-friendliness. I shall look at a few present attributes of software that are due to misapplication and misinterpretation of the concept of user-friendliness.

User friendliness has come to denote the GUI, especially when it comes to comparisons between Windows and Linux. "Linux is less user-friendly" because it does not have a GUI like Windows. While the "fact" itself is arguable, that is not the intent here. The fact of the matter is that Linux, like its predecessors in Unix, has a rich set of small and powerful tools, each of which achieves a particular objective and does so the best way it can. This tends to be quite newbie unfriendly, because of the large number of tools available and the variety of options they provide. But most of the tools that need to run on the command line are those that actually benefit from using the command line. These are also tools that are not really meant to be used by a newbie, unless he is interested in the tool itself. Finally, most of these tools come with standard help on the command line itself. Going by our earlier strict definition of fitness for purpose for the intended audience, all the command-line tools do indeed pass this criterion. They can safely be concluded to be user-friendly. If you find some of these tools difficult to use, consider for a second that they may not be meant for you. You might have another way of doing the same thing that is more friendly towards you.
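A sketch of that "small, powerful tools" philosophy on the command line (the input text is illustrative; each of these tools also documents itself via `--help` or its man page):

```shell
# Find the most frequent word in a stream of text by chaining
# single-purpose tools: each one does one small job well.
printf 'apple banana apple cherry apple banana\n' |
tr ' ' '\n' |  # split the line into one word per line
sort |         # group identical words next to each other
uniq -c |      # collapse each group into "count word"
sort -rn |     # order by count, largest first
head -n 1 |    # keep only the top entry
awk '{ print $1, $2 }'   # strip uniq's padding: prints "3 apple"
```

None of these tools knows about the others; the pipe is what composes them into a word-frequency counter.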

Another common view of user-friendliness has been "uniformity": similarity of interface, and similar-looking names, so that users do not take time to adapt to the software. Although this concept is admirable per se, an application of our rules makes us think otherwise. A program is required to be fit for a particular purpose, which means so should its user interface. Any good interface should be intuitive for the purpose for which it has been built. And once this is done I see no reason for "uniform interfaces" and "ease of getting used to new interfaces". If you cannot make a spreadsheet look like a word processor, the concept of uniformity does not and cannot exist. In fact I will go so far as to say that uniformity of interfaces has never existed, was a product of the PR team rather than the developers, and was in fact a cover for the GUI's inability to give developers an easy way of creating new interfaces.

An aside here. The concept of similar interfaces has been abandoned by all the major software developers. The present term is "intuitive" interfaces. The reason is obvious. When a firm spends a lot of money developing software, it would like to make the software distinctive and have top-of-mind recall among its customers. The interface is the only way of doing that. Given the limitations that such uniformity imposes, the companies have obviously opted out. So not only is the concept idiotic, it does not even exist, except in PR department briefs.

User friendliness has also come to mean the aggregation and integration of multiple functionalities into single monolithic all-powerful programs. This is one of the great myths of personal computing. There has never been a program that is all-powerful and integrated at the same time while also being extremely powerful, safe and easy to use. Perceptions of user friendliness have always been at loggerheads over which functionalities to include in a single program. Making a single huge program also means providing a lot of configuration options, which is considered not to be user-friendly. Hence, given the functionality in an integrated program, there are inevitably a number of ways to do all of it better with smaller, more focused programs. This has indeed spawned a whole industry offering tweaks and hacks for well-known software to extend and increase its functionality. Integration is newbie friendly and seasoned unfriendly.

A by-product of user friendliness coupled with the closed software model has been a stress on hiding the internals of the software from the user. This is ostensibly to protect the user from the program, and to make sure he is not intimidated by the software in any way. In fact it is more to hide the shoddy work of the programmer, hide all flaws under the hood, keep the user uninformed, and keep that poor idiot permanently dependent on the service department and new product updates.

One really absurd interpretation of user friendliness is the perceived need to withhold information about what the program is doing at any given time. It is considered acceptable for a program to apparently freeze up rather than report what it is trying to accomplish. On the contrary, user friendly software should keep the user informed about what it is doing. Normally the user will ignore all that information, but it becomes absolutely invaluable in times of error. Such information is very useful in diagnosing faults, and for first aid. With programs denying the user this knowledge, he is bound to support and service for salvation.
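A minimal sketch of that principle (the task and file names are hypothetical): every step narrates itself on stderr, so normal output stays clean, yet when something fails the last message pinpoints where.

```shell
# Back up a file, reporting each step on stderr. The messages are
# easy to ignore when all goes well, and invaluable when it doesn't.
backup() {
    src=$1
    dst=$2
    echo "backup: copying $src to $dst" >&2
    if ! cp -- "$src" "$dst"; then
        echo "backup: FAILED while copying $src" >&2
        return 1
    fi
    echo "backup: verifying the copy" >&2
    cmp -s -- "$src" "$dst" || { echo "backup: FAILED verification" >&2; return 1; }
    echo "backup: done" >&2
}

printf 'important data\n' > notes.txt
backup notes.txt notes.bak && echo "ok"
```

Redirecting the narration to stderr is the key choice: it keeps stdout free for the program's real output while still leaving a diagnostic trail.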

One final mistaken measure of user friendliness is the lack of information about how the program works. Modern OSes provide a number of ways of achieving the same objective. The absolute lack of technical information in the help, and the limiting of help files to "click here" information, is hardly friendly. It may be argued that such information is not newbie friendly. That really depends on the capabilities of the newbie, and on what is achieved by not including it at all. A real I-dont-care-a-damn newbie will never look into it anyway, even if it were included. But not including it actually makes the program unfriendly, as it takes away crucial information about the fitness for purpose of the software.

Evaluation of the need

Ask any layman who uses computers what he believes to be the most important requirement of a computer system. More often than not he will come up with the clichéd answer: user-friendliness. But should it be? In other words, why should computer systems be user friendly?

This may seem like blasphemy, but hear me out. The fashionable thing to do today is to make things user friendly. This has, on one hand, led to the development of click-and-do interfaces. It has also, on the other hand, led to the excommunication of geek-speak from the products of computer science. I need to dwell upon this for a little perspective on the path my reasoning is going to follow.

Throughout man's history we have seen a number of sciences grow and develop. Without fail we have seen each science develop its own language. Practitioners say it allows easy and quick communication. Sceptics say it is there to protect the practitioners of the science from the laymen. No matter what the reason, the fact remains that this particular stream, computer science, is being denied the use of its own language in its own products.

The point is simple. Computer science is just another stream of science. Practicing it needs just the same amount of learning and preparation that any other stream of science needs. That it has the ability to reach the masses should not take away from the fact that it deserves the respect that any other branch of engineering gets. Its ease of use should not be held against it.

To take complete advantage of this argument it is necessary, as in any other branch of engineering, to identify the various components of the user populace. We shall split the user populace into two categories. The first we shall call the "dummies", after the famous set of user manuals that flooded the market. These are users who are primarily concerned not with the product, but only with what it does for them. These are the users who should be given the whole "ease of use" benefit. The titles these users run are primarily end-user software: word processors, spreadsheets, multimedia applications, browsing and P2P applications. This will also include the large number of applications for other fields of study. All software used for data processing by members of other branches of science, engineering and even art comes into this category. This therefore forms not only the biggest segment; it is also the most elastic and responsive to the user-friendliness of software.

Contrary to common perception, this will also be the toughest segment to program for. Instead of the sorry software that is sold to this segment today, there should be real user-friendliness built in. There needs to be a lot of improvement in the software, which actually means not following the present norms. The following is a set of guidelines for the kind of software that is expected to meet the needs of the dummies.

  • Software must be focused, and accomplish more with little effort, especially when it comes to engineering applications
  • The software must be transparent to the user about the actual implementation of its various options. It is neither necessary nor desirable for the software to prevent the user from doing things differently, in a way he desires.
  • The software must keep the user in the loop, communicating with the user through visual and audio signals that denote the function being performed. The signals should be subtle enough to ignore. For example, the status bar is a great place to give a lot of information about what the program is doing.
  • While it may be required to keep the initial configuration of the program simple, no attempt must be made to reduce the option set available to the user. Advanced configuration options are a must.
  • Rather than keep the interface standard, stress should be on keeping the interface simple, uncluttered and intuitive.
  • Documentation is important. Just as it is desirable to keep the configuration simple without reducing the available options, advanced information and advanced implementation details should be discussed. Any issues the users may face must be documented. If a known bug exists, it should be listed. Any warnings to the user must be explicit and easy to understand.
  • The user must forever be in control of his own machine. Rather than annoying OK pop-ups, it is more desirable to provide logs, keep track of changes performed, and give the advanced user the power to see and undo any changes made to the system.

Then we have the "power-users". These are the users who are either directly related to the branch of Computer Science, or those who are well versed with the usage of software. The kind of software this segment will be using will be the high end server level software, like web servers, SQL servers etc. This will also include those from the dummies who have gained an insight into the working of their own software and are willing to experiment and learn more. This is a set of guidelines for this kind of software.

  • These users are not newbies and should not be assumed to be the lowest rung of the user populace. The interfaces should be designed using common configuration themes.
  • Administration should necessarily be centralized and be powerful and flexible.
  • Information should never be hidden from these users. Configuration options should never be limited to a few option sheets. It is strongly advised to follow the configuration file method of setting options. The Administration console may be a front end for a limited set of options. If the console can list all the options it is welcome, but none should be dropped because of lack of space.
  • Alternate methods for similar tasks may be optional, but no attempt must be made to make them transparent. The user should have the power to choose.
  • Documentation should be complete, easily accessible and also include relevant technical and implementation information.
  • Log files are a necessity, and so are front-end analyzers for those log files.
  • The administrator should forever be in control of the machine, under all circumstances.
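The "configuration file method" in the guidelines above can be sketched with a hypothetical server.conf (file name and option names are illustrative): every option lives in plain text, so nothing is limited by what an administration console chooses to surface.

```shell
# Write a hypothetical configuration file. Nothing here is hidden:
# a console front end might only surface "port", but the rest stays
# visible and editable to anyone who opens the file.
cat > server.conf <<'EOF'
port = 8080
max_connections = 200
# Advanced options a graphical console might not list:
worker_threads = 16
tcp_keepalive = yes
EOF

# Any standard tool can query or change it -- no special console
# required. Here we read one option back (prints "8080").
awk -F' *= *' '$1 == "port" { print $2 }' server.conf
```

Because the file is the authoritative store, the console can remain a convenient front end for a subset of options without ever becoming a limit on the administrator.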

Having defined what we expect from the software, we shall answer the first question, why should computer systems be user friendly?

The only reason computer systems should be user-friendly is to perform their intended function, for their intended audience. True programs are written by correctly identifying the intended audience and the function in equal proportion. No software can be called user-friendly if it chooses to ignore either of these two aspects. No concept of friendliness makes sense without both these variables defined; user-friendliness does not exist in a vacuum.

Windows and Linux

Based on the discussion above, how do Linux and Windows measure up as working environments? We shall look at the two environments, one supposedly user friendly and the other supposedly user unfriendly. We shall then dispel some myths and evaluate their positions.

Windows enjoys the position of power in the personal computer segment. This has been to a great extent due to its first-mover advantage. The perception that Windows is user-friendly also had a very important role in this. But how did Windows approach user-friendliness? And is this approach justified?

Windows assumed all of its users to be a bunch of morons. To give credit, it assumed that its users do not want to spend any time at all trying to know the system they are using. This may be true to an extent, but it is surely flawed. Although Microsoft has become the biggest player in the PC segment, it has surely lost out in the more advanced markets, say the server segment.

Microsoft, through Windows, has been highly user focused. All of its decisions have been end-user focused, and this end user has been the common man, one who does not want to learn about the tools he uses. This is brilliant as marketing strategy, but bad strategy for building software. They say that the problem with Apple was that it hired engineers to do the marketing. The problem with Microsoft has been that it hired marketing guys to do the development. That, in a few words, is it. That is the reason for a decent user interface, but no intuitive layout and pathetic functionality. In its desperation for user acceptance it forgot that software is defined by its functionality as well. So we have software that conforms closely to the standards it had set for user-friendliness, while the functionality has been sacrificed. It is possible to rant about this at length, but a quick look at the reasons for Microsoft bashing does suggest as much.

Microsoft's focus has also been on the simplicity of interface that all its products are known for. So what does Microsoft gain by this? By making a simple, uniform interface, it prevents product differentiation. It prevents people from differentiating between competing products, handing the choice among those products back to the developer of the OS itself, that is, Microsoft.

So is this software user-friendly? According to our definition it is not; it is flawed. The software may be newbie friendly, but it is surely not completely user-friendly software, because it does not give the function of the software enough say in deciding the design of the software itself.

What about Linux? It can be viewed as two things: first, Linux with its Unix roots, as a server platform; and then, in its new avatar, as a desktop alternative.

Unix has been the most powerful and widely used OS of the past. It is incredibly stable and powerful, as many testimonies will attest. But it had absolutely nothing to show for user friendliness. To use Unix one had to climb a vast learning curve, which obviously gave it absolutely low newbie friendliness. On the other hand, it had a lot to show for friendliness in its focus on functionality and power of use.

But things are changing with the new focus on desktop users, under which Linux has developed a number of things. Linux has some of the most powerful and easy-to-use window managers. It has graphical front ends for all the products a simple user wants. It also has easy-to-use products for all kinds of users. Products that are described as notoriously difficult are actually intuitive, and they come with a tremendous amount of documentation.

Linux is special in that it has products for all levels of users, and it has no barricades against getting more information yourself. Information is the most important thing in a Linux system. All the supposedly cryptic commands are not meant for everyday users, so how can one use them to dismiss Linux as a desktop alternative? Linux is more than a desktop alternative alone; it is a server that can be more friendly than a desktop OS.

So is Linux user-friendly? Not really. It does lack many of the easy-to-use interfaces, and it surely is newbie unfriendly. But the point to note is the focus: it is not on either the program or the user alone, it is on both. Hence in the long run it is Linux alone that can have any truly user-friendly programs. If I were a punter, I would put my money on the Linux horse. It may be slow now, but it will only get faster.

Document Changes
September 11, 2001: Initial publishing of the article.
April 02, 2009: Spell check and cosmetic changes.

August 24, 2001

Analysis and Design of Reinforced Concrete Chimneys

Following are the entire contents of my B.Tech project, submitted in Civil Engineering.

The paper deals with the various loads associated with the analysis of Reinforced Concrete Chimneys, with the methodologies involved in estimating those loads, and finally with the methods of design that account for the effects of these loads.


The present thesis deals with the analysis and design aspects of Reinforced Concrete Chimneys. It reviews the various load effects that are incident upon tall free-standing structures such as chimneys, and the methods for estimating them using various codal provisions. Various loads are incident upon a chimney: wind loads, seismic loads, temperature loads, etc. The codal provisions for the evaluation of these have been studied and applied. A comparison has also been made between the values of these load effects obtained using the procedures outlined by the various codes.

The design strength of the chimney cross sections has also been estimated. Design charts have also been prepared that can be used to ease the process of the design of the chimney cross sections and the usage exemplified.

A typical chimney of 250 m has been analyzed and designed using the processes already outlined. Drawings have been prepared for the chimney, and its foundation has been designed as well.

List of PDF files

Document Changes
August 24, 2001: First published.

August 23, 2001

Why I don't trust Microsoft: 'smart tags'

A rant on Microsoft, about their smart tags.

Edit: As it turns out, the annoyances of these smart tags have not entirely gone away, only morphed into a cross-browser technology. The difference is that now they are under the webmaster's control and generating income for them.

What follows is not my article; the real author's name is given below. I got it in a mail from our local LUG and put it up so that I could refer others to it. If anyone knows the true source of the article, please write to me. I will be more than glad to put up a credit.

Now Microsoft has come along with a "brilliant" idea. They want to piggyback their own selected content on top of your work. The idea is to have their products (such as Internet Explorer and the Office suite) scan web pages and documents for keywords and phrases known to Microsoft. Any of these that are found would be underlined with a special purple "squiggle" to show that they are "smart tags".

Anyone viewing the page could then click on the smart tag and be transported to a Microsoft web site for more information. For example, you could write a web page about the Grand Canyon, and the phrase "Grand Canyon" could be underlined, allowing your visitors to check out the Expedia.Com page about how to book travel to the area.

Why does Microsoft want to do this? It's really very simple - to make an incredible amount of money. Look at it this way, Microsoft suddenly would have at their disposal every single document viewed with a new Microsoft product as a potential advertisement. Wow. That's power. No, this is an understatement of incredible magnitude. This is more than power - this is the harnessing of everyone's creative energy into a huge global advertising tool. It totally staggers the imagination.

You could be looking at a newspaper site, reading an article about train travel, and click on numerous links to Microsoft sites (and presumably third party sites which paid Microsoft for the privilege) selling train related products and services. If you read a classified ad on that same newspaper site selling an automobile, the word "Cadillac" could be underlined with a smart tag linking to a Cadillac dealer.

The tags are added dynamically to web pages by the browser without the permission of the person who created the pages (the webmaster or author). While strictly speaking this might not violate copyright laws (though it might be considered vandalism), it sure is rude. In fact, most people would consider it highly unethical.

As an example, suppose you bought a book through a book club. Before it was shipped to you, someone opened the book and examined every single page, adding comments here and there about how you could purchase this or get more information about that. You would be very annoyed if you were the author, you'd probably be livid if you were the publisher of the book, and you'd almost certainly return it if you were the customer.

Carefully crafted web pages whose look and feel has been lovingly built for countless hours by dedicated designers, authors, artists and webmasters would be randomly covered with trash by a company intent on siphoning away visitors to their own sites and pages.

And what about the problem of inappropriate content? Suppose you had a site which was against animal cruelty, yet Smart Tags went ahead and added to your pages links to other sites which sold muzzles for horses? You wouldn't like that very much, would you?

Another problem is that Smart Tags are "opt-out". This means the tags are inserted unless you (the webmaster or the user) indicate that you do not want them. Opt-Out is the preferred method of removal for many advertisers because they understand that most people will not bother to remove themselves from the list. Opt-in is the preferred method of most consumers because then they receive only what they have requested.

Webmasters can keep smart tags from working on their site by including a special "opt-out" metatag in the header of each and every page. I highly recommend that all webmasters include this tag to prevent smart tags from operating.

<meta name="MSSmartTagsPreventParsing" content="TRUE">
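One way a webmaster might verify that every page actually carries the opt-out tag (the directory and file names are illustrative; `grep -L` lists files that lack the pattern):

```shell
# Create two sample pages: one with the opt-out tag, one without.
mkdir -p site
cat > site/good.html <<'EOF'
<html><head>
<meta name="MSSmartTagsPreventParsing" content="TRUE">
</head><body>Protected page</body></html>
EOF
cat > site/bad.html <<'EOF'
<html><head></head><body>Unprotected page</body></html>
EOF

# List every HTML file missing the tag, i.e. still needing a fix.
# Expected here: site/bad.html
grep -rL 'MSSmartTagsPreventParsing' site --include='*.html'
```

Run against the real document root, this gives a quick to-do list of pages that would still be parsed for smart tags.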

As soon as Smart Tags appeared in a beta release of Windows XP, the furor began. It was awesome to see. Microsoft was hit from all sides by just about everyone, because their intentions were so transparent and so blatantly monopolistic that even the most conservative could see what they were up to. The dangers caused a flood of protests to be received by the giant company, so many that Microsoft was forced to remove the feature from their products.

"As a result of smart tags in beta versions of Windows XP and IE, we received lots of feedback, and have realized that there is a need to better balance the user experience with the legitimate concerns of content providers and web sites," Microsoft said in a statement on June 28th, 2001.

Keep an eye on Microsoft, however, because they also added, "Microsoft remains committed to this type of technology, and will work closely with content providers and partners in the industry in the coming months to further refine how it can be used."

by Richard Lowe Jr.

Document Changes:
August 23, 2001: First version published

August 01, 2001

Do you need Linux? addendum

The following are some nostalgic texts, taken from the Internet, that show a history of Linux. Please do check the credits at the end.

Note: The following text was written by Linus on July 31 1992. It is a collection of various artifacts from the period in which Linux first began to take shape.

This is just a sentimental journey into some of the first posts concerning linux, so you can happily press Control-D now if you actually thought you'd get anything technical.

From: torvalds@klaava.Helsinki.FI (Linus Benedict Torvalds)
Newsgroups: comp.os.minix
Subject: Gcc-1.40 and a posix-question
Message-ID: <1991Jul3.100050.9886@klaava.Helsinki.FI>
Date: 3 Jul 91 10:00:50 GMT
Hello netlanders,
Due to a project I'm working on (in minix), I'm interested in the posix standard definition. Could somebody please point me to a (preferably) machine-readable format of the latest posix rules? Ftp-sites would be nice.

The project was obviously linux, so by July 3rd I had started to think about actual user-level things: some of the device drivers were ready, and the harddisk actually worked. Not too much else. Just a success-report on porting gcc-1.40 to minix using the 1.37 version made by Alan W Black & co.

Linus Torvalds
PS. Could someone please try to finger me from overseas, as I've installed a "changing .plan" (made by your's truly), and I'm not certain it works from outside? It should report a new .plan every time.

Then, almost two months later, I actually had something working: I made sources for version 0.01 available on nic sometimes around this time. 0.01 sources weren't actually runnable: they were just a token gesture to arl who had probably started to despair about ever getting anything. This next post must have been from just a couple of weeks before that release.

From: torvalds@klaava.Helsinki.FI (Linus Benedict Torvalds)
Newsgroups: comp.os.minix
Subject: What would you like to see most in minix?
Summary: small poll for my new operating system
Message-ID: <1991Aug25.205708.9541@klaava.Helsinki.FI>
Date: 25 Aug 91 20:57:08 GMT
Organization: University of Helsinki
Hello everybody out there using minix - I'm doing a (free) operating system (just a hobby, won't be big and professional like gnu) for 386(486) AT clones. This has been brewing since april, and is starting to get ready. I'd like any feedback on things people like/dislike in minix, as my OS resembles it somewhat (same physical layout of the file-system (due to practical reasons) among other things). I've currently ported bash(1.08) and gcc(1.40), and things seem to work. This implies that I'll get something practical within a few months, and I'd like to know what features most people would want. Any suggestions are welcome, but I won't promise I'll implement them :-)

Linus (
PS. Yes - it's free of any minix code, and it has a multi-threaded fs. It is NOT protable (uses 386 task switching etc), and it probably never will support anything other than AT-harddisks, as that's all I have :-(.

Judging from the post, 0.01 wasn't actually out yet, but it's close. I'd guess the first version went out in the middle of September -91. I got some responses to this (most by mail, which I haven't saved), and I even got a few mails asking to be beta-testers for linux. After that just a few general answers to quesions on the net:

From: torvalds@klaava.Helsinki.FI (Linus Benedict Torvalds)
Newsgroups: comp.os.minix
Subject: Re: What would you like to see most in minix?
Summary: yes - it's nonportable
Message-ID: <1991Aug26.110602.19446@klaava.Helsinki.FI>
Date: 26 Aug 91 11:06:02 GMT
Organization: University of Helsinki
In article <> jkp@cs.HUT.FI(Jyrki Kuoppala) writes:
>> [re: my post about my new OS]
> >Tell us more! Does it need a MMU?
Yes, it needs a MMU (sorry everybody), and it specifically
needs a 386/486 MMU (see later).
> >>PS. Yes - it's free of any minix code, and it has a multi-threaded fs.
>>>It is NOT protable (uses 386 task switching etc)
> >How much of it is in C? What difficulties will there be in porting?
>Nobody will believe you about non-portability ;-), and I for one would
>like to port it to my Amiga (Mach needs a MMU and Minix is not free).
Simply, I'd say that porting is impossible. It's mostly in C, but most people wouldn't call what I write C. It uses every conceivable feature of the 386 I could find, as it was also a project to teach me about the 386. As already mentioned, it uses a MMU, for both paging (not to disk yet) and segmentation. It's the segmentation that makes it REALLY 386 dependent (every task has a 64Mb segment for code & data - max 64 tasks in 4Gb. Anybody who needs more than 64Mb/task - tough cookies). It also uses every feature of gcc I could find, specifically the __asm__ directive, so that I wouldn't need so much assembly language objects. Some of my "C"-files (specifically mm.c) are almost as much assembler as C. It would be "interesting" even to port it to another compiler (though why anybody would want to use anything other than gcc is a mystery). Unlike minix, I also happen to LIKE interrupts, so interrupts are handled without trying to hide the reason behind them (I especially like my hard-disk-driver. Anybody else make interrupts drive a state- machine?). All in all it's a porters nightmare.
>As for the features; well, pseudo ttys, BSD sockets, user-mode
>filesystems (so I can say cat /dev/tcp/,
>window size in the tty structure, system calls capable of supporting
>POSIX.1. Oh, and bsd-style long file names.
Most of these seem possible (the tty structure already has stubs for window size), except maybe for the user-mode filesystems. As to POSIX, I'd be delighted to have it, but posix wants money for their papers, so that's not currently an option. In any case these are things that won't be supported for some time yet (first I'll make it a simple minix- lookalike, keyword SIMPLE).

Linus
PS. To make things really clear - yes I can run gcc on it,and bash, and most of the gnu [bin/file]utilities, but it's not very debugged, and the library is really minimal. It doesn't even support floppy-disks yet. It won't be ready for distribution for a couple of months. Even then it probably won't be able to do much more than minix, and much less in some respects. It will be free though (probably under gnu-license or similar).

Well, obviously something worked on my machine: I doubt I had yet gotten gcc to compile itself under linux (or I would have been too proud of it not to mention it). Still before any release-date.
Then, October 5th, I seem to have released 0.02. As I already mentioned, 0.01 didn't actually come with any binaries: it was just source code for people interested in what linux looked like. Note the lack of announcement for 0.01: I wasn't too proud of it, so I think I only sent a note to everybody who had shown interest.

From: torvalds@klaava.Helsinki.FI (Linus Benedict Torvalds)
Newsgroups: comp.os.minix
Subject: Free minix-like kernel sources for 386-AT
Message-ID: <1991Oct5.054106.4647@klaava.Helsinki.FI>
Date: 5 Oct 91 05:41:06 GMT
Organization: University of Helsinki
Do you pine for the nice days of minix-1.1, when men were men and wrote their own device drivers? Are you without a nice project and just dying to cut your teeth on a OS you can try to modify for your needs? Are you finding it frustrating when everything works on minix? No more all- nighters to get a nifty program working? Then this post might be just for you :-) As I mentioned a month(?) ago, I'm working on a free version of a minix-lookalike for AT-386 computers. It has finally reached the stage where it's even usable (though may not be depending on what you want), and I am willing to put out the sources for wider distribution. It is just version 0.02 (+1 (very small) patch already), but I've successfully run bash/gcc/gnu-make/gnu-sed/compress etc under it.
Sources for this pet project of mine can be found at ( in the directory /pub/OS/Linux. The directory also contains some README-file and a couple of binaries to work under linux (bash, update and gcc, what more can you ask for :-). Full kernel source is provided, as no minix code has been used. Library sources are only partially free, so that cannot be distributed currently. The system is able to compile "as-is" and has been known to work. Heh. Sources to the binaries (bash and gcc) can be found at the same place in /pub/gnu. ALERT! WARNING! NOTE! These sources still need minix-386 to be compiled (and gcc-1.40, possibly 1.37.1, haven't tested), and you need minix to set it up if you want to run it, so it is not yet a standalone system for those of you without minix. I'm working on it. You also need to be something of a hacker to set it up (?), so for those hoping for an alternative to minix-386, please ignore me. It is currently meant for hackers interested in operating systems and 386's with access to minix. The system needs an AT-compatible harddisk (IDE is fine) and EGA/VGA. If you are still interested, please ftp the README/RELNOTES, and/or mail me for additional info. I can (well, almost) hear you asking yourselves "why?". Hurd will be out in a year (or two, or next month, who knows), and I've already got minix. This is a program for hackers by a hacker. I've enjouyed doing it, and somebody might enjoy looking at it and even modifying it for their own needs. It is still small enough to understand, use and modify, and I'm looking forward to any comments you might have. I'm also interested in hearing from anybody who has written any of the utilities/library functions for minix. If your efforts are freely distributable (under copyright or even public domain), I'd like to hear from you, so I can add them to the system. I'm using Earl Chews estdio right now (thanks for a nice and working system Earl), and similar works will be very wellcome. 
Your (C)'s will of course be left intact. Drop me a line if you are willing to let me use your code.
PS. to PHIL NELSON! I'm unable to get through to you, and keep getting "forward error - strawberry unknown domain" or something.

Well, it doesn't sound like much of a system, does it? It did work, and some people even tried it out. There were several bad bugs (and there was no floppy-driver, no VM, no nothing), and 0.02 wasn't really very useable.
0.03 got released shortly thereafter (max 2-3 weeks was the time between releases even back then), and 0.03 was pretty useable. The next version was numbered 0.10, as things actually started to work pretty well. The next post gives some idea of what had happened in two months more...

From: torvalds@klaava.Helsinki.FI (Linus Benedict Torvalds)
Newsgroups: comp.os.minix
Subject: Re: Status of LINUX?
Summary: Still in beta
Message-ID: <1991Dec19.233545.8114@klaava.Helsinki.FI>
Date: 19 Dec 91 23:35:45 GMT
Organization: University of Helsinki
In article <>
(Miquel van Smoorenburg) writes:
>Hello *,
> I know some people are working on a FREE O/S for the 386/486,
>under the name Linux. I checked now and then, to see what was >happening. However, for the time being I am without FTP access so I don't
>know what is going on at the moment. Could someone please inform me about it?
>It's maybe best to follow up to this article, as I think that there are
>a lot of potential interested people reading this group. Note, that I don't
>really *have* a >= 386, but I'm sure in time I will.
Linux is still in beta (although available for brave souls by ftp), and has reached the version 0.11. It's still not as comprehensive as 386-minix, but better in some respects. The "Linux info-sheet" should be posted here some day by the person that keeps that up to date. In the meantime, I'll give some small pointers. First the bad news: - Still no SCSI: people are working on that, but no date yet. Thus you need a AT-interface disk (I have one report that it works on an EISA 486 with a SCSI disk that emulates the AT-interface, but that's more of a fluke than anything else: ISA+AT-disk is currently the hardware setup)

As you can see, 0.11 had already a small following. It wasn't much, but it did work.

- still no init/login: you get into bash as root upon bootup.

That was still standard in the next release.

- although I have a somewhat working VM (paging to disk), it's not ready yet. Thus linux needs at least 4M to be able to run the GNU binaries (especially gcc). It boots up in 2M, but you cannot compile.

I actually released a 0.11+VM version just before Christmas -91: I didn't need it myself, but people were trying to compile the kernel in 2MB and failing, so I had to implement it. The 0.11+VM version was available only to a small number of people that wanted to test it out: I'm still surprised it worked as well as it did.

- minix still has a lot more users: better support.
- it hasn't got years of testing by thousands of people, so there are probably quite a few bugs yet. Then for the good things..
- It's free (copyright by me, but freely distributable under a very lenient copyright)

The early copyright was in fact much more restrictive than the GNU copyleft: I didn't allow any money at all to change hands due to linux. That changed with 0.12.

- it's fun to hack on.
- /real/ multithreading filesystem.
- uses the 386-features. Thus locked into the 386/486 family, but it makes things clearer when you don't have to cater to other chips.
- a lot more... read my .plan. /I/ think it's better than minix, but I'm a bit prejudiced. It will never be the kind of professional OS that Hurd will be (in the next century or so :), but it's a nice learning tool (even more so than minix, IMHO), and it was/is fun working on it.
---- my .plan --------------------------
Free UNIX for the 386 - coming 4QR 91 or 1QR 92. The current version of linux is 0.11 - it has most things a unix kernel needs, and will probably be released as 1.0 as soon as it gets a little more testing, and we can get a init/login going. Currently you get dumped into a shell as root upon bootup. Linux can be gotten by anonymous ftp from '' ( in the directory '/pub/OS/Linux'. The same directory also contains some binary files to run under Linux. Currently gcc, bash, update, uemacs, tar, make and fileutils. Several people have gotten a running system, but it's still a hackers kernel. Linux still requires a AT-compatible disk to be useful: people are working on a SCSI-driver, but I don't know when it will be ready. There are now a couple of other sites containing linux, as people have had difficulties with connecting to nic. The sites are:
Tupac-Amaru.Informatik.RWTH-Aachen.DE ( directory /pub/msdos/replace ( directory /pub/linux
There is also a mailing list set up ''. To join, mail a request to ''.
It's no use mailing me: I have no actual contact with the mailing-list (other than being on it, naturally).
Mail me for more info:
0.11 has these new things:
- demand loading
- code/data sharing between unrelated processes
- much better floppy drivers (they actually work mostly)
- bug-corrections
- support for Hercules/MDA/CGA/EGA/VGA
- the console also beeps (WoW! Wonder-kernel :-)
- mkfs/fsck/fdisk
- US/German/French/Finnish keyboards
- settable line-speeds for com1/2

As you can see: 0.11 was actually stand-alone: I wrote the first mkfs/fsck/fdisk programs for it, so that you didn't need minix any more to set it up. Also, serial lines had been hard-coded to 2400bps, as that was all I had.

Still lacking:
- init/login
- rename system call
- named pipes
- symbolic links

Well, they are all there now: init/login didn't quite make it to 0.12, and rename() was implemented as a patch somewhere between 0.12 and 0.95. Symlinks were in 0.95, but named pipes didn't make it until 0.96.

0.12 will probably be out in January (15th or so), and will have:
- POSIX job control (by tytso)
- VM (paging to disk)
- Minor corrections

Actually, 0.12 was out January 5th, and contained major corrections. It was in fact a very stable kernel: it worked on a lot of new hardware, and there was no need for patches for a long time. 0.12 was also the kernel that "made it": that's when linux started to spread a lot faster. Earlier kernel releases were very much only for hackers: 0.12 actually worked quite well.

Note: The following document is a reply by Linus Torvalds, creator of Linux, in which he talks about his experiences in the early stages of Linux development.

From: torvalds@klaava.Helsinki.FI (Linus Benedict Torvalds)
Subject: Re: Writing an OS - questions !!
Date: 5 May 92 07:58:17 GMT
In article <> (V.Narayanan) writes:
Hi folks,
For quite some time this "novice" has been wondering as to how one goes about the task of writing an OS from "scratch". So here are some questions, and I would appreciate if you could take time to answer 'em.
Well, I see someone else already answered, but I thought I'd take on the linux-specific parts. Just my personal experiences, and I don't know how normal those are.

1) How would you typically debug the kernel during the development phase?
Depends on both the machine and how far you have gotten on the kernel: on more simple systems it's generally easier to set up. Here's what I had to do on a 386 in protected mode. The worst part is starting off: after you have even a minimal system you can use printf etc, but moving to protected mode on a 386 isn't fun, especially if you at first don't know the architecture very well. It's distressingly easy to reboot the system at this stage: if the 386 notices something is wrong, it shuts down and reboots - you don't even get a chance to see what's wrong. Printf() isn't very useful - a reboot also clears the screen, and anyway, you have to have access to video-mem, which might fail if your segments are incorrect etc. Don't even think about debuggers: no debugger I know of can follow a 386 into protected mode. A 386 emulator might do the job, or some heavy hardware, but that isn't usually feasible. What I used was a simple killing-loop: I put in statements like die:
jmp die
at strategic places. If it locked up, you were ok, if it rebooted, you knew at least it happened before the die-loop. Alternatively, you might use the sound io ports for some sound-clues, but as I had no experience with PC hardware, I didn't even use that. I'm not saying this is the only way: I didn't start off to write a kernel, I just wanted to explore the 386 task-switching primitives etc, and that's how I started off (in about April-91). After you have a minimal system up and can use the screen for output, it gets a bit easier, but that's when you have to enable interrupts. Bang, instant reboot, and back to the old way. All in all, it took about 2 months for me to get all the 386 things pretty well sorted out so that I no longer had to count on avoiding rebooting at once, and having the basic things set up (paging, timer-interrupt and a simple task-switcher to test out the segments etc).

2) Can you test the kernel functionality by running it as a process on a different OS?
Wouldn't the OS(the development environment) generate exceptions in cases when the kernel (of the new OS) tries to modify 'priviledged' registers? Yes, it's generally possible for some things, but eg device drivers usually have to be tested out on the bare machine. I used minix to develop linux, so I had no access to IO registers, interrupts etc. Under DOS it would have been possible to get access to all these, but then you don't have 32-bit mode. Intel isn't that great - it would probably have been much easier on a 68040 or similar. So after getting a simple task-switcher (it switched between two processes that printed AAAA... and BBBB... respectively by using the timer-interrupt - Gods I was proud over that), I still had to continue debugging basically by using printf. The first thing written was the keyboard driver: that's the reason it's still written completely in assembler (I didn't dare move to C yet - I was still debugging at about instruction-level). After that I wrote the serial drivers, and voila, I had a simple terminal program running (well, not that simple actually). It was still the same two processes (AAA..), but now they read and wrote to the console/serial lines instead. I had to reboot to get out of it all, but it was a simple kernel. After that is was plain sailing: hairy coding still, but I had some devices, and debugging was easier. I started using C at this stage, and it certainly speeds up developement. This is also when I start to get serious about my megalomaniac ideas to make "a better minix that minix". I was hoping I'd be able to recompile gcc under linux some day... The harddisk driver was more of the same: this time the problems with bad documentation started to crop up. The PC may be the most used architecture in the world right now, but that doesn't mean the docs are any better: in fact I haven't seen /any/ book even mentioning the weird 386-387 coupling in an AT etc (Thanks Bruce). 
After that, a small filesystem, and voila, you have a minimal unix. Two months for basic setups, but then only slightly longer until I had a disk-driver (seriously buggy, but it happened to work on my machine) and a small filesystem. That was about when I made 0.01 available (late august-91? Something like that): it wasn't pretty, it had no floppy driver, and it couldn't do much anything. I don't think anybody ever compiled that version. But by then I was hooked, and didn't want to stop until I could chuck out minix.

3) Would new linkers and loaders have to be written before you get a basic kernel running?
All versions up to about 0.11 were crosscompiled under minix386 - as were the user programs. I got bash and gcc eventually working under 0.02, and while a race-condition in the buffer-cache code prevented me from recompiling gcc with itself, I was able to tackle smaller compiles. 0.03 (October?) was able to recompile gcc under itself, and I think that's the first version that anybody else actually used. Still no floppies, but most of the basic things worked. Afetr 0.03 I decided that the next version was actually useable (it was, kind of, but boy is X under 0.96 more impressive), and I called the next version 0.10 (November?). It still had a rather serious bug in the buffer-cache handling code, but after patching that, it was pretty ok. 0.11 (December) had the first floppy driver, and was the point where I started doing linux developement under itself. Quite as well, as I trashed my minix386 partition by mistake when trying to autodial /dev/hd2. By that time others were actually using linux, and running out of memory. Especially sad was the fact that gcc wouldn't work on a 2MB machine, and although c386 was ported, it didn't do everything gcc did, and couldn't recompile the kernel. So I had to implement disk-paging: 0.12 came out in January (?) and had paging by me as well as job control by tytso (and other patches: pmacdona had started on VC's etc). It was the first release that started to have "non-essential" features, and being partly written by others. It was also the first release that actually did many things better than minix, and by now people started to really get interested. Then it was 0.95 in March, bugfixes in April, and soon 0.96. It's certainly been fun (and I trust will continue to be so) - reactions have been mostly very positive, and you do learn a lot doing this type of thing (on the other hand, your studies suffer in other respects :)


Document Changes:
August 1, 2001: First compiled version.

Do you need Linux?

Eventually the revolutionaries become the established culture, and then what will they do
- Linus Torvalds

It is apt that I start such a topic with a quote from the man who has been one of the biggest influences in the growth of 'alternative' computing. If you are a person who keeps reasonably up to date with the world of the Internet, and the world of computers in general, you will have heard that there is this thing called Linux that is making quite a few otherwise sane people rather crazy.

You might also have realized that a lot of this crazy ire is directed at a company in Redmond, and at a person who, it is reported, started the whole thing in a garage. What is all this? Why should you bother? Do you really need Linux?

A Revolution Begins

Before we delve into questions such as these, a little background helps develop a perspective. And it never hurts.

The commercial world of today has made one concept very important: "return on investment". Such a concept has left in its wake a tremendous amount of muck and nonsense. Conventional wisdom says that if you spent some time doing something, then your effort must be adequately accounted for. It therefore does not make sense to give away hours of your time, which you could have spent in more profitable ways, working as a faceless, nameless entity so that others' world is made simpler and more productive.

That is what a number of people are now doing, as the many programmers of the present-day miracle called Linux, whose users are estimated at 7 to 8 million. It all started in 1991, when Linus Torvalds, a 21-year-old computer science student at the University of Helsinki, decided that his operating system, Minix, a Unix-look-alike, was not good enough. He was pretty sure he could write something better. So, in a very brave and foolish attempt, he started to write his own. But in doing this he made an important decision: to involve people. He turned to a number of the newsgroups of the time to ask for ideas and guidance.

Then he did something unbelievable. He put up all his work on the Internet, source code and all, and asked people to download it and modify it to increase its functionality. What happened then was something that could not have been predicted. The 'netlanders' decided to fall in love with this unique experiment of writing a fully free and multifunctional version of 'minix'. You will find a number of pre-1.0 release posts by Linus above. From recognizing that he needed the help of unknown people for his project, to deciding to put up the work as a tribute to all those who contributed: this was the step that changed the face of things to come.

He was not alone in this. There were a number of other people's ideas, hopes and aspirations that he carried with him in his endeavor. But that was not what was driving him at the time. In fact, Linus himself was very deprecatory about his work back then. It all started as the unique experiment of a man who was not satisfied with the environment he found himself in, and decided to do something about it.

With the help of hundreds of volunteers, the progress of Linux continued. The early 1.0 releases were completed, and the kernel started boasting the features of modern software. In parallel continued the porting of the utilities of the GNU project to Linux. So now we have Linux/GNU as one of the best replacements for the standard desktop OS, Windows.

What is Linux?

Contrary to what is commonly supposed, Linux is not a complete operating system, somewhat in the way that Windows is not the total system that you work on. Linux refers to the kernel that is run on the system. The kernel is the most important program in the Linux setup. It is a piece of very robust and beautifully written code that forms the interface between every other program and the hardware that lies beneath. It is the most important component in the system, and performs a number of tasks that allow the other programs to run easily without caring too much about various critical yet mundane details.

The Linux system, on the other hand, that we normally refer to as Linux, is technically called the Linux/GNU system. GNU is a project that was started with a view to making software totally free and open source. Many of the utilities that come bundled with a standard Linux installation are utilities written under the GNU project. Hence the combination of GNU and Linux is responsible for the revolution that is taking place now.

What revolution?

The trend of the PC was set by a company called Microsoft, with the release of DOS, which was a far departure from the scary machines of the day running very powerful yet arrogant programs. The computer till then was limited to huge monsters, costing heaven and earth, manned by technicians, with no concept of user-friendliness. Into such an environment, coinciding with the advent of smaller, cheaper, more powerful desktop machines, the disk operating system simplified the system so that everyone could use it. Therein lay the reason for the success of this modus operandi. But with so blinding a focus on usability, somewhere along the line the basic reason for existence was forgotten. What started as a movement for the inclusion of the masses in computing turned into a sorry excuse for sloppy programming, simplistic uniform interfaces and a general degradation of the entire computing cycle.

The world has seen, in the form of the Macintosh, an example of slick user interfaces. It had already seen, in the Unix environment, power and stability. But under the guise of operability, it saw excuses. The Windows operating system has maintained its look and feel since Windows 3.1, its first version to come with a GUI. The release of Windows 95 was met with looks of astonishment by members of the Apple tribe, who had had that kind of operability almost 10 years earlier. But for a sheer chance of positioning, Windows would not have made it to where it is now.

With its single-minded fanaticism for easing things for the user, Windows has done something very peculiar. It has made the user end very easy and simple to use. It has also made basic programming so easy that it is now possible for non-programmers to program too. But with this heavy bias, it has made actual implementation by a professional programmer ridiculously complicated. The whole of the Windows software architecture is so ridden with patches and workarounds that no coherent picture can be made of it.

This usability focus also introduces the important concept of backward compatibility. In a word, this means that a Windows programmer can never leave his past behind him and move on. No matter what he does, he must support everything he has done, because of the remote possibility that someone might still be using the software and might be inconvenienced. Add to this the sloppiness that Windows code introduces: not only do non-programmers program, they add to the headaches of all the users of their code. The DLL-based structure, supposed to make life easier, became the trap that might just break Windows apart.

This has introduced a vast intellectual gap between code developers and users. It has necessitated such a difference in perception of the machine between the two that the user hands over his machine to the developer, completely, whenever he wants a new program installed. With the lack of any credible system of security, it is possible for just about anyone to take control of a machine with a program written to do just that. Getting a new program means running an installation program, and what it does is so opaque to the user that he is not even aware whether the program is a beneficial one or not. In fact, even a discerning user might not have the means to know more. This sheer bias against involving the user in the maintenance of his own machine has led to phenomena such as the proliferation of viruses. No one knows, behind what interface, lies what program that might be responsible for what damage.

The proliferation of the Windows environment has also led to a number of erroneous beliefs about the personal computer. One of them is the idea that a PC is personal: by implication no one else works on it, and by implication there is no concept of privileges or restrictions. By a further implication, viruses and trojans are a commonplace occurrence, one to be rectified rather than prevented. This hardly reflects reality. Even a truly personal system is used by more than one person. And the lack of security safeguards has made it impossible for the owner to put in place any checks to safeguard his own machine.

A second one is the automatic acceptance of the irrationality of the computer. "It just happens that way" is a view a lot of Windows users hold. If restarting Windows works, then that is the best method, and the only method. To work around a certain bug in the software, hit-and-miss policies are devised and stuck to. This gives the computer an aura of art, an aura of black magic. It makes the average user even more dependent on the system, and hence on the vagaries of Windows.

The third, and a very serious idea, is the acceptance of the 'fact' that crashes and instability are an integral part of a system. Reinstalling Windows is a little trick right at the top of the charts. This has also led to a sheer drop in the performance demanded of a system. "Too many windows cause a crash" is absurd because, being a multitasking operating system equipped with paging techniques, Windows was supposed to deal with exactly such a demanding environment. Windows has been successful in shifting the onus of its programming blunders onto the users of its systems.

So why have we been accepting all this? A very valid question. There are several reasons. The first is the fact that Windows was there first for desktop users. Another is the tremendous job done by the public relations team at Redmond. What today go about under the names of Internet viruses and email worms should properly have been named Outlook worms and Windows viruses. Any simple analysis would reveal that a host of the problems commonly associated with the Internet are in fact those that befall users of Microsoft software. Further, if it were not for Microsoft's carefully worded user license agreement, which holds the company blameless for absolutely anything, it would probably have been awash in class-action lawsuits by now.

This, then, is the revolution. The rebirth of the good old days of computing in a new avatar. The reincarnation of the power and stability of good old Unix, on the desktop. The taming of the shrew, so to speak. What was till yesterday the private domain of students of computer science is now being opened to the people at large. In short, it is a chance for the average user to truly get the best he can afford. It is a chance for the user to be able to use the computer to its fullest capacity.

Does it affect me?

The development of the desktop for the Linux system is probably the main reason this revolution has become a distinct possibility. Linux offers a lot to every section of the computer-using populace.

Are you a Novice user?

You will benefit from a new user interface. Linux offers all that you could possibly want from the desktop. There are tools that allow you to create all the office documents you have been used to. StarOffice is a suite of tools that not only replicates all that you have been getting from the Microsoft Office suite, but also helps you in the process of migration by providing compatibility with the existing Office formats, so that you can safely convert all your existing information to the new OS. There are also applications that allow you to create presentations, create and maintain spreadsheets, and of course check mail. There is also a host of other applications, like Personal Information Managers, that help you increase your productivity.

Are you an Inquisitive user?

You will benefit from the tremendous amount of information that is available on the topic of Linux, along with related information about programming, API calls and the like. If you seriously want to learn more about the machine you are dealing with, Linux offers you a look beneath the hood, and lets you unleash the power of your box. It is a hands-on approach to running your own personal machine. The stability of the system also allows you to experiment. As a user you will have access to all those programs that are normally out of bounds for a Windows user. You get to install and administer Web and FTP servers, mail servers, and a host of other applications that give you a glimpse of the true power of Linux. As icing on the cake, you will be privileged to look at the actual source code that runs your system.

Are you a Power user?

You will fall in love with Linux. The early Unix machines were built with networking in mind. Applications were written without the assumption that they would be limited to a single terminal and one user. You will have a host of applications that allow web access. You will be able to run and maintain servers for Internet usage. You will be able to configure a variety of services that can be used by others, even those unfortunates who run Windows. Web, mail (POP and SMTP), FTP, telnet, rsh and proxy servers are some of the services that can be configured to run across networks. Given the status of Linux as an alternative Operating System, it comes bundled with a number of tools that allow cross-platform access, including remote administration.

Are you a Programmer?

In spite of what Redmond tells you, you believe, as a true, hardcore programmer, that programming in C is the best way to do it. Or you may have imbibed the classic ways of Larry Wall. Whatever the case, Linux offers you programming options that compare with the best in the business. That old simple program in C, which you believed was the most important thing to ever find its way through a keyboard, is suddenly useful again. Not only does Linux offer one of the richest sets of API calls that an OS can offer; with the evolution of the new graphical interfaces, it also has a number of functions that let you program the tremendously flexible and powerful graphical interface. Toolkits such as GTK+ allow programmers to access the power of the GUI through native C code. And if you thought you were better off without having to write code for mundane tasks like the creation of interfaces, Linux comes with tools to generate the interfaces with a click-and-drag approach, à la Visual Basic or Visual C++.

Can I afford it?

A tricky question, really. It depends on what you mean by 'afford'. But the answer can really be 'yes'.

If by 'afford' you mean the time required to change to a new Operating System, the answer certainly depends a lot on you as the user. There are currently a number of distributions that make installation and maintenance an easy chore by providing user-friendly, graphical interfaces wherever possible. Work in this field is still in progress, so even as we speak, newer and friendlier interfaces are being released that take user-friendliness to a new level.

All said and done, problems for users do exist. Linux is still under development. All that you get is the work of a number of people who did it in their spare time and have allowed you to use it for free. But it is these people who are always available to help you, and there is a lot of accessible information regarding all aspects of the Linux system. If you can afford some time to look up and solve your own problems, there is nothing to beat that pleasure. A little bit of information, a little bit of spare time, and a little creativity and the heart to experiment are all that is required for you to use Linux.

If by 'afford' you mean the price, well, what can I say, other than that the whole of the Linux system is available free of charge. Most components are available under a very flexible licensing scheme called the GPL. This scheme does not prevent the commercialization of Linux components, but provides a mechanism that protects the rights of the creators while allowing you to tinker with the source code. There are also a number of commercial versions available that let you order a Linux distribution at a very reasonable price (unlike the heaven and earth you have to pay for those from Redmond), and that provide you with a limited amount of after-sales service too.

When do I start... now?

You do not need to be an expert to use Windows. At the same time no matter how much you use Windows, you will never be called an expert. This is because the Windows environment was not created to give you expertise of any kind. Again you do not need to be an expert to start using Linux, but once you are done with it, you will be one hell of an expert.

Some of the early pioneers

In the early 1980s, Richard Stallman founded the GNU Project, an attempt to build a free operating system modelled on Unix, and the Free Software Foundation, dedicated to promoting free software. He is also the brains behind the GNU General Public License, or "copylefting."

Larry Wall wrote the first version of Perl in 1987.

Andrew Tanenbaum released Minix, which inspired Linux, in 1987.

David Greenman served as principal architect on the FreeBSD team in 1993.

Brian Behlendorf was the chief engineer of the team that built the Apache Web server in 1995.

Document Changes
August 01, 2001: First version.
April 02, 2009: Minor updates & corrections.

July 24, 2001


WADIWRK4 is a public-domain, DOS-based utility that can be used for analyzing water distribution networks.

In the words of the creators the program is officially called
WADISO - Water Distribution System Optimization.
Version - March 19,1987
Public Domain Program Developed by GESSKER, SJOSTROM & WALSKI, U.S. Army Corps Of Engineers - Water Ways Experimentation Station.

This paper deals with the following topics

  • Introduction
  • Requirements and Usage
  • Preparation of Data file
  • Running the Program
  • Interpreting the Output
  • More information
  • Trouble shooting


The program WADIWRK4 (henceforth called the program) is a DOS-based, menu-driven program. It accepts input from the user, including the details of the pipe network he wants to solve. It then allows the user to solve the network and output the result for inspection. The user can then check and change the system until he gets the desired result. This is the interactive method of running the program.

Requirements and Usage

This method of usage is satisfactory, but can be very painful for large networks. Knowing the execution pattern of the program, however, it is possible to automate the process in a very intuitive way.

To run the program automatically, we need to prepare a data file containing text in exactly the format that would be typed while the program runs in interactive mode. If this file is then fed to the program as if it were being typed, the program can be run repeatedly just by changing the prepared input file.

The required format is given below, with commentary as required. The user is, however, urged to run the program normally in its interactive mode so as to become familiar with it and fully understand the workings of the various symbols in the data file.

To proceed one must have a list of details regarding the network that has to be solved. It would be helpful to know the following in advance.

  • Various nodes, denoted by numbers (up to a maximum of 800)
  • Various pipes, identified (not necessarily uniquely) by their starting and ending nodes, and uniquely by a pipe number
  • The details of the various pipes, including diameter (inches), length (feet) and the Hazen-Williams coefficient, which denotes the roughness of the pipe (representative values are about 100 to 150, but vary from case to case)
  • Elevations of the various nodes
  • Demands at the various nodes (note that the program cannot handle pipe demands and needs them converted to nodal demands)
  • Water level height at the supply tank
  • Peak load factor (connecting the normal demand to the peak allowed demand)
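For a small three-pipe network like the sample used later in this paper, these details can be collected up front before writing the data file. A minimal sketch in Python, where the tank node and head are illustrative assumptions, not values prescribed by the program:

```python
# Network details for a small three-pipe network (values are illustrative,
# taken from or consistent with the sample input file in this paper).
pipes = [
    # (pipe_no, begin_node, end_node, diameter_in, length_ft, hazen_williams)
    (1, 1, 2, 3.15, 40, 150),
    (2, 1, 3, 3.15, 30, 130),
    (3, 2, 3, 3.15, 50, 120),
]
elevations = {1: 10, 2: 11.5, 3: 11}  # node -> elevation (feet)
demands = {1: 10, 2: 12, 3: 21}       # node -> demand (gallons/min)
tank = (1, 100.0)                     # (supply node, head in feet) -- assumed
peak_factor = 1                       # RATIO: peak demand / normal demand

# Node numbers are capped at a maximum of 800.
for _, begin, end, *_ in pipes:
    assert begin <= 800 and end <= 800
```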

Preparation of Data file

For the purpose of creation of the data file we will consider a simple network, shown below.

Given below is a sample input file.

*C This is a comment
This is heading
1 1 2 3.15 40 150
2 1 3 3.15 30 130
3 2 3 3.15 50 120
1 10
2 11.5
3 11
1 10
2 12
3 21
ACCURACY .01 .01

The first line can be used as a comment; use *C to do that. But the program does not take kindly to comments in between the program input, so take care.

The keyword NOM stands for "No messages". You would not normally use this when running the program interactively, but it helps keep the prompts to a minimum during the kind of execution we are looking at, so that the output is not cluttered.

The next two numbers are the corresponding menu options in the program. One for starting a new job and the second for starting input. After these two commands the program is ready to take your input.

The first piece of input you give is the name of the job (the heading).

Then you start entering the pipe network details. They can be entered in the format shown below, but the maximum width of the fields cannot exceed that given below. To restate: it is enough to delimit the input with spaces; it is not necessary to follow strict column delimitation. In any case, the maximum width per field is governed as below.


To interpret the letters above: each set of letters is a field width, and the blank between fields is mandatory. It is possible to have a smaller width, NOT a larger one.

The various fields (as delimited by the format given above) are:

Pipe# BeginningNode EndingNode PipeDiameter(in) PipeLength(ft) HazenWilliamsCoeff
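Since space-delimited input is accepted, each pipe record reduces to a plain string join; a minimal sketch in Python (the helper name is my own):

```python
def pipe_record(pipe_no, begin_node, end_node, diameter_in, length_ft, hw_coeff):
    """Format one pipe record in the field order above, space-delimited."""
    return " ".join(str(v) for v in (pipe_no, begin_node, end_node,
                                     diameter_in, length_ft, hw_coeff))

# Reproduces the first pipe line of the sample input file.
print(pipe_record(1, 1, 2, 3.15, 40, 150))  # -> 1 1 2 3.15 40 150
```

Remember that each field still has a maximum width; the join only takes care of the delimiting.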

Then we have the keyword ELEVATION, which signals the program to interpret the following input as the heights of the various nodes. The format and the data types are as given below.

Node# Elevation(feet)

The keyword OUTPUT comes next, to signal the beginning of the section that deals with the nodal demands (hence the word output). The format is given below.

Node# Demand(gallons/min)

The next keyword is TANK, which gives the following node number the status of a supply point and also specifies the head available. The format is


Where the two inputs are Node# and Pressure head(feet) of water at the tank.

The next keyword is RATIO. This is the ratio of the peak demand to the normal demand. While giving input we take all the demands to be normal demands; this factor allows us to scale the demand by a certain number of times. Use 1 if you do not want to scale the demand. This is especially useful for changing the total demand proportionally at a later date, without having to change all the demand values. The format is


Where the number is the scaling factor
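The scaling itself is a straight multiplication applied to every nodal demand. For instance, taking the demands from the sample file and a factor of 1.5:

```python
demands = {1: 10, 2: 12, 3: 21}  # normal nodal demands (gallons/min), as in the sample file
ratio = 1.5                      # peak-to-normal factor supplied via RATIO

# Effective demands the program solves with after scaling.
peak_demands = {node: d * ratio for node, d in demands.items()}
# peak_demands == {1: 15.0, 2: 18.0, 3: 31.5}
```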

The last input keyword, ACCURACY, defines to what accuracy the calculations have to be performed. The smaller the number, the better the accuracy realized. This is, however, not very critical in a water distribution network, as such accuracies are seldom realized with the kind of items used. Use the values given below. Note that an additional (optional) factor has been left out of the listing above: the number of times the network has to be solved (iterated) before giving up, in case the accuracies are not reached. This defaults to 25 and it is not necessary to change it. Specifying too low a value, however, can cause the iteration to stop before reaching its conclusion. The format is

ACCURACY AccuracyPipes AccuracyNodes MaxIterations

The END keyword causes the program to stop the input section. It takes no other input and looks like below.


The input part is done, and it is now necessary to run the program. The command given below (that is, a zero and a capital 'C') is used to ask the program to balance the system. This causes the input to be used to calculate the network, and will also output the table if the input is right. This output has to be trapped by you so that you can see it later; the process will be explained later.


Finally, the quit command. This will close the program, returning a 0 (zero) to the system.


Running the Program

Before executing the program you must prepare the input file. Prepare the file in the format shown above, using an editor like 'edit' or 'notepad'. Desist from using Word or WordPad unless you are sure you can save the file in pure text format; a discussion of the details is beyond this paper, but the program will fail miserably if the prepared data file is not a text-only file. Save the file with a .wad extension. (In Notepad you have to enclose the name in quotes, "input.wad" <-- like this.) Now you are ready to run the program.

To execute the program, type the following at the DOS prompt, substituting for input.wad the name you used to save the file. Prefer the .wad extension.

C:\>wadiwrk4 < input.wad

This assumes both the program and the file are in the same directory. If not, create a new directory and copy into it both the program (WADIWRK4.EXE) and the input file (your '.wad' file). Then cd to that directory and type the above.

If your file is written properly, the program will run, causing output to be printed on the screen. (If it does not run, go to the troubleshooting section.) If you seem to be getting some output but it is going by so fast that you cannot read it, then you need to store the output somewhere. You can store it in a file called 'output.out' by doing the following.

C:\>wadiwrk4 < input.wad >output.out

Now the output will be stored in the file output.out (and will not flash on the screen). Open the output file (using tools similar to those you used to make the input file) and read through it carefully.

What is happening above is that the first part of the statement causes the file input.wad to be read as a text file and sent to the program as if someone were typing it at the console. The program does not differentiate between the two modes of input; DOS is tricking it into thinking that the whole process is interactive. Using NOM ensures that there is no junk in the output (in the form of the various prompts). The second part is that, using the '>' symbol, we are telling DOS to intercept all output meant for the screen and put it in a file for us to see. So the text in output.out is exactly the same as what would have been printed on the screen.
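Because the program only ever sees redirected text, the same trick can be scripted to rerun it over several variants of the input file. A sketch in Python, assuming the executable name WADIWRK4.EXE from above and that the RATIO keyword and its factor share one line, as ACCURACY does in the sample (the helper names are my own):

```python
import subprocess

def with_ratio(template: str, ratio: float) -> str:
    """Return a copy of the input file text with its RATIO line replaced."""
    out = []
    for line in template.splitlines():
        if line.upper().startswith("RATIO"):
            line = f"RATIO {ratio}"
        out.append(line)
    return "\n".join(out) + "\n"

def run_wadiwrk(input_text: str, out_path: str) -> None:
    """Script equivalent of 'wadiwrk4 < input.wad > output.out'."""
    with open(out_path, "w") as out:
        subprocess.run(["WADIWRK4.EXE"], input=input_text,
                       stdout=out, text=True, check=True)

# A tiny stand-in template; in practice read your real '.wad' file instead.
template = "*C demo\nThis is heading\nRATIO 1\nEND\n"
variants = {r: with_ratio(template, r) for r in (1.0, 1.25, 1.5)}
# Each variant could now be fed to run_wadiwrk(variants[r], f"output_{r}.out").
```

with_ratio is pure text work and runs anywhere; run_wadiwrk needs the actual executable on hand.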

Interpreting the Output

The output is self-explanatory and well laid out, but here are a few pointers. The first part is an echo of the input. (Remember, DOS is putting everything meant for the screen into this file; the input is just you typing it in.) There will be two tables (split up if necessary) that contain all the items for the nodes and the pipes. Of particular use may be the water head values at the nodes and the velocity values for the pipes. Check them against your requirements.

More Information

Go to the site for some more information.

Go to the site for downloading wadiwrk 4.1 evaluation version.

Trouble Shooting

Check the following if you have a problem.

Problem: Program not executing (perhaps exiting with a error)

  • Did you enter the command exactly as shown?
  • Check the redirection symbols (the '<' and the '>' signs).
  • Did you make a mistake earlier that wiped out your input file?
Problem: Simulation error
  • Check the input file to see that all the commands (the numbers) are correct. There could be a missing command that is causing the input to terminate abnormally.
  • Check all the keywords. Did you by any chance leave them in lower case, or do you have a '.' after one of them?
Problem: No Proper output
  • Go through the output file. Check for any place where it gives an error.
  • Check the input for correctness
  • Check that there are no spaces or blank lines
  • Check to see that the field limit is not exceeded anywhere

Wadiwrk is a decent program for network analysis and, despite its clunky interface, worth learning.