August 24, 2001

Analysis and Design of Reinforced Concrete Chimneys

Following is the entire content of my B.Tech project, submitted in Civil Engineering.

The paper deals with the various loads associated with the analysis of reinforced concrete chimneys, the methodologies involved in estimating those loads, and the methods of design that account for their effects.

Abstract:

The present thesis deals with the analysis and design aspects of reinforced concrete chimneys. It reviews the various load effects that are incident upon tall free-standing structures such as chimneys, and the methods for estimating them using various codal provisions. A chimney is subject to several loads: wind loads, seismic loads, temperature loads and so on. The codal provisions for evaluating these have been studied and applied, and the load effects obtained using the procedures outlined by the various codes have been compared.

The design strength of the chimney cross sections has also been estimated. Design charts have been prepared to ease the design of the chimney cross sections, and their usage is exemplified.

A typical 250 m chimney has been analyzed and designed using the procedures outlined above. Drawings have been prepared for the chimney, and its foundation has also been designed.


Document Changes
August 24, 2001: First published.

August 23, 2001

Why I don't trust Microsoft: 'smart tags'

A rant about Microsoft and their smart tags.

Edit: As it turns out, the annoyances of these smart tags have not entirely gone away; they have merely morphed into a cross-browser technology. The difference is that now they are under the webmaster's control and generate income for the webmaster.

What follows is not my article; the real author's name is given below. I got this in a mail from our local LUG and put it up so that I could refer others to it. If anyone knows the true source of the article, please write to me; I will be more than glad to put up a credit.

Now Microsoft has come along with a "brilliant" idea. They want to piggyback their own selected content on top of your work. The idea is to have their products (such as Internet Explorer and the Office suite) scan web pages and documents for keywords and phrases known to Microsoft. Any of these that are found would be underlined with a special purple "squiggle" to show that they are "smart tags".

Anyone viewing the page could then click on the smart tag and be transported to a Microsoft web site for more information. For example, you could write a web page about the Grand Canyon, and the phrase "Grand Canyon" could be underlined, allowing your visitors to check out the Expedia.Com page about how to book travel to the area.

Why does Microsoft want to do this? It's really very simple - to make an incredible amount of money. Look at it this way: Microsoft would suddenly have at its disposal every single document viewed with a new Microsoft product as a potential advertisement. Wow. That's power. No, that is an understatement of incredible magnitude. This is more than power - this is the harnessing of everyone's creative energy into a huge global advertising tool. It totally staggers the imagination.

You could be looking at a newspaper site, reading an article about train travel, and click on numerous links to Microsoft sites (and presumably third party sites which paid Microsoft for the privilege) selling train related products and services. If you read a classified ad on that same newspaper site selling an automobile, the word "Cadillac" could be underlined with a smart tag linking to a Cadillac dealer.

Content (the tags) is added dynamically to web pages by the browser, without the permission of the person who created the pages (the webmaster or author). While strictly speaking this might not violate copyright laws (though it might be considered vandalism), it sure is rude. In fact, most people would consider it highly unethical.

As an example, suppose you bought a book through a book club. Before it was shipped to you, someone opened the book and examined every single page, adding comments here and there about how you could purchase this or get more information about that. You would be very annoyed if you were the author, you'd probably be livid if you were the publisher of the book, and you'd almost certainly return it if you were the customer.

Carefully crafted web pages whose look and feel has been lovingly built for countless hours by dedicated designers, authors, artists and webmasters would be randomly covered with trash by a company intent on siphoning away visitors to their own sites and pages.

And what about the problem of inappropriate content? Suppose you ran a site campaigning against animal cruelty, and smart tags went ahead and added to your pages links to other sites selling muzzles for horses. You wouldn't like that very much, would you?

Another problem is that smart tags are "opt-out": the tags are inserted unless you (the webmaster or the user) indicate that you do not want them. Opt-out is the scheme preferred by many advertisers, because they understand that most people will not bother to remove themselves from the list. Opt-in is the scheme preferred by most consumers, because then they receive only what they have requested.

Webmasters can keep smart tags from working on their site by including a special "opt-out" meta tag in the head of each and every page. I highly recommend that all webmasters include this tag to prevent smart tags from operating:

<meta name="MSSmartTagsPreventParsing" content="TRUE">

As soon as smart tags appeared in a beta release of Windows XP, the furor began. It was awesome to see. Microsoft was hit from all sides by just about everyone, because their intentions were so transparent and so blatantly monopolistic that even the most conservative could see what they were up to. The perceived dangers provoked a flood of protests to the giant company, so many that Microsoft was forced to remove the feature from its products.

"As a result of smart tags in beta versions of Windows XP and IE, we received lots of feedback, and have realized that there is a need to better balance the user experience with the legitimate concerns of content providers and web sites," Microsoft said in a statement on June 28th, 2001.

Keep an eye on Microsoft, however, because they also added, "Microsoft remains committed to this type of technology, and will work closely with content providers and partners in the industry in the coming months to further refine how it can be used."

by Richard Lowe Jr.

Document Changes:
August 23, 2001: First version published.

August 01, 2001

Do you need Linux? addendum

The following are some nostalgic texts, taken from the Internet, that show the history of Linux. Please do check the credits at the end.

Note: The following text was written by Linus on July 31, 1992. It is a collection of various artifacts from the period in which Linux first began to take shape.


This is just a sentimental journey into some of the first posts concerning linux, so you can happily press Control-D now if you actually thought you'd get anything technical.

From: torvalds@klaava.Helsinki.FI (Linus Benedict Torvalds)
Newsgroups: comp.os.minix
Subject: Gcc-1.40 and a posix-question
Message-ID: <1991Jul3.100050.9886@klaava.Helsinki.FI>
Date: 3 Jul 91 10:00:50 GMT
Hello netlanders,
Due to a project I'm working on (in minix), I'm interested in the posix standard definition. Could somebody please point me to a (preferably) machine-readable format of the latest posix rules? Ftp-sites would be nice.


The project was obviously linux, so by July 3rd I had started to think about actual user-level things: some of the device drivers were ready, and the harddisk actually worked. Not too much else. Just a success-report on porting gcc-1.40 to minix using the 1.37 version made by Alan W Black & co.

Linus Torvalds torvalds@kruuna.helsinki.fi
PS. Could someone please try to finger me from overseas, as I've installed a "changing .plan" (made by your's truly), and I'm not certain it works from outside? It should report a new .plan every time.


Then, almost two months later, I actually had something working: I made sources for version 0.01 available on nic sometimes around this time. 0.01 sources weren't actually runnable: they were just a token gesture to arl who had probably started to despair about ever getting anything. This next post must have been from just a couple of weeks before that release.

From: torvalds@klaava.Helsinki.FI (Linus Benedict Torvalds)
Newsgroups: comp.os.minix
Subject: What would you like to see most in minix?
Summary: small poll for my new operating system
Message-ID: <1991Aug25.205708.9541@klaava.Helsinki.FI>
Date: 25 Aug 91 20:57:08 GMT
Organization: University of Helsinki
Hello everybody out there using minix - I'm doing a (free) operating system (just a hobby, won't be big and professional like gnu) for 386(486) AT clones. This has been brewing since april, and is starting to get ready. I'd like any feedback on things people like/dislike in minix, as my OS resembles it somewhat (same physical layout of the file-system (due to practical reasons) among other things). I've currently ported bash(1.08) and gcc(1.40), and things seem to work. This implies that I'll get something practical within a few months, and I'd like to know what features most people would want. Any suggestions are welcome, but I won't promise I'll implement them :-)

Linus (torvalds@kruuna.helsinki.fi)
PS. Yes - it's free of any minix code, and it has a multi-threaded fs. It is NOT protable (uses 386 task switching etc), and it probably never will support anything other than AT-harddisks, as that's all I have :-(.


Judging from the post, 0.01 wasn't actually out yet, but it's close. I'd guess the first version went out in the middle of September -91. I got some responses to this (most by mail, which I haven't saved), and I even got a few mails asking to be beta-testers for linux. After that just a few general answers to quesions on the net:

From: torvalds@klaava.Helsinki.FI (Linus Benedict Torvalds)
Newsgroups: comp.os.minix
Subject: Re: What would you like to see most in minix?
Summary: yes - it's nonportable
Message-ID: <1991Aug26.110602.19446@klaava.Helsinki.FI>
Date: 26 Aug 91 11:06:02 GMT
Organization: University of Helsinki
In article <1991Aug25.234450.22562@nntp.hut.fi> jkp@cs.HUT.FI(Jyrki Kuoppala) writes:
>> [re: my post about my new OS]
>Tell us more! Does it need a MMU?
Yes, it needs a MMU (sorry everybody), and it specifically
needs a 386/486 MMU (see later).
>>PS. Yes - it's free of any minix code, and it has a multi-threaded fs.
>>It is NOT protable (uses 386 task switching etc)
>How much of it is in C? What difficulties will there be in porting?
>Nobody will believe you about non-portability ;-), and I for one would
>like to port it to my Amiga (Mach needs a MMU and Minix is not free).
Simply, I'd say that porting is impossible. It's mostly in C, but most people wouldn't call what I write C. It uses every conceivable feature of the 386 I could find, as it was also a project to teach me about the 386. As already mentioned, it uses a MMU, for both paging (not to disk yet) and segmentation. It's the segmentation that makes it REALLY 386 dependent (every task has a 64Mb segment for code & data - max 64 tasks in 4Gb. Anybody who needs more than 64Mb/task - tough cookies). It also uses every feature of gcc I could find, specifically the __asm__ directive, so that I wouldn't need so much assembly language objects. Some of my "C"-files (specifically mm.c) are almost as much assembler as C. It would be "interesting" even to port it to another compiler (though why anybody would want to use anything other than gcc is a mystery). Unlike minix, I also happen to LIKE interrupts, so interrupts are handled without trying to hide the reason behind them (I especially like my hard-disk-driver. Anybody else make interrupts drive a state- machine?). All in all it's a porters nightmare.
>As for the features; well, pseudo ttys, BSD sockets, user-mode
>filesystems (so I can say cat /dev/tcp/kruuna.helsinki.fi/finger),
>window size in the tty structure, system calls capable of supporting
>POSIX.1. Oh, and bsd-style long file names.
Most of these seem possible (the tty structure already has stubs for window size), except maybe for the user-mode filesystems. As to POSIX, I'd be delighted to have it, but posix wants money for their papers, so that's not currently an option. In any case these are things that won't be supported for some time yet (first I'll make it a simple minix- lookalike, keyword SIMPLE).

Linus (torvalds@kruuna.helsinki.fi)
PS. To make things really clear - yes I can run gcc on it,and bash, and most of the gnu [bin/file]utilities, but it's not very debugged, and the library is really minimal. It doesn't even support floppy-disks yet. It won't be ready for distribution for a couple of months. Even then it probably won't be able to do much more than minix, and much less in some respects. It will be free though (probably under gnu-license or similar).


Well, obviously something worked on my machine: I doubt I had yet gotten gcc to compile itself under linux (or I would have been too proud of it not to mention it). Still before any release-date.
Then, October 5th, I seem to have released 0.02. As I already mentioned, 0.01 didn't actually come with any binaries: it was just source code for people interested in what linux looked like. Note the lack of announcement for 0.01: I wasn't too proud of it, so I think I only sent a note to everybody who had shown interest.

From: torvalds@klaava.Helsinki.FI (Linus Benedict Torvalds)
Newsgroups: comp.os.minix
Subject: Free minix-like kernel sources for 386-AT
Message-ID: <1991Oct5.054106.4647@klaava.Helsinki.FI>
Date: 5 Oct 91 05:41:06 GMT
Organization: University of Helsinki
Do you pine for the nice days of minix-1.1, when men were men and wrote their own device drivers? Are you without a nice project and just dying to cut your teeth on a OS you can try to modify for your needs? Are you finding it frustrating when everything works on minix? No more all- nighters to get a nifty program working? Then this post might be just for you :-) As I mentioned a month(?) ago, I'm working on a free version of a minix-lookalike for AT-386 computers. It has finally reached the stage where it's even usable (though may not be depending on what you want), and I am willing to put out the sources for wider distribution. It is just version 0.02 (+1 (very small) patch already), but I've successfully run bash/gcc/gnu-make/gnu-sed/compress etc under it.
Sources for this pet project of mine can be found at nic.funet.fi (128.214.6.100) in the directory /pub/OS/Linux. The directory also contains some README-file and a couple of binaries to work under linux (bash, update and gcc, what more can you ask for :-). Full kernel source is provided, as no minix code has been used. Library sources are only partially free, so that cannot be distributed currently. The system is able to compile "as-is" and has been known to work. Heh. Sources to the binaries (bash and gcc) can be found at the same place in /pub/gnu. ALERT! WARNING! NOTE! These sources still need minix-386 to be compiled (and gcc-1.40, possibly 1.37.1, haven't tested), and you need minix to set it up if you want to run it, so it is not yet a standalone system for those of you without minix. I'm working on it. You also need to be something of a hacker to set it up (?), so for those hoping for an alternative to minix-386, please ignore me. It is currently meant for hackers interested in operating systems and 386's with access to minix. The system needs an AT-compatible harddisk (IDE is fine) and EGA/VGA. If you are still interested, please ftp the README/RELNOTES, and/or mail me for additional info. I can (well, almost) hear you asking yourselves "why?". Hurd will be out in a year (or two, or next month, who knows), and I've already got minix. This is a program for hackers by a hacker. I've enjouyed doing it, and somebody might enjoy looking at it and even modifying it for their own needs. It is still small enough to understand, use and modify, and I'm looking forward to any comments you might have. I'm also interested in hearing from anybody who has written any of the utilities/library functions for minix. If your efforts are freely distributable (under copyright or even public domain), I'd like to hear from you, so I can add them to the system. I'm using Earl Chews estdio right now (thanks for a nice and working system Earl), and similar works will be very wellcome. Your (C)'s will of course be left intact. Drop me a line if you are willing to let me use your code.
Linus
PS. to PHIL NELSON! I'm unable to get through to you, and keep getting "forward error - strawberry unknown domain" or something.


Well, it doesn't sound like much of a system, does it? It did work, and some people even tried it out. There were several bad bugs (and there was no floppy-driver, no VM, no nothing), and 0.02 wasn't really very useable.
0.03 got released shortly thereafter (max 2-3 weeks was the time between releases even back then), and 0.03 was pretty useable. The next version was numbered 0.10, as things actually started to work pretty well. The next post gives some idea of what had happened in two months more...

From: torvalds@klaava.Helsinki.FI (Linus Benedict Torvalds)
Newsgroups: comp.os.minix
Subject: Re: Status of LINUX?
Summary: Still in beta
Message-ID: <1991Dec19.233545.8114@klaava.Helsinki.FI>
Date: 19 Dec 91 23:35:45 GMT
Organization: University of Helsinki
In article <469@htsa.htsa.aha.nl> miquels@maestro.htsa.aha.nl
(Miquel van Smoorenburg) writes:
>Hello *,
> I know some people are working on a FREE O/S for the 386/486,
>under the name Linux. I checked nic.funet.fi now and then, to see what was
>happening. However, for the time being I am without FTP access so I don't
>know what is going on at the moment. Could someone please inform me about it?
>It's maybe best to follow up to this article, as I think that there are
>a lot of potential interested people reading this group. Note, that I don't
>really *have* a >= 386, but I'm sure in time I will.
Linux is still in beta (although available for brave souls by ftp), and has reached the version 0.11. It's still not as comprehensive as 386-minix, but better in some respects. The "Linux info-sheet" should be posted here some day by the person that keeps that up to date. In the meantime, I'll give some small pointers. First the bad news: - Still no SCSI: people are working on that, but no date yet. Thus you need a AT-interface disk (I have one report that it works on an EISA 486 with a SCSI disk that emulates the AT-interface, but that's more of a fluke than anything else: ISA+AT-disk is currently the hardware setup)


As you can see, 0.11 had already a small following. It wasn't much, but it did work.

- still no init/login: you get into bash as root upon bootup.


That was still standard in the next release.

- although I have a somewhat working VM (paging to disk), it's not ready yet. Thus linux needs at least 4M to be able to run the GNU binaries (especially gcc). It boots up in 2M, but you cannot compile.


I actually released a 0.11+VM version just before Christmas -91: I didn't need it myself, but people were trying to compile the kernel in 2MB and failing, so I had to implement it. The 0.11+VM version was available only to a small number of people that wanted to test it out: I'm still surprised it worked as well as it did.

- minix still has a lot more users: better support.
- it hasn't got years of testing by thousands of people, so there are probably quite a few bugs yet. Then for the good things..
- It's free (copyright by me, but freely distributable under a very lenient copyright)


The early copyright was in fact much more restrictive than the GNU copyleft: I didn't allow any money at all to change hands due to linux. That changed with 0.12.

- it's fun to hack on.
- /real/ multithreading filesystem.
- uses the 386-features. Thus locked into the 386/486 family, but it makes things clearer when you don't have to cater to other chips.
- a lot more... read my .plan. /I/ think it's better than minix, but I'm a bit prejudiced. It will never be the kind of professional OS that Hurd will be (in the next century or so :), but it's a nice learning tool (even more so than minix, IMHO), and it was/is fun working on it.
Linus
(torvalds@kruuna.helsinki.fi)
---- my .plan --------------------------
Free UNIX for the 386 - coming 4QR 91 or 1QR 92. The current version of linux is 0.11 - it has most things a unix kernel needs, and will probably be released as 1.0 as soon as it gets a little more testing, and we can get a init/login going. Currently you get dumped into a shell as root upon bootup. Linux can be gotten by anonymous ftp from 'nic.funet.fi' (128.214.6.100) in the directory '/pub/OS/Linux'. The same directory also contains some binary files to run under Linux. Currently gcc, bash, update, uemacs, tar, make and fileutils. Several people have gotten a running system, but it's still a hackers kernel. Linux still requires a AT-compatible disk to be useful: people are working on a SCSI-driver, but I don't know when it will be ready. There are now a couple of other sites containing linux, as people have had difficulties with connecting to nic. The sites are:
Tupac-Amaru.Informatik.RWTH-Aachen.DE (137.226.112.31): directory /pub/msdos/replace
tsx-11.mit.edu (18.172.1.2): directory /pub/linux
There is also a mailing list set up 'Linux-activists@niksula.hut.fi'. To join, mail a request to 'Linux-activists-request@niksula.hut.fi'.
It's no use mailing me: I have no actual contact with the mailing-list (other than being on it, naturally).
Mail me for more info:
Linus
(torvalds@kruuna.Helsinki.FI)
0.11 has these new things:
- demand loading
- code/data sharing between unrelated processes
- much better floppy drivers (they actually work mostly)
- bug-corrections
- support for Hercules/MDA/CGA/EGA/VGA
- the console also beeps (WoW! Wonder-kernel :-)
- mkfs/fsck/fdisk
- US/German/French/Finnish keyboards
- settable line-speeds for com1/2


As you can see: 0.11 was actually stand-alone: I wrote the first mkfs/fsck/fdisk programs for it, so that you didn't need minix any more to set it up. Also, serial lines had been hard-coded to 2400bps, as that was all I had.

Still lacking:
- init/login
- rename system call
- named pipes
- symbolic links

Well, they are all there now: init/login didn't quite make it to 0.12, and rename() was implemented as a patch somewhere between 0.12 and 0.95. Symlinks were in 0.95, but named pipes didn't make it until 0.96.

0.12 will probably be out in January (15th or so), and will have:
- POSIX job control (by tytso)
- VM (paging to disk)
- Minor corrections


Actually, 0.12 was out January 5th, and contained major corrections. It was in fact a very stable kernel: it worked on a lot of new hardware, and there was no need for patches for a long time. 0.12 was also the kernel that "made it": that's when linux started to spread a lot faster. Earlier kernel releases were very much only for hackers: 0.12 actually worked quite well.


Note: The following document is a reply by Linus Torvalds, creator of Linux, in which he talks about his experiences in the early stages of Linux development.

To: Linux-Activists@BLOOM-PICAYUNE.MIT.EDU
From: torvalds@klaava.Helsinki.FI (Linus Benedict Torvalds)
Subject: Re: Writing an OS - questions !!
Date: 5 May 92 07:58:17 GMT
In article <10685@inews.intel.com> nani@td2cad.intel.com (V.Narayanan) writes:
Hi folks,
For quite some time this "novice" has been wondering as to how one goes about the task of writing an OS from "scratch". So here are some questions, and I would appreciate if you could take time to answer 'em.
Well, I see someone else already answered, but I thought I'd take on the linux-specific parts. Just my personal experiences, and I don't know how normal those are.

1) How would you typically debug the kernel during the development phase?
Depends on both the machine and how far you have gotten on the kernel: on more simple systems it's generally easier to set up. Here's what I had to do on a 386 in protected mode. The worst part is starting off: after you have even a minimal system you can use printf etc, but moving to protected mode on a 386 isn't fun, especially if you at first don't know the architecture very well. It's distressingly easy to reboot the system at this stage: if the 386 notices something is wrong, it shuts down and reboots - you don't even get a chance to see what's wrong. Printf() isn't very useful - a reboot also clears the screen, and anyway, you have to have access to video-mem, which might fail if your segments are incorrect etc. Don't even think about debuggers: no debugger I know of can follow a 386 into protected mode. A 386 emulator might do the job, or some heavy hardware, but that isn't usually feasible. What I used was a simple killing-loop: I put in statements like
die:
	jmp die
at strategic places. If it locked up, you were ok, if it rebooted, you knew at least it happened before the die-loop. Alternatively, you might use the sound io ports for some sound-clues, but as I had no experience with PC hardware, I didn't even use that. I'm not saying this is the only way: I didn't start off to write a kernel, I just wanted to explore the 386 task-switching primitives etc, and that's how I started off (in about April-91). After you have a minimal system up and can use the screen for output, it gets a bit easier, but that's when you have to enable interrupts. Bang, instant reboot, and back to the old way. All in all, it took about 2 months for me to get all the 386 things pretty well sorted out so that I no longer had to count on avoiding rebooting at once, and having the basic things set up (paging, timer-interrupt and a simple task-switcher to test out the segments etc).

2) Can you test the kernel functionality by running it as a process on a different OS?
Wouldn't the OS(the development environment) generate exceptions in cases when the kernel (of the new OS) tries to modify 'priviledged' registers? Yes, it's generally possible for some things, but eg device drivers usually have to be tested out on the bare machine. I used minix to develop linux, so I had no access to IO registers, interrupts etc. Under DOS it would have been possible to get access to all these, but then you don't have 32-bit mode. Intel isn't that great - it would probably have been much easier on a 68040 or similar. So after getting a simple task-switcher (it switched between two processes that printed AAAA... and BBBB... respectively by using the timer-interrupt - Gods I was proud over that), I still had to continue debugging basically by using printf. The first thing written was the keyboard driver: that's the reason it's still written completely in assembler (I didn't dare move to C yet - I was still debugging at about instruction-level). After that I wrote the serial drivers, and voila, I had a simple terminal program running (well, not that simple actually). It was still the same two processes (AAA..), but now they read and wrote to the console/serial lines instead. I had to reboot to get out of it all, but it was a simple kernel. After that is was plain sailing: hairy coding still, but I had some devices, and debugging was easier. I started using C at this stage, and it certainly speeds up developement. This is also when I start to get serious about my megalomaniac ideas to make "a better minix that minix". I was hoping I'd be able to recompile gcc under linux some day... The harddisk driver was more of the same: this time the problems with bad documentation started to crop up. The PC may be the most used architecture in the world right now, but that doesn't mean the docs are any better: in fact I haven't seen /any/ book even mentioning the weird 386-387 coupling in an AT etc (Thanks Bruce). After that, a small filesystem, and voila, you have a minimal unix. Two months for basic setups, but then only slightly longer until I had a disk-driver (seriously buggy, but it happened to work on my machine) and a small filesystem. That was about when I made 0.01 available (late august-91? Something like that): it wasn't pretty, it had no floppy driver, and it couldn't do much anything. I don't think anybody ever compiled that version. But by then I was hooked, and didn't want to stop until I could chuck out minix.

3) Would new linkers and loaders have to be written before you get a basic kernel running?
All versions up to about 0.11 were crosscompiled under minix386 - as were the user programs. I got bash and gcc eventually working under 0.02, and while a race-condition in the buffer-cache code prevented me from recompiling gcc with itself, I was able to tackle smaller compiles. 0.03 (October?) was able to recompile gcc under itself, and I think that's the first version that anybody else actually used. Still no floppies, but most of the basic things worked. Afetr 0.03 I decided that the next version was actually useable (it was, kind of, but boy is X under 0.96 more impressive), and I called the next version 0.10 (November?). It still had a rather serious bug in the buffer-cache handling code, but after patching that, it was pretty ok. 0.11 (December) had the first floppy driver, and was the point where I started doing linux developement under itself. Quite as well, as I trashed my minix386 partition by mistake when trying to autodial /dev/hd2. By that time others were actually using linux, and running out of memory. Especially sad was the fact that gcc wouldn't work on a 2MB machine, and although c386 was ported, it didn't do everything gcc did, and couldn't recompile the kernel. So I had to implement disk-paging: 0.12 came out in January (?) and had paging by me as well as job control by tytso (and other patches: pmacdona had started on VC's etc). It was the first release that started to have "non-essential" features, and being partly written by others. It was also the first release that actually did many things better than minix, and by now people started to really get interested. Then it was 0.95 in March, bugfixes in April, and soon 0.96. It's certainly been fun (and I trust will continue to be so) - reactions have been mostly very positive, and you do learn a lot doing this type of thing (on the other hand, your studies suffer in other respects :)
Linus

Credits:
www.li.org/linuxhistory.php

Document Changes:
August 1, 2001: First compiled version.

Do you need Linux?

Eventually the revolutionaries become the established culture, and then what will they do?
- Linus Torvalds

It is apt that I start such a topic with a quote from the man who was one of the biggest influences in the growth of 'alternate' computing. If you are a person who keeps reasonably up to date with the world of the Internet, and the world of computers in general, you would have heard that there is this thing called Linux that is making quite a few otherwise sane people rather crazy.

You might also have realized that a lot of this crazy ire is directed at a company in Redmond, and at a person reported to have been in the garage that led to the whole thing. What is all this? Why should you bother? Do you really need Linux?

A Revolution Begins

Before we delve into questions such as these, a little background helps develop a perspective. And it never hurts.

The commercial world prevalent in times such as ours has made one concept very important: 'return on investment'. Such a concept has left in its wake a tremendous amount of muck and nonsense. Conventional wisdom says that if you spent some time doing something, your effort must be adequately accounted for. It therefore does not make sense to give away hours of your time, which you could have spent in more profitable ways, as a faceless, nameless entity working so that others' world is made simpler and more productive.

That is exactly what a number of people are now doing, as the many programmers behind the present-day miracle called Linux, whose users are an estimated 7 to 8 million. It all started in 1991, when Linus Torvalds, a 21-year-old computer science student at the University of Helsinki, decided that Minix, the Unix-look-alike operating system he was running, was not good enough. He was pretty sure he could write something better, so in a very brave and foolish attempt he started to write his own. But in doing this he made an important decision: to involve people. He took the help of a number of the newsgroups of the time to ask for ideas and guidance.

Then he did something unbelievable. He put up all his work on the Internet, source code and all, and asked people to download it and modify it to increase its functionality. What happened then was something that could not have been predicted. The 'netlanders' decided to fall in love with this unique experiment of writing a fully free and multifunctional version of 'minix'. You will find a number of pre-1.0 release posts by Linus in the addendum above. The step from recognizing that he needed the help of unknown people for his project, to deciding to put up the work as a tribute to all those who contributed, changed the face of things to come.

He was not alone in this. He was carrying with him the ideas, hopes and aspirations of a number of other people in his endeavor. But that was not what was driving him at the time; in fact, Linus himself was quite deprecating about his work back then. It started as the unique experiment of a man who was not satisfied with the environment he found himself in, and decided to do something about it.

With the help of hundreds of volunteers the progress of Linux continued. The early 1.0 releases were completed, and the kernel started boasting the features of modern software. In parallel, the utilities of the GNU project were ported to Linux. So now we have GNU/Linux as one of the best replacements for the standard desktop OS, Windows.

What is Linux?

Contrary to what is commonly supposed, Linux is not the complete operating system, somewhat as Windows is not the total system that you work on. Linux refers to the kernel that is run on the system. The kernel is the most important program in the Linux setup: a piece of very robust and beautifully written code that forms the interface between every other program and the hardware that lies beneath. It is the most important component in the system, and performs a number of tasks that allow the other programs to run easily without caring too much about various critical yet mundane chores.
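To make the kernel's role concrete, here is a minimal sketch in C (purely illustrative; the file path is a made-up example) of how a user program never drives the hardware itself, but asks the kernel to act on its behalf through system calls:

/* A user program reaching the hardware only through the kernel.
   Every call below ends up as a system call; the kernel alone
   drives the screen and the disk. */
#include <fcntl.h>    /* open() and its O_* flags */
#include <unistd.h>   /* write(), close() */

int main(void)
{
    const char msg[] = "hello from user space\n";

    /* Ask the kernel to put bytes on standard output (fd 1). */
    write(1, msg, sizeof(msg) - 1);

    /* The same interface works for a disk file: the kernel hides
       the differences between devices behind file descriptors.
       (/tmp/demo.txt is a hypothetical path.) */
    int fd = open("/tmp/demo.txt", O_WRONLY | O_CREAT, 0644);
    if (fd >= 0) {
        write(fd, msg, sizeof(msg) - 1);
        close(fd);
    }
    return 0;
}

The point is not the code but the division of labour: the program only names what it wants done, and the kernel performs the critical yet mundane work of actually doing it.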

The Linux system, on the other hand, that we normally refer to as Linux, is technically called the GNU/Linux system. GNU is a project that was started with a view to making software totally free and open source. Many of the utilities that come bundled with a standard Linux installation are utilities written under the GNU project. Hence the combination of GNU and Linux is responsible for the revolution that is taking place now.

What revolution?

The trend of the PC was set by the company called Microsoft, with the release of DOS, which was a far departure from the existing scary entities running very powerful yet arrogant programs. The computer till then was limited to huge monsters, costing heaven and earth, manned by technicians, with no concept of user friendliness. Into such an environment, coinciding with the advent of smaller, cheaper, more powerful desktop machines, the disk operating system simplified the system so that everyone could use it. Therein lay the reason for the success of this modus operandi. But with so blinding a focus on usability, somewhere along the line the basic reason for existence was forgotten. What started as a movement for the inclusion of the masses in computing turned into a sorry excuse for sloppy programming, simplistic uniform interfaces and a general degradation of the entire computing cycle.

The world had already seen, in the form of the Macintosh, an example of slick user interfaces. It had already seen, in the Unix environment, power and stability. But under the guise of operability, it saw excuses. The Windows operating system has maintained its look and feel since Windows 3.1, its first release with a GUI. The release of Windows 95 was met with looks of astonishment by the members of the Apple tribe, who had had that kind of operability almost ten years earlier. But for a sheer chance of location, Windows would not have made it to where it is now.

With its single-minded fanaticism for easing things for the user, Windows has done something very peculiar. It has made the user end very easy and simple to use. It has also made basic programming so easy that it is now possible for non-programmers to program too. But with this heavy bias it has made actual implementation by a professional programmer ridiculously complicated. The whole of the Windows software architecture is so ridden with patches and workarounds that no coherent picture can be made of it.

This usability also introduces the important concept of backward compatibility. In a word, this means that a Windows programmer can never leave his past behind him and move on. No matter what he does, he must support all that he has done, because of the remote possibility that someone might be using the software and might be inconvenienced. Add to this the sloppiness Windows code encourages: not only do non-programmers program, they add to the headaches of all those who use their code. The DLL-based structure, supposed to make life easier, became the trap that might just break Windows apart.

This has introduced a vast intellectual gap between code developers and users. It has necessitated such a difference in perception of the machine between the two that the user hands over his machine to the developer, completely, when he wants a new program installed. With no credible system of security, it is possible for just about anyone to take control of a machine with a program written to do just that. Getting a new program means running an installation program, and what it does is so opaque to the user that he is not even aware whether the program is a beneficial one or not. In fact, even a discerning user might not have the means to know more. The sheer bias against involving the user in the maintenance of his own machine has led to phenomena such as the proliferation of viruses. No one knows, behind what interface lies what program that might be responsible for what damage.

The proliferation of the Windows environment has also led to a number of erroneous beliefs about the personal computer. One of them is the idea that a PC is personal; by implication no one else works on it, and by further implication there is no concept of privileges or restrictions. By yet further implication, viruses and trojans are a commonplace occurrence, one to be rectified rather than prevented. This hardly reflects reality. Even a truly personal system is used by more than one person. And the lack of security safeguards has made it impossible for the owner to put in place any checks to safeguard his own machine; the small sketch below shows what such a check looks like where it does exist.
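For contrast, here is a minimal sketch in C of what such a safeguard looks like on a Unix-style system, where the kernel itself enforces file permissions (the path is just a typical example, assuming a standard setup where /etc/passwd is writable only by root):

/* An ordinary user asking the kernel to open a system file for
   writing is simply refused: the permission check lives in the
   kernel, not in the application. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/etc/passwd", O_WRONLY);
    if (fd < 0) {
        /* Expected as a normal user: EACCES, "Permission denied". */
        printf("open failed: %s\n", strerror(errno));
    } else {
        printf("opened for writing - you must be root\n");
        close(fd);
    }
    return 0;
}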

A second is the automatic acceptance of the irrationality of the computer. "It just happens that way" is a view a lot of Windows users hold. If restarting Windows works, then that is the best method, and the only method. To work around a certain bug in the software, hit-and-miss policies are devised and stuck to. This gives the computer an aura of art, an aura of black magic, and makes the average user even more dependent on the system and hence on the vagaries of Windows.

The third, and a very serious one, is the acceptance of the 'fact' that crashes and instability are an integral part of a system. Reinstalling Windows is a little trick right at the top of the charts. This has also led to a sheer drop in the performance demanded of a system. "Too many windows cause a crash" is absurd because, being a multitasking operating system equipped with paging techniques, Windows was supposed to deal with exactly such a demanding environment. Windows has been successful in shifting the onus of its programming blunders onto the users of its systems.

So why have we been accepting all this? A very valid question. There are several reasons. The first is the fact that Windows was there first for the desktop user. Another is the tremendous job done by the public relations team at Redmond. What today go about under the names of Internet viruses and email worms should more accurately have been named Outlook worms and Windows viruses. Any simple analysis would reveal that a host of these problems commonly associated with the Internet in fact afflict those running Microsoft software. Further, if it were not for Microsoft's carefully worded user license agreement, which holds the company blameless for absolutely anything, they would probably have been awash in class-action lawsuits by now.

This then is the revolution: the rebirth of the good old days of computing in a new avatar; the reincarnation of the power and stability of good old Unix on the desktop; the taming of the shrew, so to speak. What was till yesterday the personal domain of students of computer science is now being opened to the people at large. In short, it is a chance for the average user to truly get the best he can afford; a chance for the user to be able to use the computer to its fullest capacity.

Does it affect me?

The development of the desktop for the Linux system is probably the main reason this revolution has become a distinct possibility. Linux offers a lot to every section of the computer-using populace.

Are you a Novice user?

You will benefit from a new user interface. Linux offers all that you could possibly want from the desktop. There are tools that meet all the requirements of the office that you have been used to. StarOffice is a suite of tools that not only replicates all that you have been getting from the Microsoft Office suite, but also helps you in the process of migration by providing compatibility with the existing Office formats, so that you can safely convert all your existing information to the new OS. There are also applications that allow you to create presentations, create and maintain spreadsheets, and of course check mail, as well as a host of other applications, like personal information managers, that help you increase your productivity.

Are you an Inquisitive user?

You will benefit from the tremendous amount of information that is available on the topic of Linux and the related information about programming, API calls and so on. If you seriously want to learn more about the machine you are dealing with, Linux offers you a look beneath the hood and lets you unleash the power of your box. It is a hands-on approach to building your own personal machine. The stability of the system also allows you to experiment. As a user you will have access to all those programs that are normally out of bounds for a Windows user. You get to install and administer web and FTP servers, mail servers, and a host of other applications that give you a glimpse of the true power of Linux. As the icing on the cake, you will be privileged to look at the actual source code that runs your system.

Are you a Power user?

You will fall in love with Linux. The early Unix machines were built with networking in mind. All applications were written without the assumption that their execution would be limited to a single terminal and one user. You will have a host of applications that allow web access. You will be able to run and maintain servers for Internet usage. You will be able to configure a variety of services that can be used by others, even if they are those unfortunates who run Windows. Web, mail (POP and SMTP), FTP, telnet, rsh, proxy servers and others are some of the services that can be configured to run across networks; a small illustration of this network-first design follows below. Given the status of Linux as an alternate operating system, it comes bundled with a number of tools that allow cross-platform access, including remote administration.
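As a flavour of that network-first heritage, here is a minimal sketch in C of the classic Unix socket pattern on which web, mail and FTP servers are all built (the port number and greeting are invented for illustration, and error handling is pared down):

/* The canonical Unix service skeleton: create a socket, bind it
   to a port, listen, and serve whoever connects over the network. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int srv = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY); /* any interface: network-first */
    addr.sin_port        = htons(7777);       /* arbitrary example port */

    if (bind(srv, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }
    listen(srv, 5);

    for (;;) {
        int client = accept(srv, NULL, NULL); /* one client at a time */
        if (client < 0)
            continue;
        const char msg[] = "hello from a tiny Unix service\n";
        write(client, msg, sizeof(msg) - 1);  /* same write() as for files */
        close(client);
    }
}

Note that the write() to the client is the very same call used for files and terminals: to a Unix program, the network is just another file descriptor.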

Are you a Programmer?

In spite of what Redmond tells you, you believe, as a true, hardcore programmer, that programming in C is the best way to do it. Or you may indeed have imbibed the classic ways of Larry Wall. Whatever the case, Linux offers you programming options that compare with the best in the business. That simple old program in C, which you believed was the most important thing to ever find its way through a keyboard, is suddenly useful again. Not only does Linux offer one of the richest sets of API calls an OS can offer; with the evolution of the new graphical interfaces, it also has a number of functions that let you program its tremendously flexible and powerful GUI. Toolkits such as gtk+ allow programmers to access the power of the GUI through native C code, as the sketch below shows. And if you thought you were better off not having to write code for mundane tasks like the creation of interfaces, Linux comes with tools that generate interfaces with a click-and-drag approach, à la Visual Basic or Visual C++.
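As a taste of native C GUI programming, here is a minimal sketch written against the GTK+ 2.x API as commonly documented (treat it as illustrative rather than as a tutorial for any particular GTK+ version):

/* A window with a single button; clicking it prints a message.
   gtk_main() hands control to the toolkit's event loop, which
   dispatches clicks to the connected callback. */
#include <gtk/gtk.h>

static void on_click(GtkWidget *widget, gpointer data)
{
    g_print("Hello from native C!\n");
}

int main(int argc, char *argv[])
{
    gtk_init(&argc, &argv); /* let GTK+ parse its own options */

    GtkWidget *window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
    GtkWidget *button = gtk_button_new_with_label("Hello, Linux");

    g_signal_connect(button, "clicked", G_CALLBACK(on_click), NULL);
    g_signal_connect(window, "destroy", G_CALLBACK(gtk_main_quit), NULL);

    gtk_container_add(GTK_CONTAINER(window), button);
    gtk_widget_show_all(window);

    gtk_main();
    return 0;
}

Building it is typically a one-liner of the form gcc hello.c `pkg-config --cflags --libs gtk+-2.0`, assuming the development headers are installed.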

Can I afford it?

A tricky question, really. It depends on what you mean by 'afford'. But the answer really can be 'yes'.

If by 'afford' you mean the time required to change to a new operating system, the answer certainly depends a lot on you as the user. There are currently a number of distributions that make installation and maintenance an easy chore by providing friendly, graphical interfaces wherever possible. Work in this field is still in progress, so even as we speak newer, friendlier interfaces are being released that will probably take user friendliness to a new level.

All said and done, there do exist problems for users. Linux is still under development. All that you get is the work of a number of people who did it in their spare time and have allowed you to use it for free. But it is these same people who are always available to help you. There is a lot of accessible information regarding all aspects of the Linux system. If you can afford some time to look things up and solve your own problems, there is nothing to beat that pleasure. A little bit of information, a little bit of spare time, a little bit of creativity and the heart to experiment are all that is required for you to use Linux.

If by 'afford' you mean the price, well, what can I say other than that the whole of the Linux system is available free of charge? Most components are available under a very flexible licensing scheme called the GPL. This scheme does not prevent the commercialization of Linux components, but provides a mechanism that protects the rights of the creators while still allowing you to tinker with the source code. There are also a number of commercial versions that let you order a Linux distribution at a very reasonable price (unlike the heaven and earth you have to pay for those from Redmond) and provide you with a limited amount of after-sales service too.

When do I start... now?

You do not need to be an expert to use Windows. At the same time no matter how much you use Windows, you will never be called an expert. This is because the Windows environment was not created to give you expertise of any kind. Again you do not need to be an expert to start using Linux, but once you are done with it, you will be one hell of an expert.

Some of the early pioneers

In the early 1980s, Richard Stallman founded the GNU Project, an attempt to build a free operating system based on Unix, and the Free Software Foundation, dedicated to promoting open source software. He's also the brains behind the GNU General Public License, or "copylefting."

Larry Wall wrote the first version of Perl in 1987.

Andrew Tanenbaum released Minix, the inspiration for Linux, in 1987.

David Greenman served as principal architect on the FreeBSD team in 1993.

Brian Behlendorf was the chief engineer of the team that built the Apache Web server in 1995.

Document Changes
August 01, 2001: First version.
April 02, 2009: Minor updates & corrections.