
January 19, 2012

Extending Tomato with Optware

I had waxed eloquent about the flexibility, freedom and capabilities afforded by open source tools in general, and the TomatoUSB firmware in particular. Little did I know that this was just the tip of the iceberg of capabilities offered by the third-party firmware on my Netgear router.

The big extension to the firmware's core capabilities comes via the installation of Optware. At its core, Optware is an advanced package manager, built for distributing software packages across a number of platforms, including the TomatoUSB router firmware.

Optware comes with a variety of packages compiled and available in its repository. This repository extends the capabilities of the router firmware, from the stripped-down, small-footprint tools it ships with to their full-featured Linux-box counterparts.

Tomato has inbuilt support for Optware, but it took a bunch of work to prepare the setup. In particular, two things had to be done:

  • Format the connected storage as EXT3. My terabyte RAID had originally been formatted as NTFS. While TomatoUSB has support for NTFS, it is slow, painful, and missing fundamental capabilities - not something that lends itself to Optware.
  • Figure out where /opt is going to be mounted.

There is no easy way to convert NTFS to EXT3, other than the slow and methodical approach: take the files off the NTFS file system, format the disk as EXT3, and copy the files back. There are several tutorials out there, like this one - the only tweak was that I ended up using the mkfs.ext3 script available on the router to format the disk.
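For reference, the whole conversion boils down to something like the following, run from the router's shell. This is only a sketch - the device node (/dev/sda1) and the backup destination (/mnt/backup) are illustrative assumptions and will differ per setup:

# copy everything off to a second drive or machine first (assumed here at /mnt/backup)
cp -a /mnt/Teranarchy/. /mnt/backup/
# unmount and reformat the partition as EXT3 (assuming the array shows up as /dev/sda1)
umount /mnt/Teranarchy
mkfs.ext3 /dev/sda1
# remount and copy everything back
mount -t ext3 /dev/sda1 /mnt/Teranarchy
cp -a /mnt/backup/. /mnt/Teranarchy/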

As an aside, the cheap terabyte RAID survived and is thriving through all of this - including as an EXT3-formatted drive.

Now, mounting storage on /opt, where Optware will be installed, seemed tricky at first but ended up being pretty simple. The reason it seemed tricky was that I had created only one partition on the storage when I formatted it as EXT3. My worry was that I'd have to resize the partition and add a new one, which could then be mounted on /opt.

Turns out, you can mount the same device on multiple mount points. And given that I was already automounting the USB device, I figured all I had to do was bind-mount a sub-folder onto /opt. Adding the following in the “Run after mounting” script box did the trick.

if [ -d /mnt/Teranarchy/optware ]; then
    mount -o bind /mnt/Teranarchy/optware /opt
fi
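To confirm the bind mount took effect after a remount or reboot, a quick check from the shell is enough (purely illustrative):

mount | grep /opt
ls /opt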

Once I had space available on /opt, installing Optware was simple - as simple as running the following in a shell after logging in via Telnet or SSH.

wget http://tomatousb.org/local--files/tut:optware-installation/optware-install.sh -O - | tr -d '\r' > /tmp/optware-install.sh
chmod +x /tmp/optware-install.sh
sh /tmp/optware-install.sh

That is it. Optware does a great job of obtaining and installing all the packages. And because Tomato already has the correct folders in the $PATH variable, all the tools and capabilities are available instantly from any shell.
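For instance, refreshing the package list and pulling in an extra tool looks something like the following - the package name here is only an example, and what is actually available depends on the Optware feed in use:

ipkg update
ipkg list | grep -i nano
ipkg install nano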

Now that I have Optware, it is time to start doing something more interesting. Like installing a VPN on the router. Coming up next.

November 03, 2011

Evercube

Came across this pretty awesome piece of hardware - a well-built, open source Network Attached Storage (NAS) device - called Evercube.

The design is open hardware, available under a Creative Commons Attribution license. And it looks good as well - a stainless steel 6-inch cube housing five 2.5" hard drives - not the big desktop drives, but the smaller laptop ones.

Which makes it quiet enough to run in the living room.

But then, it is expensive. So - maybe not the best use of the money, but it never hurts to know that there are open source options that look good.

November 22, 2004

The Philosophy of the Free & Open

And how it impacts us.

Open Source is not about Linux. It is not about Apache [1], MySQL [2] or GNOME [3]. In fact, it is not about software at all. Rather, it is a philosophy and a belief. A belief that is not only old but rather clichéd, and goes: “The pen is mightier than the sword”.

The Internet has breathed new life into this saying, granting it an awesome power. Power enough that a few individuals, scattered across vast distances, armed with nothing but knowledge, are now planning world domination.

This article is an idle walk down the annals of history and the corridors of philosophy [4], to play with the questions “who” and “why”. Who are these people, and why-o-why are they doing what they are doing.

Free as in Freedom

The words “free software” or “open source” typically evoke responses saying, “It is available without charge”. While true, it is a quirk with the English language [5] that prevents us from seeing the other, truer meaning. English uses the same word “free” to denote both “without a price to pay” and “freedom to do what you wish to do”. It is the second meaning that truly symbolizes all that this movement stands for. The interpretation of this freedom makes one appreciate that this philosophy is not restricted to software at all. It in fact extends a lot wider.

Imagine a bunch of kids given a huge white canvas, spotlessly clean, and spray cans of red paint. More often than not, the kids will spray away randomly on the canvas. What if, instead, the kids sat down and started to painstakingly detail the verses of the Iliad or the Ramayana? This is seemingly inconceivable because of the apparent human tendency to prefer the playful to the ordered, which is amplified to an extreme in a group. Directing a random group without either a stick or a carrot seems impossible.

However, this impossibility is precisely what is manifesting over at Wikipedia [6]. Wikipedia is an “Open Encyclopedia” where anyone can contribute to any article, without even being logged in. Furthermore, any change perpetrated is visible instantly on the web, without being checked, corrected or in any other fashion moderated by anyone. Given this absolute freedom, you would expect chaos – errors in content, clumsiness, information biases, ineptitude or plain vanilla vandalism. However, Wikipedia is one of the web’s most searched encyclopedias, channeling the expertise of thousands to millions more.

Slashdot [7] is another example of this channeled freedom. Despite its obvious biases and pedigree, it remains by far the best example of a publicly moderated discussion board.

The philosophy that drives a Linux hacker is the same one that drives a Wikipedia contributor. Freedom is not always a bad thing. It does not always result in chaos; it begets responsibility and motivates productivity. This freedom is a core tenet of the philosophy of the Open Source movement. I could go on with other examples, like the newsgroups [8] or open courseware [9], but that would be unnecessary. Instead, let's spend some time tracing the roots of the free and open source philosophy.

In the beginning was the command line

With apologies to Neal Stephenson [10], we are talking about a time that was not too long ago. About three decades ago, the computer meant the PDP-10 [11] or a teletype-fed mainframe. Programming was about thinking in bits and bytes, while programs were meant to be shared, understood, debated upon and improved. Out of thin air, using 0s and 1s, a new science was being invented. C and C++ [12] were being developed, Unix was being coded [13], and software and hardware standards were being set.

The times were reminiscent of the Wild West, with its own tight-knit groups, raw excitement and brave gun-wielding heroes. The difference was that the program now replaced the gun and the mainframe was the battlefield. It was this arena that the corporation was now entering. With a promise to take computing to the masses, companies were doing something that was unacceptable to the pioneers – “selling” software.

Richard Stallman [14] was one of those early pioneers. He believed that software was a universal tool and that its source was its soul. Closing source or selling software was utterly unacceptable to him. And he was prepared to do something about it. In 1984, the same year Apple Computer released the Macintosh, Stallman set up the GNU Project [15].

GNU stands for GNU’s Not Unix, and its vision, somewhat ironically, was to provide a full, free version of UNIX. In 1984, UNIX was the predominant OS, available in a mind-boggling variety of commercial flavors, each fragmented from and incompatible with the others. The Personal Computer as a product was almost non-existent then, and as a concept it was still a joke. GNU therefore sought to “liberate” the entire computing world by providing the fundamental tool – the Unix OS – for free.

UNIX-like operating systems are built of two basic parts – the kernel and the utilities. The kernel is the core, which handles the very low-level interactions with the hardware, memory and the processor. It provides only very basic functionality, which is turned into something useful by the utilities. UNIX, thanks to its rich heritage, has a multitude of tools for every activity from network management to text processing.

While some members of the GNU Project started recreating the rich toolset, others started work on the kernel, called the HURD [16]. In time the tools started rolling out, each free, available with source, providing functionality similar to or better than that of the various commercial Unices. The development of the kernel, however, was heading nowhere. The late 1980s saw the advent [17] of the true Personal Computer – cheap Intel hardware running DOS or the early Windows.

Without a kernel, and with the mainframe a rapidly dying breed unable to survive the onslaught of the PC, the GNU movement suddenly faced irrelevance.

In 1991, Linus Torvalds, a 21-year-old computer science student at the University of Helsinki, decided that the operating system on his personal computer, Minix, a Unix look-alike, was not good enough. He was pretty sure he could write something better and attempted to code his own. But in doing this he turned to the Internet for help and guidance [18]. He also put the source code of his attempts back on the net for comments and correction. And from this sprang the kernel we now know as Linux. Linux as a kernel could run on the same contemporary hardware used by DOS and Windows. Further, being based on the same standards as the older UNIX, Linux could run programs written for the older UNIX kernels.

For GNU, this meant that their long wait for a free kernel was finally over. For Linux this meant that it finally had programs that could actually utilize the kernel that was being built. GNU/Linux became the complete ‘free’ operating system that Richard Stallman and a number of others had been dreaming of.

On the shoulders of Giants

It is people who ultimately define the success of any idea. So it is with the idea of the “open”. Among the multitude of programmers, users, fans and followers of the free and open source movements, there are some who have helped define the soul of the FOSS movement. There are some, like Richard Stallman, who are fanatically devoted to the idea of free software, while others, like Linus Torvalds, have been the silent, media-shy icons of the movement. There are, however, others who have helped give a more balanced view of the philosophy of FOSS.

Eric S. Raymond is a Linux evangelist and the author of three extremely powerful essays [19] on the philosophy of Free and Open Source. Called “The Cathedral and the Bazaar”, “Homesteading the Noosphere” and “The Magic Cauldron”, these essays present a very logical account of the FOSS philosophy. They discuss the social, economic and personal drives, reasons and justifications for the success of the open approach. Bruce Perens is another Linux advocate, whose article “The Open Source Definition” [20] is a fundamental account of the principles of the FOSS camp. These essays explore the novel effect of having loosely bound, part-time volunteers drive projects of unimaginable magnitude and give it all away for free.

One notable side effect of having such a diverse and widespread fan base is that villains are instantly vilified and secrets don’t remain secret for long. Take the example of the famous “Halloween Documents” [21].

During Halloween 1998, Microsoft commissioned an internal strategy memorandum on its responses to the Linux/Open Source phenomenon. Unfortunately for Microsoft, it leaked, and within days it was all over the Internet, being taken apart by numerous FOSS advocates. Microsoft had always been acknowledged as the party most directly affected by FOSS, but until then it had been more of a cold war. The Halloween documents changed all that. Open Source advocates openly condemned Microsoft. Microsoft slowly started realizing that FOSS was rapidly changing from a fringe movement into something that directly threatened it. It responded by sowing what is now known as FUD (Fear, Uncertainty and Doubt) in the minds of its customers. For the first time Microsoft directly acknowledged [22] that Linux had the capacity to unseat it, and started attacking the fundamental value propositions [23] of Linux and FOSS.

It is also around this time that the mainstream press started increasing its coverage of FOSS. The coverage was initially about Linux, the free replacement for Unix. Then it was about the sustainability of Open Source as a business model. And lately it has been about David vs Goliath – FOSS vs Microsoft.

The press is an expression of popular opinion. Conversely, the press forms popular opinion. And popular opinion, therefore, weighs heavily towards portraying FOSS as the David in this David vs Goliath story.

This is where we come in

As long as we restrict our view of the FOSS movement to the software it generates, this popular opinion seems perfectly reasonable. However, if we realize that the philosophy of FOSS extends beyond the mere products of the movement, we begin to see the nature of our relationship with it. Without too great a risk of generalization, the true spirit and philosophy of FOSS is nothing short of the Internet itself.

The philosophy of FOSS is about freedom, freedom defined as “libre” – a lack of constraints. It is a spirit of sharing and collaboration. It is a spirit that places quality above other considerations. It is a spirit that drives, and is driven by, a free flow of ideas. It is a philosophy that considers information supreme.

Every time we search the Internet for tips, we are appealing to the philosophy of Open Source. Every code snippet, article, comparative analysis and forum on the Internet is driven by this philosophy. Every self-taught computer user is a product of the philosophy of Open Source.

To consider this movement and the change it entails as anything less than mammoth would be childish. It involves a fundamental shift in our perception of the business of Information Technology itself. However, the change is upon us. It is now up to us to either respond proactively or to passively let events take the lead in changing us.

References

[1] http://www.apache.org/
[2] http://www.mysql.com/
[3] http://www.gnome.org/
[4] http://www.gnu.org/philosophy/philosophy.html
[5] http://www.gnu.org/philosophy/categories.html#FreeSoftware
[6] http://www.wikipedia.org/
[7] http://slashdot.org/
[8] http://groups.google.com/
[9] http://ocw.mit.edu/index.html
[10] http://www.cryptonomicon.com/beginning.html
[11] http://en.wikipedia.org/wiki/PDP-10
[12] http://www.research.att.com/~bs/C++.html
[13] http://www.bell-labs.com/history/unix/
[14] http://www.stallman.org/
[15] http://www.gnu.org/
[16] http://www.gnu.org/software/hurd/hurd.html
[17] http://www.geocities.com/n_ravikiran/write008.htm
[18] http://www.geocities.com/n_ravikiran/write003a.htm
[19] http://www.catb.org/~esr/writings/cathedral-bazaar/
[20] http://perens.com/Articles/OSD.html
[21] http://www.opensource.org/halloween/
[22] http://news.com.com/2100-1001_3-253320.html
[23] http://www.microsoft.com/mscorp/facts/default.asp

Document Changes
November 22, 2004: First published version.

August 24, 2001

Analysis and Design of Reinforced Concrete Chimneys

Following is the entire content of my B.Tech project, submitted in Civil Engineering.

The paper deals with the various loads associated with the analysis of Reinforced Concrete Chimneys, the methodologies involved in estimating those loads, and the methods of design that account for their effects.

Abstract:

The present thesis deals with the analysis and design aspects of Reinforced Concrete Chimneys. It reviews the various load effects that are incident upon tall free-standing structures such as a chimney, and the methods for estimating them using various codal provisions. A chimney is subject to various loads: wind loads, seismic loads, temperature loads and so on. The codal provisions for evaluating these have been studied and applied. A comparison has also been made between the values of these load effects obtained using the procedures outlined by the various codes.

The design strength of the chimney cross-sections has also been estimated. Design charts have been prepared that ease the design of chimney cross-sections, and their usage is exemplified.

A typical 250 m chimney has been analyzed and designed using the processes outlined above. Drawings have been prepared for the chimney, and its foundation has also been designed.

List of PDF files

Document Changes
August 24, 2001: First published.

August 01, 2001

Do you need Linux? addendum

The following are some nostalgic texts, taken from the Internet, that show the history of Linux. Please do check the credits at the end.

Note: The following text was written by Linus on July 31, 1992. It is a collection of various artifacts from the period in which Linux first began to take shape.


This is just a sentimental journey into some of the first posts concerning linux, so you can happily press Control-D now if you actually thought you'd get anything technical.

From: torvalds@klaava.Helsinki.FI (Linus Benedict Torvalds)
Newsgroups: comp.os.minix
Subject: Gcc-1.40 and a posix-question
Message-ID: <1991Jul3.100050.9886@klaava.Helsinki.FI>
Date: 3 Jul 91 10:00:50 GMT
Hello netlanders,
Due to a project I'm working on (in minix), I'm interested in the posix standard definition. Could somebody please point me to a (preferably) machine-readable format of the latest posix rules? Ftp-sites would be nice.


The project was obviously linux, so by July 3rd I had started to think about actual user-level things: some of the device drivers were ready, and the harddisk actually worked. Not too much else. Just a success-report on porting gcc-1.40 to minix using the 1.37 version made by Alan W Black & co.

Linus Torvalds torvalds@kruuna.helsinki.fi
PS. Could someone please try to finger me from overseas, as I've installed a "changing .plan" (made by your's truly), and I'm not certain it works from outside? It should report a new .plan every time.


Then, almost two months later, I actually had something working: I made sources for version 0.01 available on nic sometimes around this time. 0.01 sources weren't actually runnable: they were just a token gesture to arl who had probably started to despair about ever getting anything. This next post must have been from just a couple of weeks before that release.

From: torvalds@klaava.Helsinki.FI (Linus Benedict Torvalds)
Newsgroups: comp.os.minix
Subject: What would you like to see most in minix?
Summary: small poll for my new operating system
Message-ID: <1991Aug25.205708.9541@klaava.Helsinki.FI>
Date: 25 Aug 91 20:57:08 GMT
Organization: University of Helsinki
Hello everybody out there using minix - I'm doing a (free) operating system (just a hobby, won't be big and professional like gnu) for 386(486) AT clones. This has been brewing since april, and is starting to get ready. I'd like any feedback on things people like/dislike in minix, as my OS resembles it somewhat (same physical layout of the file-system (due to practical reasons) among other things). I've currently ported bash(1.08) and gcc(1.40), and things seem to work. This implies that I'll get something practical within a few months, and I'd like to know what features most people would want. Any suggestions are welcome, but I won't promise I'll implement them :-)

Linus (torvalds@kruuna.helsinki.fi)
PS. Yes - it's free of any minix code, and it has a multi-threaded fs. It is NOT protable (uses 386 task switching etc), and it probably never will support anything other than AT-harddisks, as that's all I have :-(.


Judging from the post, 0.01 wasn't actually out yet, but it's close. I'd guess the first version went out in the middle of September -91. I got some responses to this (most by mail, which I haven't saved), and I even got a few mails asking to be beta-testers for linux. After that just a few general answers to quesions on the net:

From: torvalds@klaava.Helsinki.FI (Linus Benedict Torvalds)
Newsgroups: comp.os.minix
Subject: Re: What would you like to see most in minix?
Summary: yes - it's nonportable
Message-ID: <1991Aug26.110602.19446@klaava.Helsinki.FI>
Date: 26 Aug 91 11:06:02 GMT
Organization: University of Helsinki
In article <1991Aug25.234450.22562@nntp.hut.fi> jkp@cs.HUT.FI(Jyrki Kuoppala) writes:
>> [re: my post about my new OS]
> >Tell us more! Does it need a MMU?
Yes, it needs a MMU (sorry everybody), and it specifically
needs a 386/486 MMU (see later).
> >>PS. Yes - it's free of any minix code, and it has a multi-threaded fs.
>>>It is NOT protable (uses 386 task switching etc)
> >How much of it is in C? What difficulties will there be in porting?
>Nobody will believe you about non-portability ;-), and I for one would
>like to port it to my Amiga (Mach needs a MMU and Minix is not free).
Simply, I'd say that porting is impossible. It's mostly in C, but most people wouldn't call what I write C. It uses every conceivable feature of the 386 I could find, as it was also a project to teach me about the 386. As already mentioned, it uses a MMU, for both paging (not to disk yet) and segmentation. It's the segmentation that makes it REALLY 386 dependent (every task has a 64Mb segment for code & data - max 64 tasks in 4Gb. Anybody who needs more than 64Mb/task - tough cookies). It also uses every feature of gcc I could find, specifically the __asm__ directive, so that I wouldn't need so much assembly language objects. Some of my "C"-files (specifically mm.c) are almost as much assembler as C. It would be "interesting" even to port it to another compiler (though why anybody would want to use anything other than gcc is a mystery). Unlike minix, I also happen to LIKE interrupts, so interrupts are handled without trying to hide the reason behind them (I especially like my hard-disk-driver. Anybody else make interrupts drive a state- machine?). All in all it's a porters nightmare.
>As for the features; well, pseudo ttys, BSD sockets, user-mode
>filesystems (so I can say cat /dev/tcp/kruuna.helsinki.fi/finger),
>window size in the tty structure, system calls capable of supporting
>POSIX.1. Oh, and bsd-style long file names.
Most of these seem possible (the tty structure already has stubs for window size), except maybe for the user-mode filesystems. As to POSIX, I'd be delighted to have it, but posix wants money for their papers, so that's not currently an option. In any case these are things that won't be supported for some time yet (first I'll make it a simple minix- lookalike, keyword SIMPLE).

Linus (torvalds@kruuna.helsinki.fi)
PS. To make things really clear - yes I can run gcc on it,and bash, and most of the gnu [bin/file]utilities, but it's not very debugged, and the library is really minimal. It doesn't even support floppy-disks yet. It won't be ready for distribution for a couple of months. Even then it probably won't be able to do much more than minix, and much less in some respects. It will be free though (probably under gnu-license or similar).


Well, obviously something worked on my machine: I doubt I had yet gotten gcc to compile itself under linux (or I would have been too proud of it not to mention it). Still before any release-date.
Then, October 5th, I seem to have released 0.02. As I already mentioned, 0.01 didn't actually come with any binaries: it was just source code for people interested in what linux looked like. Note the lack of announcement for 0.01: I wasn't too proud of it, so I think I only sent a note to everybody who had shown interest.

From: torvalds@klaava.Helsinki.FI (Linus Benedict Torvalds)
Newsgroups: comp.os.minix
Subject: Free minix-like kernel sources for 386-AT
Message-ID: <1991Oct5.054106.4647@klaava.Helsinki.FI>
Date: 5 Oct 91 05:41:06 GMT
Organization: University of Helsinki
Do you pine for the nice days of minix-1.1, when men were men and wrote their own device drivers? Are you without a nice project and just dying to cut your teeth on a OS you can try to modify for your needs? Are you finding it frustrating when everything works on minix? No more all- nighters to get a nifty program working? Then this post might be just for you :-) As I mentioned a month(?) ago, I'm working on a free version of a minix-lookalike for AT-386 computers. It has finally reached the stage where it's even usable (though may not be depending on what you want), and I am willing to put out the sources for wider distribution. It is just version 0.02 (+1 (very small) patch already), but I've successfully run bash/gcc/gnu-make/gnu-sed/compress etc under it.
Sources for this pet project of mine can be found at nic.funet.fi (128.214.6.100) in the directory /pub/OS/Linux. The directory also contains some README-file and a couple of binaries to work under linux (bash, update and gcc, what more can you ask for :-). Full kernel source is provided, as no minix code has been used. Library sources are only partially free, so that cannot be distributed currently. The system is able to compile "as-is" and has been known to work. Heh. Sources to the binaries (bash and gcc) can be found at the same place in /pub/gnu. ALERT! WARNING! NOTE! These sources still need minix-386 to be compiled (and gcc-1.40, possibly 1.37.1, haven't tested), and you need minix to set it up if you want to run it, so it is not yet a standalone system for those of you without minix. I'm working on it. You also need to be something of a hacker to set it up (?), so for those hoping for an alternative to minix-386, please ignore me. It is currently meant for hackers interested in operating systems and 386's with access to minix. The system needs an AT-compatible harddisk (IDE is fine) and EGA/VGA. If you are still interested, please ftp the README/RELNOTES, and/or mail me for additional info. I can (well, almost) hear you asking yourselves "why?". Hurd will be out in a year (or two, or next month, who knows), and I've already got minix. This is a program for hackers by a hacker. I've enjouyed doing it, and somebody might enjoy looking at it and even modifying it for their own needs. It is still small enough to understand, use and modify, and I'm looking forward to any comments you might have. I'm also interested in hearing from anybody who has written any of the utilities/library functions for minix. If your efforts are freely distributable (under copyright or even public domain), I'd like to hear from you, so I can add them to the system. I'm using Earl Chews estdio right now (thanks for a nice and working system Earl), and similar works will be very wellcome. Your (C)'s will of course be left intact. Drop me a line if you are willing to let me use your code.
Linus
PS. to PHIL NELSON! I'm unable to get through to you, and keep getting "forward error - strawberry unknown domain" or something.


Well, it doesn't sound like much of a system, does it? It did work, and some people even tried it out. There were several bad bugs (and there was no floppy-driver, no VM, no nothing), and 0.02 wasn't really very useable.
0.03 got released shortly thereafter (max 2-3 weeks was the time between releases even back then), and 0.03 was pretty useable. The next version was numbered 0.10, as things actually started to work pretty well. The next post gives some idea of what had happened in two months more...

From: torvalds@klaava.Helsinki.FI (Linus Benedict Torvalds)
Newsgroups: comp.os.minix
Subject: Re: Status of LINUX?
Summary: Still in beta
Message-ID: <1991Dec19.233545.8114@klaava.Helsinki.FI>
Date: 19 Dec 91 23:35:45 GMT
Organization: University of Helsinki
In article <469@htsa.htsa.aha.nl> miquels@maestro.htsa.aha.nl
(Miquel van Smoorenburg) writes:
>Hello *,
> I know some people are working on a FREE O/S for the 386/486,
>under the name Linux. I checked nic.funet.fi now and then, to see what was >happening. However, for the time being I am without FTP access so I don't
>know what is going on at the moment. Could someone please inform me about it?
>It's maybe best to follow up to this article, as I think that there are
>a lot of potential interested people reading this group. Note, that I don't
>really *have* a >= 386, but I'm sure in time I will.
Linux is still in beta (although available for brave souls by ftp), and has reached the version 0.11. It's still not as comprehensive as 386-minix, but better in some respects. The "Linux info-sheet" should be posted here some day by the person that keeps that up to date. In the meantime, I'll give some small pointers. First the bad news: - Still no SCSI: people are working on that, but no date yet. Thus you need a AT-interface disk (I have one report that it works on an EISA 486 with a SCSI disk that emulates the AT-interface, but that's more of a fluke than anything else: ISA+AT-disk is currently the hardware setup)


As you can see, 0.11 had already a small following. It wasn't much, but it did work.

- still no init/login: you get into bash as root upon bootup.


That was still standard in the next release.

- although I have a somewhat working VM (paging to disk), it's not ready yet. Thus linux needs at least 4M to be able to run the GNU binaries (especially gcc). It boots up in 2M, but you cannot compile.


I actually released a 0.11+VM version just before Christmas -91: I didn't need it myself, but people were trying to compile the kernel in 2MB and failing, so I had to implement it. The 0.11+VM version was available only to a small number of people that wanted to test it out: I'm still surprised it worked as well as it did.

- minix still has a lot more users: better support.
- it hasn't got years of testing by thousands of people, so there are probably quite a few bugs yet. Then for the good things..
- It's free (copyright by me, but freely distributable under a very lenient copyright)


The early copyright was in fact much more restrictive than the GNU copyleft: I didn't allow any money at all to change hands due to linux. That changed with 0.12.

- it's fun to hack on.
- /real/ multithreading filesystem.
- uses the 386-features. Thus locked into the 386/486 family, but it makes things clearer when you don't have to cater to other chips.
- a lot more... read my .plan. /I/ think it's better than minix, but I'm a bit prejudiced. It will never be the kind of professional OS that Hurd will be (in the next century or so :), but it's a nice learning tool (even more so than minix, IMHO), and it was/is fun working on it.
Linus
(torvalds@kruuna.helsinki.fi)
---- my .plan --------------------------
Free UNIX for the 386 - coming 4QR 91 or 1QR 92. The current version of linux is 0.11 - it has most things a unix kernel needs, and will probably be released as 1.0 as soon as it gets a little more testing, and we can get a init/login going. Currently you get dumped into a shell as root upon bootup. Linux can be gotten by anonymous ftp from 'nic.funet.fi' (128.214.6.100) in the directory '/pub/OS/Linux'. The same directory also contains some binary files to run under Linux. Currently gcc, bash, update, uemacs, tar, make and fileutils. Several people have gotten a running system, but it's still a hackers kernel. Linux still requires a AT-compatible disk to be useful: people are working on a SCSI-driver, but I don't know when it will be ready. There are now a couple of other sites containing linux, as people have had difficulties with connecting to nic. The sites are:
Tupac-Amaru.Informatik.RWTH-Aachen.DE (137.226.112.31): directory /pub/msdos/replace
tsx-11.mit.edu (18.172.1.2): directory /pub/linux
There is also a mailing list set up 'Linux-activists@niksula.hut.fi'. To join, mail a request to 'Linux-activists-request@niksula.hut.fi'.
It's no use mailing me: I have no actual contact with the mailing-list (other than being on it, naturally).
Mail me for more info:
Linus
(torvalds@kruuna.Helsinki.FI)
0.11 has these new things:
- demand loading
- code/data sharing between unrelated processes
- much better floppy drivers (they actually work mostly)
- bug-corrections
- support for Hercules/MDA/CGA/EGA/VGA
- the console also beeps (WoW! Wonder-kernel :-)
- mkfs/fsck/fdisk
- US/German/French/Finnish keyboards
- settable line-speeds for com1/2


As you can see: 0.11 was actually stand-alone: I wrote the first mkfs/fsck/fdisk programs for it, so that you didn't need minix any more to set it up. Also, serial lines had been hard-coded to 2400bps, as that was all I had.

Still lacking:
- init/login
- rename system call
- named pipes
- symbolic links

Well, they are all there now: init/login didn't quite make it to 0.12, and rename() was implemented as a patch somewhere between 0.12 and 0.95. Symlinks were in 0.95, but named pipes didn't make it until 0.96.

0.12 will probably be out in January (15th or so), and will have:
- POSIX job control (by tytso)
- VM (paging to disk)
- Minor corrections


Actually, 0.12 was out January 5th, and contained major corrections. It was in fact a very stable kernel: it worked on a lot of new hardware, and there was no need for patches for a long time. 0.12 was also the kernel that "made it": that's when linux started to spread a lot faster. Earlier kernel releases were very much only for hackers: 0.12 actually worked quite well.


Note: The following document is a reply by Linus Torvalds, the creator of Linux, in which he talks about his experiences in the early stages of Linux development.

To: Linux-Activists@BLOOM-PICAYUNE.MIT.EDU
From: torvalds@klaava.Helsinki.FI (Linus Benedict Torvalds)
Subject: Re: Writing an OS - questions !!
Date: 5 May 92 07:58:17 GMT
In article <10685@inews.intel.com> nani@td2cad.intel.com (V.Narayanan) writes:
Hi folks,
For quite some time this "novice" has been wondering as to how one goes about the task of writing an OS from "scratch". So here are some questions, and I would appreciate if you could take time to answer 'em.
Well, I see someone else already answered, but I thought I'd take on the linux-specific parts. Just my personal experiences, and I don't know how normal those are.

1) How would you typically debug the kernel during the development phase?
Depends on both the machine and how far you have gotten on the kernel: on more simple systems it's generally easier to set up. Here's what I had to do on a 386 in protected mode. The worst part is starting off: after you have even a minimal system you can use printf etc, but moving to protected mode on a 386 isn't fun, especially if you at first don't know the architecture very well. It's distressingly easy to reboot the system at this stage: if the 386 notices something is wrong, it shuts down and reboots - you don't even get a chance to see what's wrong. Printf() isn't very useful - a reboot also clears the screen, and anyway, you have to have access to video-mem, which might fail if your segments are incorrect etc. Don't even think about debuggers: no debugger I know of can follow a 386 into protected mode. A 386 emulator might do the job, or some heavy hardware, but that isn't usually feasible. What I used was a simple killing-loop: I put in statements like die:
jmp die
at strategic places. If it locked up, you were ok, if it rebooted, you knew at least it happened before the die-loop. Alternatively, you might use the sound io ports for some sound-clues, but as I had no experience with PC hardware, I didn't even use that. I'm not saying this is the only way: I didn't start off to write a kernel, I just wanted to explore the 386 task-switching primitives etc, and that's how I started off (in about April-91). After you have a minimal system up and can use the screen for output, it gets a bit easier, but that's when you have to enable interrupts. Bang, instant reboot, and back to the old way. All in all, it took about 2 months for me to get all the 386 things pretty well sorted out so that I no longer had to count on avoiding rebooting at once, and having the basic things set up (paging, timer-interrupt and a simple task-switcher to test out the segments etc).

2) Can you test the kernel functionality by running it as a process on a different OS?
Wouldn't the OS(the development environment) generate exceptions in cases when the kernel (of the new OS) tries to modify 'priviledged' registers? Yes, it's generally possible for some things, but eg device drivers usually have to be tested out on the bare machine. I used minix to develop linux, so I had no access to IO registers, interrupts etc. Under DOS it would have been possible to get access to all these, but then you don't have 32-bit mode. Intel isn't that great - it would probably have been much easier on a 68040 or similar. So after getting a simple task-switcher (it switched between two processes that printed AAAA... and BBBB... respectively by using the timer-interrupt - Gods I was proud over that), I still had to continue debugging basically by using printf. The first thing written was the keyboard driver: that's the reason it's still written completely in assembler (I didn't dare move to C yet - I was still debugging at about instruction-level). After that I wrote the serial drivers, and voila, I had a simple terminal program running (well, not that simple actually). It was still the same two processes (AAA..), but now they read and wrote to the console/serial lines instead. I had to reboot to get out of it all, but it was a simple kernel. After that is was plain sailing: hairy coding still, but I had some devices, and debugging was easier. I started using C at this stage, and it certainly speeds up developement. This is also when I start to get serious about my megalomaniac ideas to make "a better minix that minix". I was hoping I'd be able to recompile gcc under linux some day... The harddisk driver was more of the same: this time the problems with bad documentation started to crop up. The PC may be the most used architecture in the world right now, but that doesn't mean the docs are any better: in fact I haven't seen /any/ book even mentioning the weird 386-387 coupling in an AT etc (Thanks Bruce). After that, a small filesystem, and voila, you have a minimal unix. Two months for basic setups, but then only slightly longer until I had a disk-driver (seriously buggy, but it happened to work on my machine) and a small filesystem. That was about when I made 0.01 available (late august-91? Something like that): it wasn't pretty, it had no floppy driver, and it couldn't do much anything. I don't think anybody ever compiled that version. But by then I was hooked, and didn't want to stop until I could chuck out minix.

3) Would new linkers and loaders have to be written before you get a basic kernel running?
All versions up to about 0.11 were crosscompiled under minix386 - as were the user programs. I got bash and gcc eventually working under 0.02, and while a race-condition in the buffer-cache code prevented me from recompiling gcc with itself, I was able to tackle smaller compiles. 0.03 (October?) was able to recompile gcc under itself, and I think that's the first version that anybody else actually used. Still no floppies, but most of the basic things worked. Afetr 0.03 I decided that the next version was actually useable (it was, kind of, but boy is X under 0.96 more impressive), and I called the next version 0.10 (November?). It still had a rather serious bug in the buffer-cache handling code, but after patching that, it was pretty ok. 0.11 (December) had the first floppy driver, and was the point where I started doing linux developement under itself. Quite as well, as I trashed my minix386 partition by mistake when trying to autodial /dev/hd2. By that time others were actually using linux, and running out of memory. Especially sad was the fact that gcc wouldn't work on a 2MB machine, and although c386 was ported, it didn't do everything gcc did, and couldn't recompile the kernel. So I had to implement disk-paging: 0.12 came out in January (?) and had paging by me as well as job control by tytso (and other patches: pmacdona had started on VC's etc). It was the first release that started to have "non-essential" features, and being partly written by others. It was also the first release that actually did many things better than minix, and by now people started to really get interested. Then it was 0.95 in March, bugfixes in April, and soon 0.96. It's certainly been fun (and I trust will continue to be so) - reactions have been mostly very positive, and you do learn a lot doing this type of thing (on the other hand, your studies suffer in other respects :)
Linus

credits
www.li.org/linuxhistory.php

Document Changes:
August 1, 2001: First compiled version.