QODS ec

Saturday, June 05, 2004

VIRUS: Korgo Worm Spreads to More Computer Systems

Enterprise Security Today (Online Security): NewsFactor Network - Korgo Worm Spreads to More Computer Systems

VIRUS: NetSky still dominates virus hit parade

Globetechnology

NetSky still dominates virus hit parade

By JACK KAPICA
Globe and Mail Update

POSTED AT 9:53 AM EDT Friday, Jun 4, 2004

Its accused maker may be awaiting trial in Germany, but the NetSky virus is still dominating the virus hit parade in May.

Variants of NetSky occupy seven — or perhaps eight — of the "Dirty Dozen" top viruses compiled monthly by anti-virus experts at the Ohio-based computer security company Central Command.

The No. 1 virus on the list is Sasser, which accounted for almost half of all infections reported to Central Command. But the 18-year-old German accused of creating the NetSky virus has reportedly confessed to making Sasser too.

Already, NetSky's dominance is being threatened by a new worm, called Korgo. This week, security experts at Symantec upgraded Korgo (officially named W32.Korgo.F) from a Level 2 to a Level 3 threat after an increased number of submissions.

The highest threat rating given by Symantec is Level 4.

The Korgo worm attempts to propagate by exploiting a Microsoft Windows vulnerability publicly announced on April 13 called the LSASS Buffer Overrun Vulnerability. A blended threat — meaning it does several different tasks — Korgo affects computers running Windows 2000 and Windows XP operating systems, listening in on TCP ports 113 and 3067, potentially opening back doors on those ports.
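
Those two ports give administrators a quick, if crude, way to triage a suspect machine. The following is a minimal sketch of such a check (my illustration, not from the article), assuming Python is available on the box doing the probing. Note that TCP 113 (ident) can be legitimately open on some systems, so an open port here is a hint to investigate, not proof of infection.

    import socket

    # TCP ports the article says Korgo listens on.
    KORGO_PORTS = (113, 3067)

    def is_listening(host, port, timeout=2.0):
        """Return True if something accepts a TCP connection on host:port."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        host = "127.0.0.1"  # hypothetical target: check the local machine
        for port in KORGO_PORTS:
            state = "OPEN - investigate" if is_listening(host, port) else "closed"
            print(f"{host}:{port} -> {state}")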

In descending order of prevalence, Central Command's Dirty Dozen are: Sasser (49.1 per cent); NetSky.P (16.1 per cent); NetSky.Z (5.4 per cent); NetSky.D.Dam (5.1 per cent); NetSky.Q (3.7 per cent); NetSky.C (2.8 per cent); NetSky.B (2 per cent); Bagle.AA (1.5 per cent); NetSky.A (1.1 per cent); Bagle.Z (0.7 per cent); MyDoom.F (0.4 per cent) and Sober.G (0.3 per cent).

Others accounted for 11.8 per cent of reported infections.
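
As a quick sanity check (mine, not Central Command's), the twelve listed shares plus the "others" bucket do account for all reported infections:

    # Dirty Dozen shares, in the order listed above, plus "others".
    shares = [49.1, 16.1, 5.4, 5.1, 3.7, 2.8, 2.0, 1.5, 1.1, 0.7, 0.4, 0.3]
    print(round(sum(shares) + 11.8, 1))  # -> 100.0 per cent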

Major antivirus companies have published updates to their dictionaries of known viruses to cover all of these threats, and urge users to download the new definitions as soon as possible.

"W32.Korgo.F includes backdoor functionality that could leave systems open to unauthorized access," said Symantec Security Response senior director Alfred Huger.

"This back door functionality could result in a loss of confidential data and may also compromise security settings. This threat is another strong example of why it is critical for computer users to be diligent in applying security patches, keeping virus definitions updated, and following best practices," he said.

M$: Microsoft's Charm Offensive: From 'Big Brother' to 'Big Buddy'?

Microsoft's Charm Offensive: From 'Big Brother' to 'Big Buddy'?

M$: PC Magazine's First Looks Preview: Windows Media Player Version 10

PC Magazine's First Looks Preview: Windows Media Player Version 10

OT: Opera 7.5 Is a Worthy Browser Upgrade

Opera 7.5 Is a Worthy Browser Upgrade

LINUX: SCO and BayStar Bury the Hatchet

SCO and BayStar Bury the Hatchet

M$: Microsoft Patents PDA Clicks

Microsoft Patents PDA Clicks

M$: United States Patent: 6,727,830

United States Patent: 6,727,830

LINK: Minimalism versus pragmatism versus idealism

Security Forums Dot Com :: View topic - Minimalism versus pragmatism versus idealism

LINK: MonoDevelop Homepage

MonoDevelop Homepage

OT: VMWare Workstation 4.5 Review

HDP Tools - VMWare Workstation 4.5 Review

by Hernán Di Pietro (5 Jun, 2004)

Being a true fan of virtualization software, I'm ready now to test out the new VMWare Workstation 4.5 after my comparison between Virtual PC 2004 and VMWare 4.0. Many users of VMWare and Virtual PC have surely asked themselves whether this new VMWare version is an evolution or a revolution. Since a transition from v4.0 to v4.5 may not appear as shocking to the consumer as a "shining" 5.0 product release, we'll see what the new VMWare release has to offer all of us performance- and stability-hungry developers and power users.

According to the official information, VMWare 4.5 offers the following improvements:

* Up to 4GB of memory for all running virtual machines and up to 3,600MB for every single VM
* Experimental support for Longhorn (you can install and run beta versions of the next Microsoft OS)
* Support for PXE
* Improved support for guests based on Linux 2.6 kernels
* Easier USB installation (you no longer need to be an administrator to connect USB devices to the VMs)
* Automatic check for product updates
* Added support for SUSE 9.0 and NetWare 5.1, 6.0 and 6.5
* Tip-of-the-day messages


Read more ...

VIRUS: Mutant son of MyDoom plans three-pronged attack

Mutant son of MyDoom plans three-pronged attack | The Register

By John Leyden (john.leyden@theregister.co.uk)
Published Thursday 3rd June 2004 14:27 GMT

Virus writers have used code from the infamous Mydoom worm to create a potentially dangerous new Internet worm which uses multiple methods to spread.

Plexus-A (http://www.viruslist.com/eng/viruslist.html?id=1618235) spreads using three different methods: infected email attachments, file-sharing networks and Windows vulnerabilities (the LSASS vulnerability used by Sasser and the RPC DCOM flaw used by Blaster). The as yet unknown virus authors used MyDoom source code as the basis for creating Plexus, according to an analysis (http://www.viruslist.com/eng/viruslist.html?id=1618235) of the worm by Russian AV firm Kaspersky Labs.

David Emm, senior technology consultant at Kaspersky Labs, said that the multiple spreading methods are helping Plexus to infect more machines. No worm since Nimda has used as many methods to spread, according to Emm. Kaspersky rates Plexus as a moderate risk. It is spreading - but nothing like as fast as Sasser or Blaster - and the main concern about the worm stems from the fact that it creates a backdoor for hackers on infected machines. These compromised machines could be used for spam runs or as a platform for DDoS attacks. However, the motives of the virus authors behind the worm remain unclear.

Plexus-A chooses from five different email message headers in an attempt to bamboozle users. Each message has a different header, body and attachment name. The only characteristic which does not change is the file size: 16208 bytes when compressed with FSG and 57856 when uncompressed. Mac and Linux users are - as usual - immune but Plexus is a menace for Windows users.
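
Because the attachment size is the one constant across all of Plexus-A's disguises, a crude gateway heuristic can flag suspect files even before signatures arrive. The sketch below is mine, not Kaspersky's: it walks a directory of saved attachments (the quarantine path is a hypothetical example) and flags anything whose size matches the two figures quoted above. A size match alone is obviously weak evidence, so treat hits as candidates for a proper scan, not as confirmed infections.

    import os
    import sys

    # Sizes quoted in the article: FSG-packed and unpacked Plexus-A bodies.
    SUSPECT_SIZES = {16208, 57856}

    def flag_suspect_attachments(directory):
        """Yield paths under directory whose size matches a known Plexus-A size."""
        for root, _dirs, files in os.walk(directory):
            for name in files:
                path = os.path.join(root, name)
                if os.path.getsize(path) in SUSPECT_SIZES:
                    yield path

    if __name__ == "__main__":
        quarantine = sys.argv[1] if len(sys.argv) > 1 else "./attachments"
        for path in flag_suspect_attachments(quarantine):
            print(f"possible Plexus-A body (size match only): {path}")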

Upon execution, Plexus-A copies itself to the Windows system directory under the name upu.exe and registers that file in the system registry, so that it runs every time the machine is rebooted after infection. Plexus sends copies of itself to email addresses harvested from the hard drives of infected machines.

The worm is among the first to specifically target users of Kaspersky Labs' AV software. Plexus' payload includes attempts to prevent downloads of Kaspersky Anti-Virus database updates. Plexus also scans the Net for systems vulnerable to the flaws it exploits. The worm opens a backdoor onto infected machines on port 1250, making it possible for files to be remotely uploaded to and from the victim machine. The open port leaves the victim machine vulnerable to further attacks, Kaspersky Labs warns.

Users are advised to patch Windows boxes, update anti-virus signature files and use firewalls to shelter against Plexus and similar irritants that are doubtless just around the corner. Is there no end to this viral madness? ®

OT: AMD Rolls Out New Desktop Athlons

Linux News: Hardware: AMD Rolls Out New Desktop Athlons

VIRUS: Korgo Worm Targets LSASS Flaw

Korgo Worm Targets LSASS Flaw

SEC: Recognition keys access

Recognition keys access TRN 060204


By Kimberly Patch, Technology Research News

Passwords are a problem. To be secure, a password must be non-obvious and changed often. Given the number of passwords the average person uses, and given the difficulty of keeping non-obvious and constantly changing passwords straight, it's not surprising that many people don't like them.

Researchers from Hebrew University in Israel are addressing the problem with a scheme that allows people to use a type of password that they don't have to consciously remember.

The scheme taps the way people learn through the instinctive imprinting process. When a person learns information via imprinting, he can recognize the information later but can't recall it in a way he can describe to someone else.

The scheme is fairly secure because it is truly random and cannot be stolen or shared voluntarily, said Scott Kirkpatrick, a professor of engineering and computer science at Hebrew University. "We don't know what we know."

The idea came from thinking about human memory as an inherent one-way function, said Kirkpatrick. A one-way function is a mathematical formula that is easy to solve in one direction but difficult to solve in the other. Factoring, for instance, is a common mathematical one-way function: it is easy to multiply the factors of a number together to get the number, but difficult to derive the factors from the number, especially with very large numbers.
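
A toy demonstration of that asymmetry (my illustration, using deliberately small primes; real cryptography uses numbers hundreds of digits long): multiplying takes a single step, while recovering the factors by naive trial division takes tens of thousands.

    # Easy direction: multiply two primes. Hard direction: get them back.
    def factor(n):
        """Recover a factor of an odd composite n by trial division."""
        d = 3
        while d * d <= n:
            if n % d == 0:
                return d, n // d
            d += 2
        return n, 1  # no factor found: n is prime

    p, q = 104729, 1299709          # two smallish primes
    n = p * q                       # easy: a single multiplication
    print(f"{p} * {q} = {n}")
    print("recovered:", factor(n))  # slow: ~52,000 trial divisions,
                                    # and the work grows with sqrt(n)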

The way the human brain deals with complexity can be thought of as a one-way function, according to Kirkpatrick. It stores images with little conscious awareness of what was learned; the stored images are easily recognized but difficult to describe, especially in detail.

The researchers' prototype system involves training a user on a series of images. To be authenticated a user must recognize a few of the images. Pictures, pseudo words and artificial grammar can all be used as items to be recognized. These three types of imprinting data have been thoroughly explored in perception and cognitive psychology literature, Kirkpatrick said.

The researchers tested users on prototype systems that used each of the three types of input.

In tests of the picture version, users went through a two-step process to get a set of user certificates, or unconscious passwords. Users were first shown a set of 100 to 200 pictures randomly selected from a database of 20,000 pictures. Pictures were organized in groups of 2 to 9 pictures with a common theme, and each user was certified on one picture from a given theme group. The user then practiced choosing certificate images from entire theme groups.

Later, in lieu of passwords, users identified most of a short series of certificate images. To guard against eavesdropping, each certificate picture is only used once, and the user retrains when they run low.
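
A rough sketch of that challenge flow, reconstructed from the description (this is my reading, not the researchers' code, and the panel size of 18 is an assumption): each authentication round consumes one certificate image, hides it among decoys, and retires it, so an eavesdropper never sees the same secret twice.

    import random

    def make_challenge(unused_certificates, decoy_pool, panel_size=18):
        """Build one authentication round; consumes one certificate image."""
        certificate = unused_certificates.pop()   # shown once, then retired
        panel = random.sample(decoy_pool, panel_size - 1) + [certificate]
        random.shuffle(panel)
        return certificate, panel

    # Hypothetical data: ids of images the user was trained on, plus decoys.
    unused = ["img_017", "img_203", "img_441", "img_590"]
    decoys = [f"decoy_{i:03d}" for i in range(200)]

    answer, panel = make_challenge(unused, decoys)
    print(f"show {len(panel)} images; the user must pick {answer}")
    print(f"{len(unused)} certificate images left before retraining")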

Subjects were able to recognize previously seen pictures with better than 90 percent accuracy for up to three months. According to the researchers' calculations, the chance that an imposter could pass by guessing correctly four times in a row is less than a thousandth of one percent.
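
The arithmetic behind that figure is easy to reproduce. Using the same assumed panel of 18 images per round (the article does not give the number), four lucky guesses in a row are indeed rarer than a thousandth of one percent:

    # Chance that a random guesser passes four recognition rounds in a row.
    PANEL_SIZE = 18              # assumed: 1 certificate among 17 decoys
    ROUNDS = 4

    p_round = 1 / PANEL_SIZE
    p_all = p_round ** ROUNDS
    print(f"four in a row: {p_all:.2e} = {p_all * 100:.6f}%")
    # -> 9.53e-06, i.e. 0.000953%: under a thousandth of one percent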

Picture groups whose individual differences were more distinct were easier to retain over time, and recognition was just as good when picture groups contained six to nine pictures as when they contained just two pictures, according to Kirkpatrick.

In similar tests using pseudo words that are pronounceable in English but do not exist as valid words, accuracy rates varied from 70 to 90 percent over a three-month period. In similar tests using artificial grammar patterns accuracy rates varied more widely, with the best subject achieving a rate of 75 percent.

It is not difficult to make the basic scheme work, but there are challenges in making it practical, said Kirkpatrick. "We're finding many challenges in making the scheme compact, making it possible to use a smaller set of learned images repeatedly without giving the secret away to an eavesdropper, in making training easy and pleasant," he said.

The researchers are working on improving training, on identifying what learned information is most widely accessible, and on identifying variants of the scheme that meet the needs of different security levels, said Kirkpatrick.

Eventually, the method could be used as a part of more elaborate security systems, according to Kirkpatrick.

"I like the idea of developing computer-human interfaces in which the computer is a skeptic [and so] doesn't perform the actions of which it is capable until the human has convinced it that the need is genuine and the human is an appropriate person for whom to perform this action," he said. "This might lead to greater safety for all of us."

The method could be used practically within two years, according to Kirkpatrick. Kirkpatrick's research colleague was Daphna Weinshall. The researchers presented the work at the Computer Human Interaction (CHI) 2004 conference in Vienna, Austria, on April 24 to 29.

LINUX: Red Hat Strengthens Its Global Linux Team

Linux News: Open Source: Red Hat Strengthens Its Global Linux Team

VIRUS: Korgo raises zombie PC army

Korgo raises zombie PC army | The Register

Korgo raises zombie PC army
By John Leyden (john.leyden@theregister.co.uk)
Published Thursday 3rd June 2004 11:05 GMT

Anti-virus firms have raised the peril index of the Korgo worm up a notch following the spread of several new variants this week.

Korgo (http://www.f-secure.com/v-descs/korgo.shtml) (aka Padobot) exploits the Microsoft Windows Local Security Authority Subsystem Service (LSASS) vulnerability (http://www.microsoft.com/technet/security/bulletin/MS04-011.mspx) to spread across vulnerable machines. The same flaw was infamously exploited by the Sasser worm and by a number of less prolific worms (http://www.theregister.co.uk/2004/05/11/sasser_saga_continues) since. Korgo has some nasty tricks up its sleeve, but the worm is far less prolific than Sasser.

The worm was written by the Russian Hangup Team virus group, according to Finnish AV firm F-Secure. All seven variants of the worm are very similar.

Korgo-A and its variants are written in C++ and are approximately 10KB in size, packed using UPX. When launched, the worm copies itself to the Windows system directory under a random name and registers this file in the system registry auto-run key. It then begins randomly scanning for further machines to attack on TCP port 445. It also listens on TCP ports 113, 3067 and other random ports, allowing hackers backdoor access to infected (zombie) machines. Compromised machines also attempt to connect to several IRC servers to receive commands and transmit data to their controllers.
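
Since every variant registers itself under a random name in the auto-run key, listing that key is a sensible first triage step on a machine you suspect. Here is a minimal Windows-only sketch (mine, using Python's standard winreg module) that dumps the Run entries so an unfamiliar, randomly named executable stands out:

    import winreg

    RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"
    HIVES = [(winreg.HKEY_LOCAL_MACHINE, "HKLM"),
             (winreg.HKEY_CURRENT_USER, "HKCU")]

    for hive, label in HIVES:
        try:
            key = winreg.OpenKey(hive, RUN_KEY)
        except OSError:
            continue
        index = 0
        while True:
            try:
                name, data, _type = winreg.EnumValue(key, index)
            except OSError:        # no more values under this key
                break
            print(f"{label}\\...\\Run  {name} = {data}")
            index += 1
        winreg.CloseKey(key)

Anything in that list pointing at a random-looking executable in the system directory deserves a closer look with an up-to-date scanner.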

Once infected, a victim machine will display an error message that the LSASS service has failed, commonly forcing a reboot. Standard defensive precautions apply against all variants of Korgo: patch Windows boxes, update anti-virus signature files and use firewalls. Most Windows users should already have these precautions in place post Sasser. Let's be careful out there. ®

VIRUS: Korgo Worms onto the Net

E-Commerce News: Industry News: Korgo Worms onto the Net

By Elizabeth Millard
E-Commerce Times
06/04/04 10:40 AM PT

"My advice for anyone worried about the Korgo worm is to apply the patch, for goodness sake," Graham Cluley, Sophos senior technology consultant, in an interview with the E-Commerce Times. "How many more alarm calls do you need?"


A virus that was first seen on May 22nd has been worming its way across the Internet, stealing personal information in the process. Dubbed "Korgo," the worm exploits the same vulnerabilities and spreads in the same way as the Sasser worm that caused havoc last month.

Although the virus is not yet widespread, security companies like Symantec (Nasdaq: SYMC) and F-Secure have issued warnings because of Korgo's effectiveness at obtaining personal financial information.
On Friday, Symantec upgraded the threat level due to an increase in submissions.

How the Worm Turns
Officially known as W32.Korgo, the worm can propagate by exploiting a Microsoft Windows vulnerability first announced in mid-April, the Microsoft LSASS buffer overrun vulnerability. The threat affects users of Windows 2000 and Windows XP.

Korgo spreads via the Web, which means that it does not need to be launched by a user, as e-mail viruses do.
Security firm F-Secure has noted that the worm is written by the Russian Hangup Team virus group. There are a number of variants, and the virus is currently up to Korgo.E.

Korgo is one of a number of viruses that have appeared since Microsoft's announcement of the vulnerability. Antivirus firm Sophos has reported that it detected 959 new viruses during May. The company also estimates that there are approximately 90,000 viruses in circulation.
The reason for such a high number of living viruses is that very few ever get exterminated, F-Secure system engineer Tony Magallamez told the E-Commerce Times. "With any infection on the Internet, you see [their] decline," he said, "but they don't die."

Getting It Right
Although there were some reports that the virus contained a key-logger that could capture a user's keystrokes, that information has turned out to be incorrect, said Graham Cluley, Sophos senior technology consultant, in an interview with the E-Commerce Times.

He noted that the virus opens up a backdoor through which a hacker could enter and install a key-logger program undetected; however, Korgo itself does not contain such an application.
Another misleading report that has cropped up is that Korgo is designed specifically to target credit card numbers and passwords, and was created for that purpose.

Magallamez said that the worm is not quite that unique. Like others of its kind, it can harvest any information that is on an infected PC. It also has the ability to connect to an outside server and follow commands from that machine.
"It's not really a groundbreaking virus," said Magallamez. "It's just trying to take advantage of machines that haven't been patched yet."

F-Secure has warned that if Korgo gets onto an unpatched machine, it can be effective at getting personal data like passwords and credit card numbers. But Magallamez noted that the level for worry is low. "It hasn't even reached our second level of alert," he said.
Wake Up Call

Because Korgo exploits the same vulnerability that Sasser did, it only affects those computer users and businesses that did not install a security patch during the Sasser threat.
For this reason, only a small number of users should be affected. "You'll get it if you slept through Sasser," Cluley said. "And of all the worms to sleep through, it's likely not many people missed that one, given all the attention it received."

Cluley added that home users are more likely to be affected than businesses, since they are not as diligent about installing security patches. But even most home users should have taken some action by now, given the number of viruses that have appeared since the Sasser worm.
"My advice for anyone worried about the Korgo worm is to apply the patch, for goodness sake," said Cluley. "How many more alarm calls do you need?"

VIRUS: W32.Korgo.A

Symantec Security Response - W32.Korgo.A

W32.Korgo.A is a worm that attempts to exploit the Microsoft Windows LSASS vulnerability described in Microsoft Security Bulletin MS04-011. The worm also listens on TCP ports 113, 2041, and 3067, and allows unauthorized access to the infected computer.

Note: Virus definitions dated prior to May 23, 2004 detect this threat as Bloodhound.Packed.

Also Known As: Worm.Win32.Padobot.b [Kaspersky], Exploit-Lsass.gen [McAfee]

Type: Worm
Infection Length: 34,880 bytes
Systems Affected: Windows 2000, Windows XP
Systems Not Affected: DOS, Linux, Macintosh, Macintosh OS X, Novell Netware, OS/2, UNIX, Windows 3.x, Windows 95, Windows 98, Windows Me, Windows NT

LINUX: Linux gets trial 'NX' security support

Linux gets trial 'NX' security support | CNET News.com

--[ BOOK REVIEW: Teach Yourself Unix in 24 Hours



# By: Dave Taylor, James C. Armstrong Jr.
# Paperback: 512 pages
# Publisher: SAMS; 2nd edition (December 21, 1998)
# ASIN: 0672314800

Back cover's book description:

Sams Teach Yourself UNIX in 24 Hours, Second Edition is designed to take users from novice to accomplished user in just 24 one-hour sessions. Written by experts in the field, Sams Teach Yourself UNIX in 24 Hours, Second Edition starts off with an introduction to UNIX, then covers file handling, pipes and filters, the vi and EMACS text editors, shells, job control, printing, and connecting to remote computers via the Internet. The book also discusses UNIX programming and includes a handy glossary of terms. This edition of the book has updated coverage of newly available Web browsers and mail and news tools, as well as more focused coverage of the topics readers of the first edition asked about most.


Teach Yourself Unix in 24 Hours is a newbie's book that gives you an introduction to the wonderful world of Unix. The book, however, assumes that you already have some sort of Unix installed, be it BSD, Linux, Version 7, or any other flavor, so it does not cover how to install Unix; that would differ from one flavor to the next. I did have several issues with the book. The first is the title, which is total nonsense to say the least. The second is the chapters: I believe the author could have covered a lot more topics, but did not because he was limited by the title (the book says each chapter should take an hour). The last is the formatting, which was somewhat confusing; it was sometimes unclear whether text was input or output, and some topics were not explained as well as they should have been.

Overall I would have to rate this book a 7 out of 10. It might serve as a second book, to get your hands even dirtier with Unix. There is, however, a newer edition (the third), which may have fixed some of these problems.


--------------------------------------------------------------------
Reviews done by others on the book:

Amazon

ARTICLE: Free Hardware: a Trojan Horse?

Free Hardware: a Trojan Horse?

Recently, Sun Microsystems and Microsoft have been pushing a vision of the future in which computing hardware is free — actually, sold with software subscriptions. IT customers tempted to sign on should take a lesson from the recent box-office bomb Troy and beware of suits bearing gifts. Because it's very clear that this version of free hardware will actually be controlled hardware — controlled, specifically, to effectively seize control of the customer's operation.

At first sight, their pitch might sound attractive. IT shops don't care about hardware or software in isolation; they're customers for a bundled good, the capability to store and process information. Why not rent a bundle with a low monthly cost, hardware and software both serviced by the same vendor and no hardware-to-software integration problems?

There's a clue that something is amiss here in the underlying economics. Hardware has a relatively high fixed unit cost because you have to push bulk matter around to make it and deploy it where it's needed. Software is just bits that can be copied perfectly and fly over a cable at lightspeed. Therefore it appears that a world of free software and costly hardware makes more sense than one of costly software and free hardware. This turns out not to change when you look at the expected investment to produce the first copy of a hardware or software design. So why are Sun and Microsoft reversing the proposition?

The key background fact is that Sun's market share is collapsing under pressure from white-box vendors, and Microsoft is losing its highest-margin corporate customers in places like the financial-services industry to Linux. The Gartner Group reported that Linux server shipments showed 57% growth in 1Q2004 alone. Over the same period, Sun's revenues slumped 12%. The recent Sun-Microsoft pact is a defensive circling of the wagons. Anything that they say has to be evaluated as an attempt to regain the market control they have been losing.

The devil in their Free Hardware story is, at least at first, in the details. Anyone who believes a vendor is going to give away hardware under a contract that allows the customer to immediately strip off the software and repurpose it probably still hasn't faced the truth about the Tooth Fairy. There are really only two scenarios here; either the hardware will be so cheap that the customer would get no gain from the deal, or (if it has real value) the lease contract will have a clause requiring that it be used only with the specified software.

Most likely the whole bundle would be structured as a lease deal — the vendor retains ownership of both hardware and software. And there is the gotcha — because under those assumptions, the vendor wouldn't just own a critical piece of your infrastructure, they'd get an even harder hammerlock on it than traditional proprietary software licenses give them.

Are you doubting this? Then consider where you are if the vendor decides to pile on more restrictions on what you can do with the machine, such as a prohibition on third-party apps and DRM to enforce it. You couldn't very well object. They're only protecting their investment, after all — it's not your property.

Not your property. Owning things may be inconvenient, but without ownership you lose control. Jonathan Schwartz and Bill Gates don't want you to notice that connection; they want you to let their free Trojan horse inside the walls, to lose ownership of your IT infrastructure and give control of it to them. If you make that decision, you'll pay for it in ever more restrictive lock-in and ever-higher future prices.

What's the alternative? Open-source software over commodity hardware; you own it all, and neither Sun nor Microsoft controls a single bit of your information. This latest propaganda initiative of theirs is a desperation maneuver, because if the notoriously conservative Gartner Group's figures are correct IT managers have seen the way out and are escaping the proprietary prison in droves now.

This Trojan horse will very likely be burnt on the beach, as King Priam should have done to Agamemnon's men. One wonders what invasive tactic Sun and Microsoft will think up next.

LINUX: Open Letter to Sun: The Case for Open-Sourcing Java

Open Letter to Sun: The Case for Open-Sourcing Java - OSNews.com

LINK: Brad Abrams

Brad Abrams

LINK: gtk-sharp

gtk-sharp

LINUX: Installing Mono: Running .NET on Linux and Windows by Charlie Calvert

Installing Mono: Running .NET on Linux and Windows by Charlie Calvert

LINK: Mono Weekly News Archives

Mono Weekly News Archives

M$: Microsoft bars Windows pirates

BBC NEWS | Technology | Microsoft bars Windows pirates

OT: First GMail spam from a GMail member

From: Carolina Ramos
To: minombre@gmail.com, philipp.lenssen@gmail.com, craigf@gmail.com, sudhakar@gmail.com, alexb86@gmail.com, skarin@gmail.com, instantenemy@gmail.com, geeknews@gmail.com, correo@gmail.com, stevemays@gmail.com, schwartz@gmail.com, afriguru@gmail.com, d1taylor@gmail.com, highways@gmail.com, jorge.laranjo@gmail.com, andrew@gmail.com, luke.burton@gmail.com, schiano@gmail.com, whoever@gmail.com, mwebinc@gmail.com, themack@gmail.com, dextrous@gmail.com, bazabba@gmail.com, sdelmont@gmail.com, davidtsh@gmail.com, bxgirl@gmail.com, barry.schwartz@gmail.com, tinyfool@gmail.com, mayank.jain@gmail.com, robert@gmail.com, george@gmail.com, superman@gmail.com, sunday@gmail.com, carlbanks@gmail.com, ctaylor@gmail.com, qodsec@gmail.com, thexvr@gmail.com, philoglot@gmail.com, velton@gmail.com, zahrahb@gmail.com, frederick@gmail.com, samantha@gmail.com, incubus@gmail.com, evanescense@gmail.com, myimmortal@gmail.com, cuentas@gmail.com, kamelperez@gmail.com, playboy@gmail.com, yourname@gmail.com, mathowie@gmail.com, jzawodn@gmail.com, freebsd@gmail.com, nintood@gmail.com, mcfarlane@gmail.com, lostlord@gmail.com, georgebush@gmail.com, atomjack@gmail.com, american@gmail.com, gramophone@gmail.com, maryhadda@gmail.com, osamuh@gmail.com, ruchira.datta@gmail.com, pinwiz@gmail.com, kingsley@gmail.com, schnapple@gmail.com, donald@gmail.com, cocacola@gmail.com, mcdonalds@gmail.com, whomever@gmail.com, jakeallen@gmail.com, jeremy@gmail.com, kevinfox@gmail.com, orkutb@gmail.com, compie@gmail.com, wtluke@gmail.com, mark.rogers@gmail.com, joshuadfranklin@gmail.com, gessaman@gmail.com, johnsmith@gmail.com, irina.v@gmail.com, teknorat@gmail.com, french@gmail.com, english@gmail.com, cohenj@gmail.com, mtabed@gmail.com, hellbender@gmail.com, angelica@gmail.com, vuarnet@gmail.com, employer@gmail.com, jhesler@gmail.com, shedevil@gmail.com, shemale@gmail.com, jerevi@gmail.com, account@gmail.com, hacker@gmail.com, jimjacob@gmail.com, jim.jacob@gmail.com, firefox@gmail.com, crazypunk@gmail.com
Date: Sat, 5 Jun 2004 05:50:11 -0300
Subject: Gmail Test 2
Hi I am testing my account of the Gmail, if the Spam functions, no
that I am one, but can use to advantage this message to make Spam :)
I use to advantage to ask for to an account for mine little brother
that he is annoying me because they wants an account too, but I still
cant invite. Who power to invite orders a message for him at
cavemail@terra.com.br
best wishes, Carolina

[The same message then repeats in Portuguese, Spanish, French, Greek, Italian, Japanese, Korean and German.]

LINK: Cprogramming.com Tutorial: Linked Lists

Cprogramming.com Tutorial: Linked Lists

LINK: Google Search: c programming homework

Google Search: c programming homework

Friday, June 04, 2004

LINUX: Ken Brown Re: Copyright

Re: Copyright

LINK: Index of /Linux.old

Index of /Linux.old

LINUX: Linus vs. Tanenbaum

Linus vs. Tanenbaum

Linus vs. Tanenbaum

* Linux is obsolete
o Andy Tanenbaum
+ Microkernel vs Monolithic System
+ Portability
o Linus Benedict Torvalds
o Andy Tanenbaum
o Linus Benedict Torvalds
o Linus Benedict Torvalds
* Unhappy campers
o Andy Tanenbaum
o Fred Fish
o Andy Tanenbaum
o Linus Benedict Torvalds

This is an extract of the discussion between Andy Tanenbaum and Linus Benedict Torvalds about kernel design, free software, and more. Only contributions from the main actors are included. The complete archive is also available, but only in BABYL format. You can use Emacs RMAIL to read it conveniently.

Per Abrahamsen
Linux is obsolete
Andy Tanenbaum

From: ast@cs.vu.nl (Andy Tanenbaum)
Newsgroups: comp.os.minix
Subject: LINUX is obsolete
Date: 29 Jan 92 12:12:50 GMT
Organization: Fac. Wiskunde & Informatica, Vrije Universiteit, Amsterdam

I was in the U.S. for a couple of weeks, so I haven't commented much on LINUX (not that I would have said much had I been around), but for what it is worth, I have a couple of comments now.

As most of you know, for me MINIX is a hobby, something that I do in the evening when I get bored writing books and there are no major wars, revolutions, or senate hearings being televised live on CNN. My real job is a professor and researcher in the area of operating systems.

As a result of my occupation, I think I know a bit about where operating systems are going in the next decade or so. Two aspects stand out:
Microkernel vs Monolithic System
Most older operating systems are monolithic, that is, the whole operating system is a single a.out file that runs in 'kernel mode.' This binary contains the process management, memory management, file system and the rest. Examples of such systems are UNIX, MS-DOS, VMS, MVS, OS/360, MULTICS, and many more.

The alternative is a microkernel-based system, in which most of the OS runs as separate processes, mostly outside the kernel. They communicate by message passing. The kernel's job is to handle the message passing, interrupt handling, low-level process management, and possibly the I/O. Examples of this design are the RC4000, Amoeba, Chorus, Mach, and the not-yet-released Windows/NT.
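
For readers who have never seen the difference in code, here is a deliberately tiny sketch of the two shapes Tanenbaum describes (my illustration in Python, not anything from MINIX or Linux): in the monolithic shape a service is a direct function call inside one program, while in the microkernel shape the same service lives in a separate process and is reached by message passing.

    from multiprocessing import Process, Queue

    # "Monolithic": the file service is a direct call inside the kernel.
    def fs_read(path):
        return f"contents of {path}"

    # "Microkernel": the file service is its own process; clients send it
    # messages, and the kernel's main job is carrying messages back and forth.
    def fs_server(requests, replies):
        while True:
            path = requests.get()
            if path is None:            # shutdown message
                return
            replies.put(f"contents of {path}")

    if __name__ == "__main__":
        print("monolithic:", fs_read("/etc/motd"))   # one direct call

        requests, replies = Queue(), Queue()
        server = Process(target=fs_server, args=(requests, replies))
        server.start()
        requests.put("/etc/motd")                    # message out...
        print("microkernel:", replies.get())         # ...and the reply back
        requests.put(None)
        server.join()

The message-passing version buys isolation (the server can fail without corrupting its caller) at the price of two queue hops per request, which is precisely the performance trade argued over below.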

While I could go into a long story here about the relative merits of the two designs, suffice it to say that among the people who actually design operating systems, the debate is essentially over. Microkernels have won. The only real argument for monolithic systems was performance, and there is now enough evidence showing that microkernel systems can be just as fast as monolithic systems (e.g., Rick Rashid has published papers comparing Mach 3.0 to monolithic systems) that it is now all over but the shoutin'.

MINIX is a microkernel-based system. The file system and memory management are separate processes, running outside the kernel. The I/O drivers are also separate processes (in the kernel, but only because the brain-dead nature of the Intel CPUs makes that difficult to do otherwise). LINUX is a monolithic style system. This is a giant step back into the 1970s. That is like taking an existing, working C program and rewriting it in BASIC. To me, writing a monolithic system in 1991 is a truly poor idea.

Portability
Once upon a time there was the 4004 CPU. When it grew up it became an 8008. Then it underwent plastic surgery and became the 8080. It begat the 8086, which begat the 8088, which begat the 80286, which begat the 80386, which begat the 80486, and so on unto the N-th generation. In the meantime, RISC chips happened, and some of them are running at over 100 MIPS. Speeds of 200 MIPS and more are likely in the coming years. These things are not going to suddenly vanish. What is going to happen is that they will gradually take over from the 80x86 line. They will run old MS-DOS programs by interpreting the 80386 in software. (I even wrote my own IBM PC simulator in C, which you can get by FTP from ftp.cs.vu.nl = 192.31.231.42 in dir minix/simulator.) I think it is a gross error to design an OS for any specific architecture, since that is not going to be around all that long.

MINIX was designed to be reasonably portable, and has been ported from the Intel line to the 680x0 (Atari, Amiga, Macintosh), SPARC, and NS32016. LINUX is tied fairly closely to the 80x86. Not the way to go.

Don't get me wrong, I am not unhappy with LINUX. It will get all the people who want to turn MINIX in BSD UNIX off my back. But in all honesty, I would suggest that people who want a **MODERN** "free" OS look around for a microkernel-based, portable OS, like maybe GNU or something like that.

Andy Tanenbaum (ast@cs.vu.nl)

P.S. Just as a random aside, Amoeba has a UNIX emulator (running in user space), but it is far from complete. If there are any people who would like to work on that, please let me know. To run Amoeba you need a few 386s, one of which needs 16M, and all of which need the WD Ethernet card.

Linus Benedict Torvalds

From: torvalds@klaava.Helsinki.FI (Linus Benedict Torvalds)
Newsgroups: comp.os.minix
Subject: Re: LINUX is obsolete
Date: 29 Jan 92 23:14:26 GMT
Organization: University of Helsinki

Well, with a subject like this, I'm afraid I'll have to reply. Apologies to minix-users who have heard enough about linux anyway. I'd like to be able to just "ignore the bait", but ... Time for some serious flamefesting!

In article <12595@star.cs.vu.nl> ast@cs.vu.nl (Andy Tanenbaum) writes:

I was in the U.S. for a couple of weeks, so I haven't commented much on LINUX (not that I would have said much had I been around), but for what it is worth, I have a couple of comments now.

As most of you know, for me MINIX is a hobby, something that I do in the evening when I get bored writing books and there are no major wars, revolutions, or senate hearings being televised live on CNN. My real job is a professor and researcher in the area of operating systems.

You use this as an excuse for the limitations of minix? Sorry, but you loose: I've got more excuses than you have, and linux still beats the pants of minix in almost all areas. Not to mention the fact that most of the good code for PC minix seems to have been written by Bruce Evans.

Re 1: you doing minix as a hobby - look at who makes money off minix, and who gives linux out for free. Then talk about hobbies. Make minix freely available, and one of my biggest gripes with it will disappear. Linux has very much been a hobby (but a serious one: the best type) for me: I get no money for it, and it's not even part of any of my studies in the university. I've done it all on my own time, and on my own machine.

Re 2: your job is being a professor and researcher: That's one hell of a good excuse for some of the brain-damages of minix. I can only hope (and assume) that Amoeba doesn't suck like minix does.

1. MICROKERNEL VS MONOLITHIC SYSTEM

True, linux is monolithic, and I agree that microkernels are nicer. With a less argumentative subject, I'd probably have agreed with most of what you said. From a theoretical (and aesthetical) standpoint linux looses. If the GNU kernel had been ready last spring, I'd not have bothered to even start my project: the fact is that it wasn't and still isn't. Linux wins heavily on points of being available now.

MINIX is a microkernel-based system. [deleted, but not so that you miss the point ] LINUX is a monolithic style system.

If this was the only criterion for the "goodness" of a kernel, you'd be right. What you don't mention is that minix doesn't do the micro-kernel thing very well, and has problems with real multitasking (in the kernel). If I had made an OS that had problems with a multithreading filesystem, I wouldn't be so fast to condemn others: in fact, I'd do my damndest to make others forget about the fiasco.

[ yes, I know there are multithreading hacks for minix, but they are hacks, and bruce evans tells me there are lots of race conditions ]

2. PORTABILITY

"Portability is for people who cannot write new programs"
-me, right now (with tongue in cheek)

The fact is that linux is more portable than minix. What? I hear you say. It's true - but not in the sense that ast means: I made linux as conformant to standards as I knew how (without having any POSIX standard in front of me). Porting things to linux is generally /much/ easier than porting them to minix.

I agree that portability is a good thing: but only where it actually has some meaning. There is no idea in trying to make an operating system overly portable: adhering to a portable API is good enough. The very /idea/ of an operating system is to use the hardware features, and hide them behind a layer of high-level calls. That is exactly what linux does: it just uses a bigger subset of the 386 features than other kernels seem to do. Of course this makes the kernel proper unportable, but it also makes for a /much/ simpler design. An acceptable trade-off, and one that made linux possible in the first place.

I also agree that linux takes the non-portability to an extreme: I got my 386 last January, and linux was partly a project to teach me about it. Many things should have been done more portably if it would have been a real project. I'm not making overly many excuses about it though: it was a design decision, and last april when I started the thing, I didn't think anybody would actually want to use it. I'm happy to report I was wrong, and as my source is freely available, anybody is free to try to port it, even though it won't be easy.

Linus

PS. I apologise for sometimes sounding too harsh: minix is nice enough if you have nothing else. Amoeba might be nice if you have 5-10 spare 386's lying around, but I certainly don't. I don't usually get into flames, but I'm touchy when it comes to linux :)

Andy Tanenbaum

From: ast@cs.vu.nl (Andy Tanenbaum)
Newsgroups: comp.os.minix
Subject: Re: LINUX is obsolete
Date: 30 Jan 92 13:44:34 GMT
Organization: Fac. Wiskunde & Informatica, Vrije Universiteit, Amsterdam

In article <1992Jan29.231426.20469@klaava.Helsinki.FI> torvalds@klaava.Helsinki.FI (Linus Benedict Torvalds) writes:

You use this [being a professor] as an excuse for the limitations of minix?

The limitations of MINIX relate at least partly to my being a professor: An explicit design goal was to make it run on cheap hardware so students could afford it. In particular, for years it ran on a regular 4.77 MHZ PC with no hard disk. You could do everything here including modify and recompile the system. Just for the record, as of about 1 year ago, there were two versions, one for the PC (360K diskettes) and one for the 286/386 (1.2M). The PC version was outselling the 286/386 version by 2 to 1. I don't have figures, but my guess is that the fraction of the 60 million existing PCs that are 386/486 machines as opposed to 8088/286/680x0 etc is small. Among students it is even smaller. Making software free, but only for folks with enough money to buy first class hardware is an interesting concept. Of course 5 years from now that will be different, but 5 years from now everyone will be running free GNU on their 200 MIPS, 64M SPARCstation-5.

Re 2: your job is being a professor and researcher: That's one hell of a good excuse for some of the brain-damages of minix. I can only hope (and assume) that Amoeba doesn't suck like minix does.

Amoeba was not designed to run on an 8088 with no hard disk.

If this was the only criterion for the "goodness" of a kernel, you'd be right. What you don't mention is that minix doesn't do the micro-kernel thing very well, and has problems with real multitasking (in the kernel). If I had made an OS that had problems with a multithreading filesystem, I wouldn't be so fast to condemn others: in fact, I'd do my damndest to make others forget about the fiasco.

A multithreaded file system is only a performance hack. When there is only one job active, the normal case on a small PC, it buys you nothing and adds complexity to the code. On machines fast enough to support multiple users, you probably have enough buffer cache to insure a high cache hit rate, in which case multithreading also buys you nothing. It is only a win when there are multiple processes actually doing real disk I/O. Whether it is worth making the system more complicated for this case is at least debatable.

I still maintain the point that designing a monolithic kernel in 1991 is a fundamental error. Be thankful you are not my student. You would not get a high grade for such a design :-)

The fact is that linux is more portable than minix. What? I hear you say. It's true - but not in the sense that ast means: I made linux as conformant to standards as I knew how (without having any POSIX standard in front of me). Porting things to linux is generally /much/ easier than porting them to minix.

MINIX was designed before POSIX, and is now being (slowly) POSIXized as everyone who follows this newsgroup knows. Everyone agrees that user-level standards are a good idea. As an aside, I congratulate you for being able to write a POSIX-conformant system without having the POSIX standard in front of you. I find it difficult enough after studying the standard at great length.

My point is that writing a new operating system that is closely tied to any particular piece of hardware, especially a weird one like the Intel line, is basically wrong. An OS itself should be easily portable to new hardware platforms. When OS/360 was written in assembler for the IBM 360 25 years ago, they probably could be excused. When MS-DOS was written specifically for the 8088 ten years ago, this was less than brilliant, as IBM and Microsoft now only too painfully realize. Writing a new OS only for the 386 in 1991 gets you your second 'F' for this term. But if you do real well on the final exam, you can still pass the course.

Prof. Andrew S. Tanenbaum (ast@cs.vu.nl)
Linus Benedict Torvalds

From: torvalds@klaava.Helsinki.FI (Linus Benedict Torvalds)
Newsgroups: comp.os.minix
Subject: Re: LINUX is obsolete
Date: 31 Jan 92 10:33:23 GMT
Organization: University of Helsinki

In article <12615@star.cs.vu.nl> ast@cs.vu.nl (Andy Tanenbaum) writes:

The limitations of MINIX relate at least partly to my being a professor: An explicit design goal was to make it run on cheap hardware so students could afford it.

All right: a real technical point, and one that made some of my comments inexcusable. But at the same time you shoot yourself in the foot a bit: now you admit that some of the errors of minix were that it was too portable: including machines that weren't really designed to run unix. That assumption lead to the fact that minix now cannot easily be extended to have things like paging, even for machines that would support it. Yes, minix is portable, but you can rewrite that as "doesn't use any features", and still be right.

A multithreaded file system is only a performance hack.

Not true. It's a performance hack /on a microkernel/, but it's an automatic feature when you write a monolithic kernel - one area where microkernels don't work too well (as I pointed out in my personal mail to ast). When writing a unix the "obsolete" way, you automatically get a multithreaded kernel: every process does it's own job, and you don't have to make ugly things like message queues to make it work efficiently.

Besides, there are people who would consider "only a performance hack" vital: unless you have a cray-3, I'd guess everybody gets tired of waiting on the computer all the time. I know I did with minix (and yes, I do with linux too, but it's /much/ better).

I still maintain the point that designing a monolithic kernel in 1991 is a fundamental error. Be thankful you are not my student. You would not get a high grade for such a design :-)

Well, I probably won't get too good grades even without you: I had an argument (completely unrelated - not even pertaining to OS's) with the person here at the university that teaches OS design. I wonder when I'll learn :)

My point is that writing a new operating system that is closely tied to any particular piece of hardware, especially a weird one like the Intel line, is basically wrong.

But /my/ point is that the operating system /isn't/ tied to any processor line: UNIX runs on most real processors in existence. Yes, the /implementation/ is hardware-specific, but there's a HUGE difference. You mention OS/360 and MS-DOG as examples of bad designs as they were hardware-dependent, and I agree. But there's a big difference between these and linux: linux API is portable (not due to my clever design, but due to the fact that I decided to go for a fairly- well-thought-out and tested OS: unix.)

If you write programs for linux today, you shouldn't have too many surprises when you just recompile them for Hurd in the 21st century. As has been noted (not only by me), the linux kernel is a miniscule part of a complete system: Full sources for linux currently runs to about 200kB compressed - full sources to a somewhat complete developement system is at least 10MB compressed (and easily much, much more). And all of that source is portable, except for this tiny kernel that you can (provably: I did it) re-write totally from scratch in less than a year without having /any/ prior knowledge.

In fact the /whole/ linux kernel is much smaller than the 386-dependent things in mach: i386.tar.Z for the current version of mach is well over 800kB compressed (823391 bytes according to nic.funet.fi). Admittedly, mach is "somewhat" bigger and has more features, but that should still tell you something.

Linus
Linus Benedict Torvalds

From: torvalds@klaava.Helsinki.FI (Linus Benedict Torvalds)
Newsgroups: comp.os.minix
Subject: Apologies (was Re: LINUX is obsolete)
Date: 30 Jan 92 15:38:16 GMT
Organization: University of Helsinki

In article <1992Jan29.231426.20469@klaava.Helsinki.FI> I wrote:

Well, with a subject like this, I'm afraid I'll have to reply.

And reply I did, with complete abandon, and no thought for good taste and netiquette. Apologies to ast, and thanks to John Nall for a friendy "that's not how it's done"-letter. I over-reacted, and am now composing a (much less acerbic) personal letter to ast. Hope nobody was turned away from linux due to it being (a) possibly obsolete (I still think that's not the case, although some of the criticisms are valid) and (b) written by a hothead :-)

Linus "my first, and hopefully last flamefest" Torvalds
Unhappy campers
Andy Tanenbaum

From: ast@cs.vu.nl (Andy Tanenbaum)
Newsgroups: comp.os.minix
Subject: Unhappy campers
Date: 3 Feb 92 22:46:40 GMT
Organization: Fac. Wiskunde & Informatica, Vrije Universiteit, Amsterdam

I've been getting a bit of mail lately from unhappy campers. (Actually 10 messages from the 43,000 readers may seem like a lot, but it is not really.) There seem to be three sticking points:

1. Monolithic kernels are just as good as microkernels
2. Portability isn't so important
3. Software ought to be free

If people want to have a serious discussion of microkernels vs. monolithic kernels, fine. We can do that in comp.os.research. But please don't sound off if you have no idea of what you are talking about. I have helped design and implement 3 operating systems, one monolithic and two micro, and have studied many others in detail. Many of the arguments offered are nonstarters (e.g., microkernels are no good because you can't do paging in user space-- except that Mach DOES do paging in user space).

If you don't know much about microkernels vs. monolithic kernels, there is some useful information in a paper I coauthored with Fred Douglis, Frans Kaashoek and John Ousterhout in the Dec. 1991 issue of COMPUTING SYSTEMS, the USENIX journal). If you don't have that journal, you can FTP the paper from ftp.cs.vu.nl (192.31.231.42) in directory amoeba/papers as comp_sys.tex.Z (compressed TeX source) or comp_sys.ps.Z (compressed PostScript). The paper gives actual performance measurements and supports Rick Rashid's conclusion that microkernel based systems are just as efficient as monolithic kernels.

As to portability, there is hardly any serious discussion possible any more. UNIX has been ported to everything from PCs to Crays. Writing a portable OS is not much harder than a nonportable one, and all systems should be written with portability in mind these days. Surely Linus' OS professor pointed this out. Making OS code portable is not something I invented in 1987.

While most people can talk rationally about kernel design and portability, the issue of free-ness is 100% emotional. You wouldn't believe how much [expletive deleted] I have gotten lately about MINIX not being free. MINIX costs $169, but the license allows making two backup copies, so the effective price can be under $60. Furthermore, professors may make UNLIMITED copies for their students. Coherent is $99. FSF charges >$100 for the tape its "free" software comes on if you don't have Internet access, and I have never heard anyone complain. 4.4 BSD is $800. I don't really believe money is the issue. Besides, probably most of the people reading this group already have it.

A point which I don't think everyone appreciates is that making something available by FTP is not necessarily the way to provide the widest distribution. The Internet is still a highly elite group. Most computer users are NOT on it. It is my understanding from PH that the country where MINIX is most widely used is Germany, not the U.S., mostly because one of the (commercial) German computer magazines has been actively pushing it. MINIX is also widely used in Eastern Europe, Japan, Israel, South America, etc. Most of these people would never have gotten it if there hadn't been a company selling it.

Getting back to what "free" means, what about free source code? Coherent is binary only, but MINIX has source code, just as LINUX does. You can change it any way you want, and post the changes here. People have been doing that for 5 years without problems. I have been giving free updates for years, too.

I think the real issue is something else. I've been repeatedly offered virtual memory, paging, symbolic links, window systems, and all manner of features. I have usually declined because I am still trying to keep the system simple enough for students to understand. You can put all this stuff in your version, but I won't put it in mine. I think it is this point which irks the people who say "MINIX is not free," not the $60.

An interesting question is whether Linus is willing to let LINUX become "free" of his control. May people modify it (ruin it?) and sell it? Remember the hundreds of messages with subject "Re: Your software sold for money" when it was discovered the MINIX Centre in England was selling diskettes with news postings, more or less at cost?

Suppose Fred van Kempen returns from the dead and wants to take over, creating Fred's LINUX and Linus' LINUX, both useful but different. Is that ok? The test comes when a sizable group of people want to evolve LINUX in a way Linus does not want. Until that actually happens the point is moot, however.

If you like Linus' philosophy rather than mine, by all means, follow him, but please don't claim that you're doing this because LINUX is "free." Just say that you want a system with lots of bells and whistles. Fine. Your choice. I have no argument with that. Just tell the truth.

As an aside, for those folks who don't read news headers, Linus is in Finland and I am in The Netherlands. Are we reaching a situation where another critical industry, free software, that had been totally dominated by the U.S. is being taken over by the foreign competition? Will we soon see President Bush coming to Europe with Richard Stallman and Rick Rashid in tow, demanding that Europe import more American free software?

Andy Tanenbaum (ast@cs.vu.nl)
Fred Fish

From: fnf@fishpond.uucp (Fred Fish)
Newsgroups: comp.os.minix
Subject: Re: Unhappy campers
Date: 4 Feb 92 20:57:40 GMT
Organization: Amiga Library Distribution Services

In article <12667@star.cs.vu.nl> ast@cs.vu.nl (Andy Tanenbaum) writes:

While most people can talk rationally about kernel design and portability, the issue of free-ness is 100% emotional. You wouldn't believe how much [expletive deleted] I have gotten lately about MINIX not being free. MINIX costs $169, but the license allows making two backup copies, so the effective price can be under $60. Furthermore, professors may make UNLIMITED copies for their students. Coherent is $99. FSF charges >$100 for the tape its "free" software comes on if you don't have Internet access, and I have never heard anyone complain. 4.4 BSD is $800. I don't really believe money is the issue. Besides, probably most of the people reading this group already have it.

The distribution cost is not the problem. As you've noted, nobody complains about the FSF's distribution fee being too high. The problem, as I see it, is that there is only one legal source for the software for people who simply want a working release. And from watching the minix group since minix first became available, my impression is that nobody enjoys dealing with PH for a whole host of reasons.

I think the real issue is something else. I've been repeatedly offered virtual memory, paging, symbolic links, window systems, and all manner of features. I have usually declined because I am still trying to keep the system simple enough for students to understand. You can put all this stuff in your version, but I won't put it in mine. I think it is this point which irks the people who say "MINIX is not free," not the $60.

If PH was not granted a monopoly on distribution, it would have been possible for all of the interested minix hackers to organize and set up a group that was dedicated to producing enhanced-minix. The aim of this group could have been to produce a single, supported version of minix with all of the commonly requested enhancements. This would have allowed minix to evolve in much the same way that gcc has evolved over the last few years. Sure there are variant versions of gcc, but most of the really good enhancements, bug fixes, etc. are eventually folded back into a master source base that future distributions derive from. Thus you would have been left in peace to continue your tight control over the educational version of minix, and everyone else that wanted more than an educational tool could put their energies into enhanced-minix.

The primary reason I've never gotten into using minix, after the initial excitement of hearing about its availability way back when, is that I have no interest in trying to apply random patches from all over the place, sort out the problems, and eventually end up with a system that does what I want it to, but which I can't pass on to anyone else.

The test comes when a sizable group of people want to evolve LINUX in a way Linus does not want. Until that actually happens the point is moot, however.

Where is the sizeable group of people that want to evolve gcc in a way that rms/FSF does not approve of?

Where is the sizeable group of people that want to evolve emacs in a way that rms/FSF doesn't approve of?

I'd say that if the primary maintainers of a large piece of useful, freely redistributable, software are at all responsive to incorporating useful enhancements and acting as the central repository and clearing house for the software, then these splinter groups simply do not have sufficient motivation to form. Having a single source for the software, and having the primary maintainer(s) be unresponsive to the desires of a large group of users, is the catalyst that causes these sorts of pressures; not the freedom of the software.

-Fred

--
|\/ o\ Fred Fish, 1835 E. Belmont Drive, Tempe, AZ 85284, USA
|/\__/ 1-602-491-0048 {asuvax,mcdphx,cygint,amix}!fishpond!fnf

Andy Tanenbaum

From: ast@cs.vu.nl (Andy Tanenbaum)
Newsgroups: comp.os.minix
Subject: Re: Unhappy campers
Date: 5 Feb 92 23:23:26 GMT
Organization: Fac. Wiskunde & Informatica, Vrije Universiteit, Amsterdam

In article <205@fishpond.uucp> fnf@fishpond.uucp (Fred Fish) writes:

If PH was not granted a monopoly on distribution, it would have been possible for all of the interested minix hackers to organize and set up a group that was dedicated to producing enhanced-minix. The aim of this group could have been to produce a single, supported version of minix with all of the commonly requested enhancements. This would have allowed minix to evolve in much the same way that gcc has evolved over the last few years.

This IS possible. If a group of people wants to do this, that is fine. I think co-ordinating 1000 prima donnas living all over the world will be as easy as herding cats, but there is no legal problem. When a new release is ready, just make a diff listing against 1.5 and post it or make it FTPable. While this will require some work on the part of the users to install it, it isn't that much work. Besides, I have shell scripts to make the diffs and install them. This is what Fred van Kempen was doing. What he did wrong was insist on the right to publish the new version, rather than diffs against the PH baseline. That cuts PH out of the loop, which, not surprisingly, they weren't wild about. If people still want to do this, go ahead.
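
The diff-and-patch cycle Tanenbaum describes is easy to sketch (a minimal illustration; the directory and file names are hypothetical, and GNU diff and patch are assumed):

    # maintainer: diff the enhanced tree against the stock PH 1.5 baseline
    diff -ruN minix-1.5 minix-1.5-enhanced > enhanced.diff
    # user: apply the posted diff on top of a licensed 1.5 tree
    cd minix-1.5
    patch -p1 < ../enhanced.diff

Because only the changes circulate, every recipient still needs the PH baseline, which is exactly what keeps PH in the loop and the arrangement legal.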

Of course, I am not necessarily going to put any of these changes in my version, so there is some work keeping the official and enhanced ones in sync, but I am willing to co-operate to minimize work. I did this for a long time with Bruce Evans and Frans Meulenbroeks.

If Linus wants to keep control of the official version, and a group of eager beavers want to go off in a different direction, the same problem arises. I don't think the copyright issue is really the problem. The problem is co-ordinating things. Projects like GNU, MINIX, or LINUX only hold together if one person is in charge. During the 1970s, when structured programming was introduced, Harlan Mills pointed out that the programming team should be organized like a surgical team--one surgeon and his or her assistants, not like a hog butchering team--give everybody an axe and let them chop away.

Anyone who says you can have a lot of widely dispersed people hack away on a complicated piece of code and avoid total anarchy has never managed a software project.

Where is the sizeable group of people that want to evolve gcc in a way that rms/FSF does not approve of?

A compiler is not something people have much emotional attachment to. If the language to be compiled is a given (e.g., an ANSI standard), there isn't much room for people to invent new features. An operating system has unlimited opportunity for people to implement their own favorite features.

Andy Tanenbaum (ast@cs.vu.nl)
Linus Benedict Torvalds

From: torvalds@klaava.Helsinki.FI (Linus Benedict Torvalds)
Newsgroups: comp.os.minix
Subject: Re: Unhappy campers
Date: 6 Feb 92 10:33:31 GMT
Organization: University of Helsinki

In article <12746@star.cs.vu.nl> ast@cs.vu.nl (Andy Tanenbaum) writes:

If Linus wants to keep control of the official version, and a group of eager beavers want to go off in a different direction, the same problem arises.

This is the second time I've seen this "accusation" from ast, who feels pretty good about commenting on a kernel he probably hasn't even seen. Or at least he hasn't asked me, or even read alt.os.linux about this. Just so that nobody takes his guess for the full truth, here's my standing on "keeping control", in 2 words (three?):

I won't.

The only control I've effectively been keeping on linux is that I know it better than anybody else, and I've made my changes available to ftp-sites etc. Those have become effectively official releases, and I don't expect this to change for some time: not because I feel I have some moral right to it, but because I haven't heard too many complaints, and it will be a couple of months before I expect to find people who have the same "feel" for what happens in the kernel. (Well, maybe people are getting there: tytso certainly made some heavy changes even to 0.10, and others have hacked it as well)

In fact I have sent out feelers about some "linux-kernel" mailing list which would make the decisions about releases, as I expect I cannot fully support all the features that will /have/ to be added: SCSI etc, that I don't have the hardware for. The response has been non-existent: people don't seem to be that eager to change yet. (well, one person felt I should ask around for donations so that I could support it - and if anybody has interesting hardware lying around, I'd be happy to accept it :)

The only thing the copyright forbids (and I feel this is eminently reasonable) is that other people start making money off it, and don't make source available etc... This may not be a question of logic, but I'd feel very bad if someone could just sell my work for money, when I made it available expressly so that people could play around with a personal project. I think most people see my point.

That aside, if Fred van Kempen wanted to make a super-linux, he's quite welcome. He won't be able to make much money on it (distribution fee only), and I don't think it's that good an idea to split linux up, but I wouldn't want to stop him even if the copyright let me.

I don't think the copyright issue is really the problem. The problem is co-ordinating things. Projects like GNU, MINIX, or LINUX only hold together if one person is in charge.

Yes, coordination is a big problem, and I don't think linux will move away from me as "head surgeon" for some time, partly because most people understand about these problems. But copyright /is/ an issue: if people feel I do a bad job, they can do it themselves. Likewise with gcc. The minix copyright, however, means that if someone feels he could make a better minix, he either has to make patches (which aren't that great whatever you say about them) or start off from scratch (and be attacked because you have other ideals).

Patches aren't much fun to distribute: I haven't made cdiffs for a single version of linux yet (I expect this to change: soon the patches will be so much smaller than the kernel that making both patches and a complete version available is a good idea - note that I'd still make the whole version available too). Patches upon patches are simply impractical, especially for people that may do changes themselves.

Where is the sizeable group of people that want to evolve gcc in a way that rms/FSF does not approve of?

A compiler is not something people have much emotional attachment to. If the language to be compiled is a given (e.g., an ANSI standard), there isn't much room for people to invent new features. An operating system has unlimited opportunity for people to implement their own favorite features.

Well, there's GNU emacs... Don't tell us people haven't got emotional attachment to editors :)

Linus

GROKLAW: Dennis Ritchie's Interview for Samizdat

GROKLAW: "Tuesday, June 01 2004 @ 09:25 PM EDTDennis Ritchie was mentioned in the AdTI press release about Ken Brown's forthcoming book, 'Samizdat', the book attacking Linus for not 'inven"

Tuesday, June 01 2004 @ 09:25 PM EDT

Dennis Ritchie was mentioned in the AdTI press release about Ken Brown's forthcoming book, "Samizdat", the book attacking Linus for not "inventing" Linux. As you will recall, the press release said the author's book was based on "extensive interviews" with Richard Stallman, Dennis Ritchie and Andrew Tanenbaum:

"In one of the few extensive studies on the source of open source code, Kenneth Brown, president of AdTI, traces the free software movement over three decades -- from its romantic but questionable beginnings, through its evolution to a commercial effort that draws on unpaid contributions from thousands of programmers. Brown's account is based on extensive interviews with more than two dozen leading technologists including Richard Stallman, Dennis Ritchie, and Andrew Tanenbaum."

Today, Dr. Ritchie told me that was overstating it, at least with respect to himself:

"I think that the teaser for it, mentioning extensive interviews with me among others, is overblown in my case. Brown sent an initial (email) probe asking for an interview, in response to which we invited AdTI to send some sample questions (which I answered). This happened just before Brown's visit to Tanenbaum. The only other interaction was a brief phone call from a staffer who asked only about a couple of fact things: how many lines of code in some early kernel, what date was it released."

So in his case, the "extensive interviews" consisted of one email. Here it is in its entirety, with his prologue to me, published with his permission. You will see Brown repeatedly trying to elicit negative responses from Ritchie, who replies at one point: "the specifications for Unix were always quite open". Mr. Brown, therefore, put out a press release saying something very different from what he was told by Dennis Ritchie, from my reading. Our thanks to Dr. Ritchie for putting this information on the public record.

*****************************

Brown sent an initial (email) probe asking for an interview, in response to which we invited AdTI to send some sample questions (which I answered). This happened just before Brown's visit to Tanenbaum. The only other interaction was a brief phone call from a staffer who asked only about a couple of fact things: how many lines of code in some early kernel, what date was it released.

The main communication was this, from me to Brown:

====

1) Tell me what the environment was like after ATT/Western Electric decided that they didn't want the Lyon's book around. How did you react to the Lyon's Book and its subsequent recall?

We in the research group reacted with great pleasure to Lions's book; it was very well done. Indeed the early Unix Support Group (that became USL etc.) were pleased as well, and in fact invited Lions for a couple of stays with them to help annotate more documentation. Lions also visited us later in the research group and did some annotation on early Plan 9.

What was decided from the licensing point of view (after 6th edition) was that teaching whole classes from the source might be worrisome, and I think he was asked not to continue doing this. The original book was never officially published except within UNSW for his course; however, it was reprinted by AT&T for internal purposes and it also was made available to AT&T/WEco Unix licensees. Lions was disappointed that things changed so that he couldn't teach courses from a newer edition, but I don't recall any animus.

As you may know, the 6th edition source and Lions's commentary about it was formally reprinted and published by Peer-to-Peer, which seems to have morphed into Annabooks, in 1997. Of course this was after much water had passed under the bridge, in particular after the earlier SCO had bought the rights from Novell, so it was SCO who approved this publication.

2) People say that the earliest version of Linux 1.0 had a very similar resemblance, line for line to Unix? What did you recall?

I have no idea whether this is true, since I've never looked, but I doubt it.

3) Have you ever discussed the Minix/Linux migration or any other topic with Professor Tannenbaum in Finland? What are your thoughts about his decision to create Minix based on Unix, regardless of the efforts by ATT to restrict its use?

Since you've visited him, you know that Andrew Tanenbaum was and is at VU in Amsterdam. About the similarity and "restriction": the specifications for Unix were always quite open. Tanenbaum, in Minix, wanted to use the specification but with his own outlook for didactic, research and release purposes. I don't think AT&T ever bothered him. He (like Lions) spent time with us more than once during the writing of his later books.

4) There is something missing in the legal history of Unix to Minix to Linux to me. Help me understand a few things:

1) ATT was obviously angry that Unix code was going everywhere at once. Did they look at Linux? Did they look at Minix? Why didn't they feel either products were copyright violations?

I don't think that they were seriously looked at, and I don't think that AT&T was angry. Both Minix earlier and Linux a bit later took pains to avoid copyright issues and use documented and, by that time, probably internationally standardized interfaces. In the case of Minix especially, the accent was also clearly in the research/academic area as opposed to commercial.

By the time they came along there was plenty of material like the SVID, Sys V Interface Definition, Maurice Bach's book and so on. AT&T and USL generally encouraged such public efforts. I can't recall the dates, but, for example, the company contributed to the IEEE POSIX standards effort.

2) In your opinion, why did ATT lose its case against Berkeley?

This is the one case where USL did get angry enough to go to court. You will have to read the court's decision about it to see why USL lost (or at least didn't get what they wanted). In the event, the crucial decision by the court was to deny an immediate injunction against BSDi and UCB. Probably you've seen it, but the decision is at http://cm.bell-labs.com/cm/cs/who/dmr/bsdi/930303.ruling.txt and it's probably best to let it speak for itself.

3) In my opinion, you wrote Unix (UNICS) from scratch. In my opinion, Linus Torvalds did NOT write Linux from scratch. What is you opinion? How much did he write? I talked to a Finnish programmer that insists that Linus had the Unix code (the Lyon's Book) and Minix code. Without those two, who could not have even come close to writing Linux. I hate to ask such a bare-knuckle question, but I really feel that this part of history is very gray.

Say what you feel is ok to say.

We did indeed write Unix from scratch (though with intellectual influence from aspects of Multics and other systems). I don't know what Torvalds started with or what he had read. It seems plausible from his writings that he was distancing himself more from Minix than Unix as such. See, for example, http://people.fluidsignal.com/~luferbu/misc/Linus_vs_Tanenbaum.html

4) Could I get a copy of the original version of Unix that was released? My team is comparing Linux 1.0, Minix 1.0 and your first versions. If you can help with this, let me know.

We have only parts of the earliest releases. www.tuhs.org has a good collection of what is available, including a version of the kernel from ca. 1973, and also partially complete renditions of 5th through 7th editions. The earliest available material (from 1973) is at http://minnie.tuhs.org/UnixTree/Nsys/sys/nsys/

Dennis

LINUX: Samizdat's critics... Brown replies

Samizdat's critics... Brown replies: "Kenneth Brown June 4, 2004 Copyright © Alexis de Tocqueville Institution Return to Samizdat updates page. Return to AdTI home page. Email Kenneth Brown. Samizdat is a series of ex"

Kenneth Brown
June 4, 2004
Copyright © Alexis de Tocqueville Institution



Samizdat is a series of excerpts from an upcoming book on open source and operating systems that will be published later this year. AdTI did not publish Samizdat with the expectation that rabidly pro-Linux developers would embrace it. Its purpose is to provide U.S. leadership with a researched presentation on attribution and intellectual property problems with the hybrid source code model, particularly Linux. It is our hope that leadership will find this document helpful in public policy decisions regarding its future investment in Linux and other hybrid source products.

The United States is the home of the United States Patent and Trademark Office, an internationally respected agency which contributes to the worldwide effort to protect and govern intellectual property. In addition, the U.S. government is one of the largest patent holders in the world, owning the rights to 20,000-30,000 patents. Annually, the U.S. government also contributes billions to high-tech research and development, because research and development supports our intellectual property economy. Therefore, it is in the U.S. government's best interest to fully understand the impact of Linux on the intellectual property foundation of our country, as well as on the entire information technology (IT) sector.

True Open Source vs. Hybrid Source

The Samizdat report recommends that the U.S. government invest $5 billion in research and development efforts that produce true open source products, such as BSD and MIT license-based open source. Government investment in open source development will accelerate innovation. However, increased investment should be in true open source -- open source without any stipulations other than attribution and copyright notification -- not hybrid source.

"Hybrid source code" is a phrase coined by former Tocqueville Chairman Gregory Fossedal. The term refers to any product with a license that attempts to mix free and proprietary source code at the same time.

While hybrid software appears to be the same as open source, it isn't. Hybrid source code can never be true intellectual property. The actual purpose of hybrid source is to nullify its value as private property, which makes the hybrid source model significantly different from true open source. No one can ever truly accrue any value from owning hybrid source software, because everybody (and anybody) has the rights to every line of improvement in it. Worse, many argue that if hybrid source is used the wrong way, it can make other source code hybrid source as well.

The hybrid source model negatively impacts the intellectual property model for all software, and inevitably the entire IT economy. As long as the value of the IT economy is dependent on the preservation of intellectual property, it is counterproductive for the U.S. government to invest in Linux.

Linux is Inherently Unstable

The disturbing reality is that the hybrid source model depends heavily upon sponging talent from U.S. corporations and/or U.S. proprietary software. Much of this questionable borrowing is a) not in the best interest of U.S. corporations, b) not in the best interest of IT workers in America, and c) at a serious expense to the investment community, an entity betting on the success of intellectual property in the marketplace.

Linux is a leprosy, and it is having a deleterious effect on the U.S. IT industry because it is steadily depreciating the value of the software industry sector. Software is also embedded in hardware, chips, printers, and even consumer electronics. Should embedded software become 'free' too, it would be natural to conclude that the value of hardware will spiral downward as well.

In Samizdat, AdTI argues that the inherent instability of hybrid source development such as Linux is due in great part to its inability to provide a sound policy for originating source code without attribution or IP problems. Within two days of AdTI’s release of Samizdat, OSDL(1) member Linus Torvalds affirmed AdTI’s concerns, announcing that Linux kernel contributions depend largely on ‘trust’. In an attempt to fix the system, Linus Torvalds announced an ambiguous policy(2) to promote better ‘trust’.

Samizdat concludes that the root of the attribution, IP misappropriation, and acknowledgement problems in Linux is -- in fact -- the trust model. Basically, Torvalds and other Linux advocates are admitting to using a 'three monkeys' policy for software development: see no evil, speak no evil, hear no evil. Specifically, Torvalds and the Linux kernel management team accept blind source code contributions. Then, they ask for a certification. But the certification does not hold the contributor, the Linux community, or Torvalds legally accountable. Nor does it guarantee that the source is produced in a 'clean room'. Meanwhile, users are left to just 'trust' Linux too, and to face the ramifications of any significant legal problems themselves. This is a 'wishful thinking' policy, and is not a sound approach to software development. The reality is that no one, including Linus Torvalds, can ever guarantee that code in the Linux kernel is free of counter-ownership or attribution claims. AdTI suggests that the U.S. government should buy and invest in software from a confirmable entity, not from an assortment of unconfirmable sources. AdTI is certain that, inevitably, some unfortunate user of Linux will be facing an incalculable legal problem.
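
For reference, the certification Brown refers to (see footnote 2) is the Developer's Certificate of Origin: a contributor attests to the origin of a change by appending a sign-off line to the patch submission, of the form (the name and address here are invented for illustration):

    Signed-off-by: Jane Hacker <jane@example.org>

The line records who vouches for the contribution; as the paragraph above argues, it is an attestation by the contributor, not a guarantee independently verified by the kernel maintainers.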

Meanwhile, we should also very plainly ask, “who[m] are we trusting?”

In a controversial section of Samizdat, I ask readers to pose some very hard questions about the origin of the Linux kernel. This is for a number of reasons, but especially because the same people that are selling the trust model cannot answer basic questions about what attribution, acknowledgement, and IP credit they may have owed ATT Corporation and/or Prentice Hall Corporation in 1991 when the Linux kernel was introduced. The same community that sells 'trust' is the same community that celebrated the theft of ATT Unix source code in the late '70s, joked about the theft of Windows source code in February, and, commenting on the Cisco source code theft in May, wrote in Newsforge, "maybe the theft will be a good enough reason for Cisco customers to check out open source alternatives….(3)"

Isn't it fair to question the character and ethics of individuals who espouse contempt for intellectual property? Isn't it fair to question their character, when the core of their business strategy is trust?

Interviews for Samizdat

"…He says Linus couldn't possibly have written that much code," said Tanenbaum. "But there's tremendous variation from programmer to programmer-- some research I saw says maybe as high as 30 to 1 for great programmers and poor ones -- and Linus could easily be in the top 10 percent or top 1 percent of all programmers…."

Lisa Stapleton, Linux Insider, May 21, 2004

Reasonable Doubt

AdTI and Tanenbaum do agree on one point: the Linux kernel is an incredible, but conspicuous, accomplishment. No one seemed interested in critiquing it. So AdTI decided to look into it, because we agreed it was no average feat. We collected evidence and looked at it a dozen different ways. Afterwards, we humbly concluded that the story in the public record about Torvalds and the Linux kernel is questionable. Thus, we published some of the facts we came up with, so readers could analyze the story for themselves.

To write Samizdat, I worked with (and quoted) many individuals directly or indirectly familiar with Linux development. AdTI will continue to interview people within the open source profession about open source. It would be skewed and biased to quote only people who are anti-Linux or anti-open source. I have done this for years, and will continue to do so, regardless of what a source thinks of my theories.

As many are aware, I interviewed Professor Tanenbaum, the author of Minix, a copyrighted property of Prentice Hall. On March 8, 2004, Professor Tanenbaum sent me the following e-mail:

“MINIX was the base that Linus used to create Linux. He also took many ideas from MINIX, including the file system, source tree, and much more.(4)”

I met with Professor Tanenbaum not to write a treatise on software engineering, but to discuss the issue of software product rights and protection that he brought up in his email. In the interview, it became immediately apparent that the professor was animated but tense about the topic of rights and attribution. He felt that well-known facts about Minix/Linux development should not have to be questioned. It was clear that he was very conflicted, and probably sorry that he had sent the email in the first place.

Ironically, Professor Tanenbaum's recent comments only recapitulate many of the substantive contradictions regarding the early Linux kernel AdTI decided to discuss in Samizdat. I met with Professor Tanenbaum with the hope of resolving some of these inconsistent and contradictory accounts in the public record.

Question #1

Is it likely that a student (Linus Torvalds) with no operating systems experience, a non-Unix licensee, without any use of Minix or Unix source code, could build a functioning kernel in six months -- whereas it took you (Tanenbaum) three years to build Minix?

In Tanenbaum's recent posts(5), he argues (as he told me) that there are "others" that have created Unix clones or operating systems within the same constraints. Tanenbaum's argumentation only increased our doubt about the Torvalds story, because the comparisons were too unbelievable. For us to accept Tanenbaum's argument, Linus Torvalds at 21, with one year of C programming, would have to have been the equal of Doug Comer, an accomplished computer scientist, smarter than the Coherent team, and of course a better programmer than the good professor too.

Tanenbaum told us about the Coherent project repeatedly, but it was easy to establish that it was a completely different situation. First, it wasn't a solo effort; it was a team. Second, the timeline was wrong. Tanenbaum told us it took two years, then corrected himself on his own website, writing that it took six years. Either way, it wasn't six months. On his website, it seems Tanenbaum is now comparing the inventors of Unix, Dennis Ritchie and Kenneth Thompson, to Torvalds. This comparison, if anything, should demonstrate why AdTI was just not very convinced by the professor. Both Ritchie and Thompson had exceptional familiarity with MULTICS -- and then wrote UNIX from scratch. Completely different from Linus, who says he started with nothing and had no experience. Another reason this is interesting is that the Ritchie-Thompson kernel was 11,000 lines of code written over a number of years, while the Torvalds kernel was 32,000 lines in under a year.

Another problem with Tanenbaum's logic is that he only presents examples of people who were Unix licensees, had Unix source code, or were exceptionally familiar with software development. He cannot provide one example reasonably comparable to the Torvalds case.

Question #2

Why do accounts continually assert that Torvalds "wrote Linux from scratch"?

Presumably, Professor Tanenbaum was not in Linus Torvalds's apartment at the time Linux was, to use a phrase recently (but only recently) disclaimed by Torvalds, "invented." Yet Tanenbaum vehemently insists that Torvalds wrote Linux from scratch, which means from a blank computer screen to most people. No books, no resources, no notes -- certainly not a line of source code to borrow from, or to be tempted to borrow from. But in a number of interviews AdTI completed with various individuals about operating system development, almost everyone reported that it is highly unlikely that even a pure genius could start from a blank computer screen and write the early Linux kernel. Suppose he could, would he?

In fact, everyone reported to me the opposite, that it only makes perfect sense to start with someone’s code, or framework, which is the common practice among programmers.

Furthermore, in almost every interview with experienced computer science professionals, almost all said that they personally had a copy of the Lions notes, an illegal distribution of Unix source code. Even Tanenbaum admits to teaching from the Lions notes. Linus says he started with nothing. In a recent ZDNet interview(6), he denies having the Lions notes. This is also unbelievable to AdTI. The story is too amazing: everybody that I met knew Linus intimately enough to confirm he wrote the kernel from scratch, and had an illegal copy of the Lions notes -- but Torvalds was never even near the Lions notes.

Meanwhile, an associate of mine asked Richard Stallman, who started with the Mach Kernel, why his GNU team could not build a kernel as fast as Torvalds. Mr. Stallman provided AdTI with a credible, believable set of reasons why building a kernel was not a simple task. We thank Mr. Stallman for his forthrightness and honesty. We included this interview to provide another perspective for readers to understand the magnitude of the Torvalds story. To accept the Torvalds story, Torvalds would have to have been light years ahead of a team that built the very compiler he needed to make the kernel work.

We also included this interview to illustrate the character of Mr. Torvalds. The GNU team contributed their GCC compiler, a complicated product with over 110,000 lines of code, to the Linux project. Without the compiler, it is very likely that the Linux project would not have succeeded. The GNU team asked only that the product be called GNU/Linux, a very simple request in return for helping to make him famous. But Torvalds silently, but deliberately, let the naming idea die.

Question #3

If Linux was based on Minix, doesn't it owe rights and attribution to Prentice Hall? Does it owe attribution or rights to anyone else?

How much 'inspiration' did Linus get from Minix? AdTI argues: clearly enough to credit the Prentice Hall product. Not merely in conversation, either, but within the copyright and/or the credits files of the kernel. Quite noticeably, however, there is not one acknowledgement of Minix anywhere in the Linux kernel. Almost daily, we receive new contradictions from people on this point. In a published interview between Eric Raymond and Linus Torvalds, Raymond trumpets how Torvalds basically derived Linux from Minix. But in a ZDNet interview last month, Torvalds insisted that he didn't start with Minix, but did get ideas from Unix(7).

What is anybody supposed to believe?

The larger issue is that Minix was a copyrighted product, for academic use only. The Minix license insisted, from 1987 to 2000, that any commercial use of Minix for any reason required the permission of Prentice Hall. The Linux kernel was released in fall 1991, well within the Prentice Hall proprietary license period. On the license issue, Tanenbaum would just nervously repeat that he had succeeded in getting Prentice Hall to change the license to BSD, so the topic was irrelevant. AdTI asks readers to ask why. Why did the license issue matter to Tanenbaum?

Tanenbaum insists that we are wrong to bring any of this up, but ironically, he comments on his site, "…but Linus' sloppiness about attribution is no reason to assert that Linus didn't write Linux(8)." AdTI is not suggesting that readers believe that Prentice Hall is going to sue. The point of the paper is to magnify the potential problems associated with this type of software development. AdTI insists that development such as this is an accident waiting to happen, something that will seriously impact both Linux users and developers. For example, in the case of Minix/Linux, AdTI argues that, hypothetically, a copyright infringement case could easily erupt if someone were determined to prove that Linux was an unauthorized derivative product of Minix.

The final reason why AdTI decided to focus on this issue is because we learned that in fact, Prentice Hall took all of this very seriously and had previously sued a programmer for unauthorized development of Minix.

Follow Up With Torvalds

AdTI contacted Torvalds' employer, OSDL, to interview him for clarification. Without any facts, Tanenbaum goes as far as to post that AdTI did not try to contact Linus, but this is contradicted by the attached post. The OSDL contact person told AdTI that if Linus didn't get back to us, he was not interested in being interviewed. AdTI has no problem publishing a report whether sources do, or do not, want to talk with us.

For years, Linus has been credited with being an inventor. AdTI argued the claim was false. Coincidentally, in a recent interview, Linus decided he was not the inventor of Linux, commenting in a ZDNet story, "I'd agree that 'inventor' is not necessarily the right word…(9)"

Other Issues

AdTI publishes its work for all audiences. It is written so that even if a group of elementary school children asked Tanenbaum the same questions AdTI did, they would see the very contradictions we reported.

Vrije University is a very cool place. AdTI encourages anyone who spends any time in Amsterdam to visit. At the good professor's recommendation, AdTI spent a number of hours talking with Vrije University computer science faculty. They were great fun and extremely helpful. For that, we are also very grateful.

Professor Tanenbaum did not convince AdTI that Linus Torvalds wrote the Linux kernel from scratch. We are sorry if this has caused any inconvenience to Professor Tanenbaum or anyone else.

Summary

There is far too much boasting about stealing, reverse engineering, and illegal copying espoused by some within the open source community. If the theft of the Lions notes had not become such a banner-waving incident, our research team probably would never have been inspired to write Samizdat. The purpose of Samizdat is to demonstrate how and why the hybrid model encourages these types of activities.

AdTI argues the best way to solve this problem is to create a more substantive pool of true, free open source code. For example, Vrije University would be an excellent candidate for research and development dollars to produce more open source. To this day, Linux is siphoning resources from proprietary software companies. Encouraging this activity would be a significant mistake for the U.S. government.

Unix is one of the greatest achievements in the history of computer science. Like other great inventions, the existence of a robust intellectual property model enabled Unix investors, developers, and users to reap significant rewards. We should support both invention and innovation. However, building a product that starts with the accomplishments of others and announcing it as completely your own work is neither invention nor innovation. Innovation can only work properly if innovators properly credit the work of others, especially if the innovator has decided to introduce the product into the marketplace for commercial gain. Nevertheless, AdTI concludes that U.S. government investment in true open source development would significantly bolster the IT industry sector, and, conversely, that investment in hybrid open source would erode it.

Kenneth Brown is president of the Alexis de Tocqueville Institution and director of its technology research programs. He is the author of numerous research papers and popular articles on technology issues, including the 2002 report, "Opening the open-source debate," one of the first papers to raise serious questions about the security of open- and hybrid-source computer software, a point recently raised by the president of Symantec Corporation. He is reportedly "not the sharpest knife in the drawer," but nevertheless is able to converse with many intelligent people, and is accepted at fine restaurants and hotels around the world.




1. Open Source Development Laboratory, www.osdl.org
2. Under the enhanced kernel submission process, contributions to the Linux kernel may only be made by individuals who acknowledge their right to make the contribution under an appropriate open-source license. The acknowledgement, called the Developer's Certificate of Origin (DCO), tracks contributions and contributors. http://www.technewsworld.com/story/33961.html
3. http://trends.newsforge.com/trends/04/05/17/1932214.shtml ("Commentary: If only Cisco code had been open source," May 17, 2004)
4. Tanenbaum, Andrew. Interview with AdTI. March 8, 2004.
5. http://www.cs.vu.nl/~ast/brown/followup/
6. http://zdnet.com.com/2100-1104_2-5216651.html
7. Ibid.
8. http://www.cs.vu.nl/~ast/brown/
9. http://zdnet.com.com/2100-1104_2-5216651.html

