CRYPTO-GRAM

August 15, 2004

by Bruce Schneier
Founder and CTO
Counterpane Internet Security, Inc.
schneier@counterpane.com



A free monthly newsletter providing summaries, analyses, insights, and
commentaries on security: computer and otherwise.

Back issues are available at
. To subscribe, visit
or send a blank message to
crypto-gram-subscribe@chaparraltree.com.

Crypto-Gram also has an RSS feed at
.

** *** ***** ******* *********** *************

In this issue:
BOB on Board
Alibis and the Kindness of Strangers
Crypto-Gram Reprints
News
Security Notes from All Over: GHB
Counterpane News
Houston Airport Rangers
Websites, Passwords, and Consumers
Comments from Readers

** *** ***** ******* *********** *************

BOB on Board

Last Tuesday's bomb scare contains valuable security lessons, both good
and bad, about how to achieve security in these dangerous
times. Ninety minutes after taking off from Sydney Airport, a flight
attendant on a United Airlines flight bound for Los Angeles found an
airsickness bag -- presumably unused -- in a lavatory with the letters
"BOB" written on it. The flight attendant decided that the letters
stood for "Bomb On Board" and immediately alerted the captain, who
decided the risk was serious enough to turn the plane around and land
back in Sydney.

Even a moment's reflection is enough to realize that this is an extreme
overreaction to a nonexistent threat. "Bob" is common flight attendant
jargon for "babe on board" or "best on board," as in: "Look at that Bob
in seat 7A." United Airlines apparently also uses it for some domestic
U.S. flights to mean "Buy on Board" -- meals aren't provided gratis,
but if you want one you must buy it. And even if it weren't, there's
absolutely no reason to think that "BOB" is not just someone's name,
written on the airsickness bag sometime in the past and left in the
lavatory by a passenger who didn't even realize it. Why in the world
would someone decide that out of all the possible meanings that "BOB"
scribbled on an airsickness bag could have, its presence on this
particular airsickness bag on this particular flight must mean "Bomb On
Board"?

And why would the captain concur?

Security works best when people are in charge. I am comforted that the
final decision to divert the flight was in the hands of the captain,
and not a United Airlines executive who might unduly worry about the
$100,000 the emergency landing ended up costing. The captain is in
charge of the plane, and is the best person to weigh the risk to the
lives of the passengers -- and his own -- against the inconvenience of
diverting the aircraft.

More and more our security systems are run by computers and unalterable
policies, turning the people at the front lines of security into mere
drones. Computers now choose who to search carefully at airport
security. Smart guards in lobbies have been replaced by less-skilled
employees who mindlessly check photo IDs. This story serves as a
counter-example, and demonstrates the correct way to design a security
system.

However: if we are to expect airplane captains and flight attendants to
make important security decisions, they need to be properly
trained. The flight attendant who discovered the airsickness bag
didn't react from reason, but from fear. And that fear was transferred
to the captain, who made a bad decision.

Fear won't make anyone more secure. It causes overreactions to false
alarms. It entices us to spend ever-increasing amounts of money, and
give away ever-increasing civil liberties, while receiving no security
in return. It blinds us to the real threats.

Speaking about the person who wrote those three fateful letters on the
airsickness bag, Transport Minister John Anderson called him
"irresponsible at the least and horrendously selfish and stupid at the
worst." Irresponsible for what? For writing his name? For
perpetuating common flight-attendant slang? It wasn't the writer who
did anything wrong; it was those who reacted to the writing.

We live in scary times, and it's easy to let fear overtake our powers
of reason. But precisely because these are scary times, it's important
that we not let them. Prime Minister John Howard praised the crew for
their quick reactions, diligence, and observation skills. I'm sorry,
but I see no evidence of any of that. All I see are people who have
been thrust into an important security role reacting from fear, because
they have not been properly trained in how to sensibly evaluate
security situations: the risks, the countermeasures, and the
trade-offs. Were cooler and more sensible heads in the cockpit, this
story would have had a different ending.

Unfortunately, fear begets more fear, and creates a climate where we
terrorize ourselves. Now every wacko in the world knows that all he
needs to do to ground an international flight is to write "BOB" on an
airsickness bag. Somehow, I don't think that's the outcome any of us
wanted.


,00.html> or

This essay originally appeared in the Sydney Morning Herald:


** *** ***** ******* *********** *************

Alibis and the Kindness of Strangers

In 1997, cryptographers Mike Reiter and Matthew Franklin invented a
type of cryptographic protocol that relies on what they called a
"semi-trusted third party." This meant that "it may misbehave on its
own but does not conspire with either of the main parties." More
concretely, it could be a random node on a communications network. It
can't be trusted, but it doesn't know either of the communicating
parties and generally behaves more-or-less appropriately.

Protocols that involve semi-trusted third parties are more common than
you might think. When you ask someone sitting next to you in a public
place to watch your bag for a minute, you're relying on a semi-trusted
third party. When you use condiments at a restaurant that remain on
the table all day, you're relying on a semi-trusted third
party. Technically you can be robbed or poisoned, but in reality it's
not likely to happen. The kindness of strangers works.

Now you can use semi-trusted third parties to help with alibis. On the
Internet, you can find so-called "alibi and excuse clubs." They're
simple. As a member of the club -- membership can mean nothing more
than signing up for the mailing list -- you can ask other members to
help you with alibis. Maybe you want someone to pretend to be your
doctor and call your boss. Maybe you want someone to pretend to be
your boss and call your spouse at home. Maybe you want someone to
pretend to be your spouse and call your boss. Whatever you want, you
ask on the network and someone volunteers.

Of course inventing alibis, and engaging accomplices, is nothing
new. But the anonymity of the Internet gives this a new twist. Your
accomplice appears out of nowhere, and disappears just as quickly. He
may never even know your real name. Because of the ad-hoc nature of
the accomplice, and the anonymity of the system, it's actually safer
than asking a friend to participate in your ruse.

Of course, it's not without risks. You could imagine a legion of
do-gooders infiltrating the alibi clubs and deliberately exposing the
alibis. But my guess is that these networks generally work as advertised.

I'll leave the ethical commentary, and what this says about the morals
of our society, to someone else. I'm just fascinated by the security
implications.

News story:

It's archived here:


Cellphone "rescue" calls aid in excuse making:
escue_calls_3> or

** *** ***** ******* *********** *************

Crypto-Gram Reprints

Crypto-Gram is currently in its seventh year of publication. Back
issues cover a variety of security-related topics, and can all be found
on . These are a selection
of articles that appeared in this calendar month in other years.

Flying on Someone Else's Airplane Ticket:


Hidden Text in Computer Documents:


Palladium and the TCPA:


Arming Airplane Pilots:


Code Red:


Protecting Copyright in the Digital World:


Vulnerabilities, Publicity, and Virus-Based Fixes:


Bluetooth:


A Hardware DES Cracker:


Biometrics: Truths and Fictions:


Back Orifice 2000:


Web-Based Encrypted E-Mail:


** *** ***** ******* *********** *************

News

Last month I published a link to a pair of articles on torture from
Salon. This article, from the Oct 2003 Atlantic, is much better. It's
more intelligent, more balanced, more factual...meatier. It was also
written before any details of Abu Ghraib reached the public.


If it seems that I spend so much time writing about stupid security,
it's because I so rarely see smart security. This is smart
security. VISA sat down and designed some strict, but sensible, design
standards for PIN-entry devices on ATMs. The analysis includes
specific measurements of the effort an attacker must expend to defeat
the devices. I'd like to think that they did all this because they
read my books...but I don't really know.


New criminal penalties in the U.S. for identity theft:
77.html> or

A group of hackers is selling confidential source code:
69787.html> or
For example, the Enterasys intrusion detection code costs $16K:

No they're not; the business has shut itself down:

Yes they are; they're back in business:


Yet another article that proves companies are very concerned about
security:

Isn't it interesting how that concern never seems to translate into budget?

The Attorney General of Mexico had an RFID microchip implanted in his
arm. It's designed both as an access-control token and as a way to
track him if he's ever kidnapped.

r=latin_america> or

What's odd about this story, at least to me, is how the chip is
designed as "non-removable" and how that feature will help in the event
of a kidnapping. Maybe it's just me, but I think that a criminal of
the caliber that would successfully kidnap the attorney general of
Mexico would not feel queasy about cutting off his arm. This is one
security measure that is best kept secret.

Here's an interesting hardware security vulnerability. Turns out that
it's possible to update the AMD K8 processor (Athlon64 or Opteron)
microcode. And, get this, there's no authentication check. So it's
possible that an attacker who has access to a machine can backdoor the CPU.
7&Thread=1&entryID=35446&roomID=11> or
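
As a rough illustration of what the missing authentication check could look like in principle, here is a sketch using the third-party "cryptography" package and a made-up vendor key; it is not AMD's mechanism (the point of the story is that no such check exists), just the generic idea of verifying a signature over an update blob before loading it.

# In principle, an authenticated update path checks a vendor signature over the
# blob before loading it.  This sketch uses the third-party "cryptography"
# package and a hypothetical key pair; it is illustrative only.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Hypothetical vendor key pair; in reality only the public half would ship
# with the platform, and the private half would stay with the vendor.
vendor_private = ed25519.Ed25519PrivateKey.generate()
vendor_public = vendor_private.public_key()

microcode_blob = b"\x00" * 2048                       # stand-in for an update image
signature = vendor_private.sign(microcode_blob)       # done by the vendor, offline

def apply_update(blob, sig):
    try:
        vendor_public.verify(sig, blob)               # raises if blob or sig is bad
    except InvalidSignature:
        return False                                  # refuse the update
    # ... only now would the microcode actually be loaded ...
    return True

print(apply_update(microcode_blob, signature))         # True
print(apply_update(microcode_blob + b"!", signature))  # False: tampered blob

Without anything of this sort, possession of the machine is all an attacker needs.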

Another article on financial metrics to justify computer security
expenditures:


E-mail security mistakes:
7,00.html> or

There's a fundamental trade-off between security and
compatibility. You can fix an application to make it secure, but often
in doing so you break backwards compatibility. Microsoft is choosing
security over backwards compatibility in XP SP2, and some vendors are
angry. I think Microsoft is making the right decision.

63> or

The U.S. Government is officially withdrawing DES as an encryption
standard:


Turning a cell phone into a listening device: "These days, a boardroom
Mata Hari can purchase a specially designed cell phone that will answer
incoming calls while appearing to be switched off. In a business
meeting, she could casually leave her phone on the table while excusing
herself to go to the bathroom. Once she's gone, she can call the phone
she left behind and eavesdrop on what the other side is saying in her
absence." I can't find any corroboration of this, and would appreciate
any and all leads.


Although if this kind of thing worries you, or if you want to eat in
peace and quiet at a restaurant, you can buy a cell phone jammer. Note
that these are illegal in the U.S., even though you can buy devices
that jam U.S. cell phone frequencies.


Dilbert, security guards, and photo-IDs:


Here's a guy who has a webcam pointing at his SecurID token, so he
doesn't have to remember to carry it around. Here's the strange thing:
unless you know who the webpage belongs to, it's still good security.


According to their website, "the Central Intelligence Agency is
committed to protecting your privacy and will collect no personal
information about you unless you choose to provide that information to
us." Um, isn't their job to collect personal information about people
without their permission?


An e-mail money laundering scam. You register your account with the
scammers, who dump money into it from other accounts they've broken
into. Your job is to transfer the money to the scammers via some
into. You're job is to transfer the money to the scammers via some
untraceable means, and you get to keep a percentage as your reward. At
least, you get to keep it until the original account holders notice
that money has been transferred into your account without their
permission. Then, the FBI comes knocking on your door.
atid=&topic=5> or

Seems like banks are doing their best to get out of having to reimburse
customers who are the victims of phishing attacks.

This shouldn't be a difficult one. Yes, it's the customers who fell
for the scams. But it's the banks who set up the easily bypassable
security systems in the first place, and it's the banks who can fix the
problem. Just as the U.S. government limited personal liability from a
stolen credit card to $50, they should do something similar with
Internet accounts.

The effects of streetlamps on security. Of course, this website is
pro-astronomy and anti-light-pollution. But it's interesting to see
the case that increased lighting does not affect crime rates.


Bluetooth hacking. They discuss a device called a "BlueSniper rifle,"
basically a directional antenna. It's a proof-of-concept device, but
someone with it sat outside a hotel, aimed it at an 11th story window,
and collected 300 phone books from Bluetooth devices. It also broke
the distance record, attacking a Nokia 6310i phone 1.1 miles away and
grabbing the phone book and text messages.


Someone has built an 802.11 wireless packet-injection tool. He can cut
in on someone's wireless Internet connection and modify packets: feeding
him fake e-mail, webpages, whatever. Think it over for a minute; someone
can do an amazing amount of damage with this kind of thing.


There's an interesting defense against identity theft. It's called a
"security freeze," and it lets individuals block access to their credit
reports until they personally unlock the files by contacting the credit
bureaus and providing a PIN code. Of course the credit bureaus are
blocking this: it's more work for them, and they don't bear the costs
of identity theft. This is a great example of an organization blocking
a security solution because they don't have a financial interest in the
problem.


There's a new phishing scam website, and it mimics a Kerry for
President contribution site:
,10801,95030,00.html> or


Bush and Kerry were in the same small Iowa town. The goings-on taxed
the police to their limit, and bank robbers seized the opportunity.
Three banks were robbed during the day.
http://news.bbc.co.uk/1/hi/world/americas/3533478.stm
This also occurred in the Simpsons episode "Marge vs. the Monorail," in
which burglars break into people's homes while everyone's off at a town
meeting.

It's a joke site, and worth a visit:


The lead reads: "Five shipping containers of lemons rotted on a ship
held off New York all week after officials received a false tip amid
heightened security that the cargo might be biologically
contaminated...." This leads me to some interesting thinking. How can
unscrupulous people use anonymous terrorism tips to wreak havoc with
business competitors, or just companies they want to get back at?
911191&src=rss/oddlyEnoughNews&section=news> or

This USB/Firewire portable encrypted hard drive looks cool. Everything
is automatically encrypted when it's stored on the drive, and decrypted
when it's read off the drive. It has a physical key that holds the
encryption key, and without it the data is inaccessible. Given all
this coolness, and the fact that the encryption is triple-DES, why in
the world did they limit the key to 64 bits? A longer key is no
slower. Export is no longer an issue. Sad, really.
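
As a rough sketch of the "no slower" point, the snippet below (assuming the third-party PyCryptodome package) times triple-DES with a 16-byte and a 24-byte key; both run the same three DES passes, so throughput is essentially identical, which is why skimping on key length buys nothing.

# Rough timing sketch: triple-DES speed does not depend on how many key bits
# you use.  Requires the PyCryptodome package (pip install pycryptodome).
import os, time
from Crypto.Cipher import DES3

data = os.urandom(1_000_000)  # 1 MB of test data (a multiple of the 8-byte block)

def throughput(key_bytes):
    key = os.urandom(key_bytes)            # 16 bytes = 2-key 3DES, 24 = 3-key
    cipher = DES3.new(key, DES3.MODE_ECB)  # ECB only because this is a speed test
    start = time.perf_counter()
    cipher.encrypt(data)
    return len(data) / (time.perf_counter() - start)

for key_bytes in (16, 24):
    print(f"{key_bytes}-byte key: {throughput(key_bytes) / 1e6:.1f} MB/s")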


The Bush Administration is pressuring hospitals to ask patients about
their immigration status before they treat them. This policy is likely
to result in fewer people being treated for illnesses, and is a bad
idea from a public health perspective. This line of reasoning reminds
me a lot of Microsoft's decision to restrict SP2 to registered Windows
users. In an essay, I argued that we would all be more secure if everyone
upgraded to SP2...not just registered users. "...we should care for
reasons of public health. You don't want someone with untreated
Hepatitis A in the restaurant kitchen, making your salad. You don't
want a child with ringworm or an untreated infection playing with your
child on the playground. You don't want there to be a population in
your city which is undervaccinated, living under poor sanitary
conditions, and without prompt access to health care. That's how
epidemics start."
chive.html#109231660004818754> or

** *** ***** ******* *********** *************

Security Notes from All Over: GHB

GHB is gamma hydroxybutyric acid, a date-rape drug. An attacker
(presumably male) slips the drug into a woman's drink, and then rapes
her after the effects of the drug set in. Not a common attack -- there
are fewer than 40 reported cases in the U.S. each year -- but horrible
when it happens. (To be fair, this number is widely believed to be an
underestimate, but it seems clear that it's a small fraction of all rapes.)

One suggested countermeasure is that women carry their own bottle
opener into a bar, and make sure that no one else handles their opened
drink. The general principle illustrated here is that of a trusted
third party. If a woman opens the beer bottle herself, then she is not
forced to trust anyone in the bar. (She has to trust the beer maker,
but that doesn't seem like a big risk.)

I've written again and again that security is a trade-off: what you get
versus what you give up. There are security countermeasures that are
bad trade-offs even though they're effective: an example, for most of us
reading this, is wearing a bulletproof vest. There are security
countermeasures that are good trade-offs: national intelligence. And
there are many, many security countermeasures that just aren't worth it.

As with the threat of drugs or razors in Halloween candy (which, unlike
GHB, is almost completely phony), risk assessment is often based on
scariness rather than prevalence. That is, people are having an
emotional reaction to the threat rather than a realistic one. And they
end up with a countermeasure that makes no sense from a security
analysis perspective, but a lot of sense from an emotional analysis
perspective.

Sure, carrying a bottle opener is easy. But the constant vigilance
that this countermeasure requires is not. And someone so focused on
this countermeasure is more likely to ignore other threats.

There are 5,000 deaths every year from food-borne illnesses, but nobody
refuses to take unwrapped food from restaurants, or insists on
inspecting the kitchen and watching their food being prepared.

The rare and spectacular always seems more dangerous than the common
and pedestrian, and we end up with a lot of security theater because of it.



There are also test strips for GHB, which apparently aren't very accurate:


** *** ***** ******* *********** *************

Counterpane News

There's an interview with Schneier on the Netcraft website:
ier_counterpane_internet_security.html> or

Counterpane had an excellent 2nd quarter:


And the analyst group Gartner has again recognized the company's
leadership in the Managed Security Monitoring space:


** *** ***** ******* *********** *************

Houston Airport Rangers

Want to help fight terrorism? Want to be able to stop and detain
suspicious characters? Or do you just want to ride your horse on ten
miles of trails normally closed to the public? Then you might want to
join the George Bush Intercontinental (IAH) Airport Rangers
program. That's right. Just fill out a form and undergo a background
check, and you too can become a front-line fighter as Houston's airport
tries to keep our nation safe and secure. No experience
necessary. You don't even have to be a U.S. citizen.

No, it's not a joke. The Airport Rangers program is intended to
promote both security and community participation, according to the
official description. It's a volunteer mounted patrol that rides
horses along the pristine wooded trails that form the perimeter of the
11,000-acre airport.

Security is far more effective when it's based on well-trained smart
people, instead of on rote-trained people checking photo IDs and X-ray
machine screens, or implementing database-driven profiling. The idea
of trained guards patrolling a secure perimeter is a good one. But as
a security professional, I see two major problems with the program as
described.

The first is the lack of training. The program encourages "licensed
law enforcement officers" to participate, but that's not a requirement
-- anyone can be a Ranger. As best I can glean from the Web page, the
training consists of a "short video" on suspicious activities. Is
there any mention of civil rights and constitutional protections? Is
there any attempt to prevent racial profiling? Profiling has been a
problem even for major law enforcement agencies; how will a group of
untrained civilians perform? And what are the liabilities to the
airport when there are problems?

The second is the new security vulnerability that this program
creates. The perimeter around the airport used to be a no-man's-land;
anyone on the property was immediately suspicious. Now there is a
group of people allowed around the airport perimeter. How do you tell
the difference between someone who is allowed and someone who isn't? A
photo ID, one you might glance at from ten feet away, is easily
forgeable. And since all Rangers are on horseback, if you have a horse
and you're Western-looking, you probably are going to be automatically
trusted. Is the airport safer, or more at risk, because of this
program? The answer isn't obvious.

Beyond these two points, the application form makes for interesting
reading. In order to participate in the program, you have to waive all
sorts of rights. You waive the right to challenge the arbitrary denial
of one of these permits. That may be compensation for another glaring
risk of this scheme: are the background checks good enough to exclude
potential terrorists? Is the intent that the agency will do its own
profiling, and exclude, for example, Muslims? A more charitable
explanation is that they want to be able to rely on intelligence
reports without having to disclose them.

The most amusing part is the required certification. Applicants must
certify that they are not members of known terrorist
organizations. This makes sense, although expecting terrorists to tell
the truth about their affiliations is a tad naive. But why exclude
people who have "claims or litigation pending against the City of
Houston or the Houston Airport System"? Does this exclude people
fighting parking tickets?

Finally, applicants must certify that they're not a member of any group
that "advocates violence against ... any other nation." A year and a
half ago, that would have excluded all members of both the Democratic
and Republican parties, as well as any other political party that favored
invading Iraq.



This essay originally appeared in The Register:


** *** ***** ******* *********** *************

Websites, Passwords, and Consumers

Criminals follow the money. Today, more and more money is on the
Internet. Millions of people manage their bank accounts, PayPal
accounts, stock portfolios, or other payment accounts online. It's a
tempting target: if a criminal can gain access to one of these
accounts, he can steal money.

And almost all these accounts are protected only by passwords.

If you're reading this essay, you probably already know that passwords
are insecure. In my book "Secrets and Lies" (way back in 2000), I
wrote: "Over the past several decades, Moore's Law has made it
possible to brute-force larger and larger entropy keys. At the same
time, there is a maximum to the entropy that the average computer user
(or even the above-average computer user) is willing to
remember.... These two numbers have crossed; password crackers can now
break anything that you can reasonably expect a user to memorize."
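
To put rough numbers behind that claim, here is an illustrative back-of-the-envelope calculation; the guesses-per-second figure is an assumed round number for an offline cracker, not a measurement of any particular tool.

# Illustrative only: how long an offline cracker needs to exhaust the kinds
# of passwords people actually memorize, versus a real cryptographic key.
import math

GUESSES_PER_SECOND = 1e9  # assumed offline guessing rate

def exhaust(alphabet_size, length):
    """Return (bits of entropy, seconds to try every possibility)."""
    keyspace = alphabet_size ** length
    return math.log2(keyspace), keyspace / GUESSES_PER_SECOND

for label, alphabet, length in [
    ("8 lowercase letters", 26, 8),
    ("8 mixed-case letters and digits", 62, 8),
    ("random 128-bit key", 2, 128),
]:
    bits, seconds = exhaust(alphabet, length)
    print(f"{label}: ~{bits:.0f} bits, exhausted in about {seconds:,.0f} seconds")

At that assumed rate, an eight-character lowercase password (about 38 bits) falls in minutes while a fully random key does not; that is the crossing described above.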

On the Internet, password security is actually much better than that,
because dictionary attacks work best offline. It's one thing to test
every possible key on your own computer when you have the actual
ciphertext, but it's a much slower process when you have to do it
remotely across the Internet. And if the website is halfway clever,
it'll shut down an account if there are too many -- five? ten? --
incorrect password attempts in a row. If you shut accounts down soon
enough, you can even make four-digit PINs work on websites.
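
A minimal sketch of that kind of server-side lockout, with the names and the five-attempt threshold chosen purely for illustration:

# Minimal sketch of lockout against online password guessing.  The threshold
# of 5 and the in-memory dictionaries are illustrative, not any site's policy.
MAX_FAILURES = 5
failures = {}   # username -> consecutive failed attempts
locked = set()  # accounts that need out-of-band unlocking

def attempt_login(username, password, verify):
    """verify(username, password) -> bool is the real credential check."""
    if username in locked:
        return False                        # refuse even a correct password
    if verify(username, password):
        failures[username] = 0              # success resets the counter
        return True
    failures[username] = failures.get(username, 0) + 1
    if failures[username] >= MAX_FAILURES:
        locked.add(username)                # e.g. require a phone call to reopen
    return False

With a four-digit PIN there are only 10,000 possibilities, but a remote guesser who gets five tries before lockout succeeds with probability 5/10,000 -- 0.05% -- per account, which is why quick lockout can make even short PINs workable online.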

This is why the criminals have taken to stealing passwords instead.

Phishing is now a very popular attack, and it's amazingly
effective. Think about how the attack works. You get an e-mail from
your bank. It has a plausible message body, and contains a URL that
looks like it's from your bank. You click on it and up pops your bank
website. When asked for your username and password, you type it
in. Okay, maybe you or I are aware enough not to type it in. But the
average home banking customer doesn't stand a chance against this kind
of social engineering attack.

And in June 2004, a Trojan horse appeared that captured passwords. It
looked like an image file, but it was actually an executable that
installed an add-on to Internet Explorer. That add-on monitored and
recorded outbound connections to the websites of several dozen major
financial institutions and then sent usernames and passwords to a
computer in Russia. Using SSL didn't help; the Trojan monitored
keystrokes before they were encrypted.

The computer security industry has several solutions that are better
than passwords: secure tokens that provide one-time passwords,
biometric readers, etc. But issuing hardware to millions of electronic
banking customers is prohibitively expensive, both in initial cost and
in customer support. And customers hate these systems. If you're a
bank, the last thing you want to do is to annoy your customers.
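
For a sense of how such a token works, here is a generic HMAC-over-a-counter construction sketched for explanation; it is in the spirit of these products but is not the algorithm inside any particular one.

# Sketch of a counter-based one-time password: token and server share a secret
# and a counter, and each button press derives a short code from them.  This
# generic HMAC construction is illustrative, not any vendor's actual algorithm.
import hmac, hashlib, struct

def one_time_code(secret: bytes, counter: int, digits: int = 6) -> str:
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                  # dynamic truncation
    value = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

secret = b"shared-secret-provisioned-at-enrollment"   # hypothetical shared key
for counter in range(3):
    print(counter, one_time_code(secret, counter))

A phisher who captures one of these codes gains little: it is only good once, and the next code is unpredictable without the shared secret.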

But having money stolen out of your account is even more annoying, and
banks are increasingly fielding calls from customer victims. Even
though the security problem has nothing to do with the bank, even
though the customer is the one who made the security mistake, banks are
having to make good on the customers' losses. It's one of the most
important lessons of Internet security: sometimes your biggest security
problems are ones that you have no control over.

The problem is serious. In a May survey report, Gartner estimated that
about 3 million Americans have fallen victim to phishing
attacks. "Direct losses from identity theft fraud against phishing
attack victims -- including new-account, checking account and credit
card account fraud -- cost U.S. banks and credit card issuers about
$1.2 billion last year" (in 2003). Keyboard sniffers and Trojans will
help make this number even greater in 2004.

Even if financial institutions reimburse customers, the inevitable
result is that people will begin to distrust the Internet. The average
Internet user doesn't understand security; he thinks that a gold lock
icon in the lower-right-hand corner of his browser means that he's
secure. If it doesn't -- and we all know that it doesn't -- he'll stop
using Internet financial websites and applications.

The solutions are not easy. The never-ending stream of Windows
vulnerabilities limits the effectiveness of any customer-based software
solution -- digital certificates, plug-ins, and so on -- and the ease
with which malicious software can run on Windows limits the
effectiveness of other solutions. Point solutions might force
attackers to change tactics, but won't solve the underlying
insecurities. Computer security is an arms race, and money creates
very motivated attackers. Unsolved, this type of security problem can
change the way people interact with the Internet. It'll prove that the
naysayers were right all along, that the Internet isn't safe for
electronic commerce.

Phishing:

0149> or

The Trojan:
/2100-7349_3-5251981.html> or


A shorter version of this essay originally appeared in IEEE Security
and Privacy:


** *** ***** ******* *********** *************

Comments from Readers

From: Daniel Staal
Subject: Due Process and Security

"Unchecked police and military power is a security threat -- just as
important a threat as unchecked terrorism. There is no reason to
sacrifice the former to obtain the latter, and there are very good
reasons not to."

I just wanted to expand on this statement, as I feel the checks on
police and military power are *more* important than checks on terrorism.

Terrorism exists because people don't think they are being heard by the
establishments in power: it is a way to *make* their voice heard. (A
bad one, but easy to use.) The US has historically been nearly immune
to terrorist attacks precisely because of its checks on police,
military, and political power. Because of those checks, terrorism
*isn't needed* in the US. A voice can be heard without it. (Usually.)

The loss of those checks, and the erosion of rights, will do nothing
but encourage terrorism, because it will become clear that there are
few (and only harder) ways to make an unpopular voice heard. In those
circumstances, a terror attack is an emotionally satisfying way to make
your voice heard.

If the government cannot be trusted to listen to people, it will be
made to. The U.S. government just happens to have influence over more
than its own citizens, so it has to respect the voice of
more. (Respect does not mean acquiesce to. Just listen, understand,
and take into consideration.) Doing so will make the country
safer. Doing otherwise will make the country less safe.

America has proved that freedom *is* safety, through its own
record. (Check terrorism/insurrection/revolt attacks for nearly any
comparable nation, status-wise, in history. America has fewer.) It is
sad to see so much of the country start believing the idea that they
are in opposition to each other.

Unchecked police and military power undermine personal security, and
increase the risk of terrorism attacks. Unchecked terrorism only
increases the risk of terrorism attacks. Statistically, if both are
possible, I am more at risk from the first, and am likely to lose more.

From: Jeff Evarts
Subject: Security and Portable Storage Devices

When I look at mobile phones, iPods, and the plethora of other
semi-intelligent, connective, storage-capable objects, I see them not
as a source of intentional data theft, but as vehicles for black-hats
to penetrate organizations. Compromise the guy's Bluetooth phone
(remotely), unwitting victim goes to work, compromised phone grabs lots
of Bluetooth-accessible data, then disgorges it later (remotely), all
without the employee being aware of the problem. The prohibited device
is viewed as a vector for trouble, not a direct tool for
malfeasance. In essence, the prohibition is kind of like the body-heat
scanners in SARS-vulnerable Asia keeping employees with fevers from
going to work. It's the second-order problem, not the employee
willingly taking data, that the security-minded folks I know are
worried about.

From: Eric Vanhove
Subject: X-Ray Machines and Building Security

Your comments on the "inadequate" security at "FinCorp" were
interesting but missed one of your own key points. You do need to
understand the context behind the decision. First, one basic tenet of
security is to make the attacker (terrorist, criminal, etc) believe
that you are a harder target than the next guy. If you can do this --
convince the attacker to go somewhere else -- by merely installing a
metal detector, hiring some minimum-wage folks to look at a TV screen,
and walking a bomb-detecting dog around the building a few times a
week, then your first level of security has been successful. That is
why the insurance company is willing to give FinCorp the discount. It
is not much different than what auto insurers do with car security; you
get a discount with even the most basic car alarm that (a) most people
ignore when it alarms and (b) wouldn't give the professional car thief
a moment of concern. Second, there is always some risk assumed in any
decision. A more effective security system might include five or six
entry barriers where various levels of searches, possibly including a
strip search, were conducted. But this (for many reasons) might be
cost prohibitive, and one of those reasons might include no one wanting
to do business with FinCorp. No decision is taken in a vacuum, and
FinCorp made a business decision that assumed a level of risk.

You found flaws in their system. If you were a bad guy, you'd put
FinCorp on your hit list a little higher than some other company
without the metal detector and sniffer dogs. But when you made your
decision to target someone you might use this as a discriminating
factor to go somewhere else.

From: Owen Yamauchi
Subject: ICS Atlanta

There are several more features of ICS's website that make one want to
run screaming from them. Here are some of the scarier ones I found:

- "Due To The Nature of the program, reverse engineering, we do not
have a demo/Trial version available for download. By Doing This, We
Help Keep Your Code Even More Secure." That makes me feel very safe
using Tree. After all, since these people are such expert security
professionals, it's virtually impossible that the code for Tree would
get leaked, isn't it?

- "While We Aren't Attorneys, From What We Understand About The
Law,..." These guys just radiate professionalism and confidence, don't
they?

- "Tree Encodes The Same Data Differently Each Time It Is
Encoded." They wouldn't be just encrypting under a random key, and
embedding the key in the ciphertext, now would they? Because that
would just be stupid!

- "Tree is a file encryption/decryption program designed to foil all
current methods of 'snooping' of private data by the very means of how
Tree encodes data." Yup. They do it with a secret algorithm, and most
current methods of cryptanalyzing ciphertext involve knowledge of the
algorithm. You can't even use brute force, since Tree doesn't have a
key to brute-force! Wow!

- "Do Any Governmental Agencies Have This Program? As Of 5/2004, The
Answer Is No." You'd think government agencies would recognize the
value of the world's safest encryption, wouldn't you? Silly them.

- "Our method is based on techniques and methods that were used by
people such as the 'code talkers' of WW2, or the language construction
used by the Egyptians." So it's probably a codebook. But they
probably keep that in a Tree-encrypted file on their network, so I feel
confident that no cipherpunk will be able to get hold of it.

All kidding aside, if they're trying to scam people, you'd think they'd
put more effort into their website. Why is practically every word in
the second part capitalized? There are grammatical and spelling
errors. The screenshots are of awful quality. From the screenshots,
the program also has the same odd capitalization and language errors
that the website has.

If that isn't snake oil, then there is no such thing as snake oil.

From: "Ken Lavender"
Subject: ICS Atlanta

I am APPAULED at your "comments" that you had made on your website:



You have statements are nothing but slander & defamation. They shall
be dealt with accordingly.

Lie #1: "How do they demonstrate Tree's security? 'Over 100
professionals in mathematics & in computer science at Massachusetts
Institute of Technology & at Georgia Tech, had sample encoded messages
submitted to them. Not a single person could break this code!'" That
is not the ONLY way we prove it. We have examples & offer to allow
people to submit their OWN messages to have encoded to SEE how good the
code is. So there are THREE methods, NOT just ONE as you IMPLY.

Lie #2: "These guys sent unsolicited e-mails..." HOW do you KNOW that
this was the case? Have any PROOF of such? NO!

Lie #3: "And if all that isn't enough to make you run screaming from
these guys, their website proudly proclaims: 'Tree Encoded Files Can Be
"Zipped."'" Because they can be "zipped" does NOT mean that it is "bad
encoding." The "code talkers" of ww2 used LANGUAGE to "code" the
messages, and THOSE COULD BE "ZIPPED"!!! And that code was NEVER BROKEN!!!

Lie #4: "That's right; their encryption is so lousy that the
ciphertext doesn't even look random." AGAIN, HOW would you
KNOW??? Did you break it? NO! And what is "random"???

random : without definite aim, direction, rule, or method

"So lousy"? HOW WOULD YOU KNOW??? You would have to KNOW how we
encode BEFORE you can make such a statement, & YOU DO NOT KNOW
HOW!!! If it is SO LOUSY, how come NOBODY HAS BROKEN IT YET??? And we
have people ALL THE TIME trying to, with ZERO SUCCESS.

I do not like you slandering something that you do not
understand. ATALL!!!

The ONLY question you asked was "how long is the key" AND THAT WAS
IT! HOW long was the key that the 'code talkers' used? ZERO!!! JUST AS
OUR IS. The encoding routine was created, tested, & verified on PAPER
& PENCIL WITHOUT COMPUTERS! A child could encode data using our
routine. The computer is merely used to "speed-up" the process, NOT TO
CREATE IT. Our routine is based on LANGUAGE, NOT MATH. So all of you
"comments" are just false, misleading & just plain ole lies! SHOW &
PROVE that it is NOT random. What is the PATTERN THEN???

I am DEMANDING A FULL RETRACTION OF YOUR COMMENTS & A FULL, COMPLETE
APOLOGY TO THESE AND ALL STATEMENTS.

I am a person who tries to work with people as a man w/o having to
"drag" others into the mess. Others? THE COURTS. You have violated
Calf law by your statements.

[Text of California Civil Code Section 46 deleted.]

Your LIES have damaged my respect in my job & has damaged any sales of
this routine. You have ZERO proof of your "comments," ANY OF
THEM!!! I beseech of you, do the RIGHT THING and comply. I DO NOT
wish to escalate this matter any higher. And remember this, Tree is
based on LANGUAGE, NOT MATH!!!!!!!!!!!!!!!!!

[Phone number deleted out of mercy.]

** *** ***** ******* *********** *************

CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses,
insights, and commentaries on security: computer and otherwise. Back
issues are available on .

To subscribe, visit or send
a blank message to crypto-gram-subscribe@chaparraltree.com. To
unsubscribe, visit .

Comments on CRYPTO-GRAM should be sent to
schneier@counterpane.com. Permission to print comments is assumed
unless otherwise stated. Comments may be edited for length and clarity.

Please feel free to forward CRYPTO-GRAM to colleagues and friends who
will find it valuable. Permission is granted to reprint CRYPTO-GRAM,
as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of
the best sellers "Beyond Fear," "Secrets and Lies," and "Applied
Cryptography," and an inventor of the Blowfish and Twofish
algorithms. He is founder and CTO of Counterpane Internet Security
Inc., and is a member of the Advisory Board of the Electronic Privacy
Information Center (EPIC). He is a frequent writer and lecturer on
security topics. See .

Counterpane Internet Security, Inc. is the world leader in Managed
Security Monitoring. Counterpane's expert security analysts protect
networks for Fortune 1000 companies world-wide. See
.

Copyright (c) 2004 by Bruce Schneier.
