Showing posts with label academia. Show all posts

Tuesday, February 7, 2012

Andrew Tanenbaum on Security vs Fun-Loving Students

... "some modicum of security was required to prevent fun-loving students from spoofing routers by sending them false routing information."

- Andrew S. Tanenbaum regarding OSPF in Computer Networks (4th ed.)

Monday, October 11, 2010

Does expiring passwords really help security?


[Image: "Change is Easy", originally uploaded by dawn_perry]
I've heard a lot of arguments as to why expiring passwords likely won't help. Here are a few:

  • It's easy to install malware on a machine, so the new password will be sniffed just like the old.
  • It costs more: frequent password changes result in more forgotten passwords and support desk calls.
  • It irritates users, who will then feel less motivated to implement other security measures.
  • Constantly forcing people to think of new, memorable passwords leads to cognitive shortcuts like password-Sep, password-Oct, password-Nov...
And yet, many organizations continue to force regular password changes in order to improve security. But what if that's not what's really happening? Three researchers from the University of North Carolina at Chapel Hill have unveiled what they claim to be the first large-scale study on password expiration, and they found it wanting.

They focus especially on the idea that consecutive passwords will be related, and they built a system that tries a variety of transforms, such as changing which letter is uppercase, duplicating letters/numbers/symbols, and even "leet" translation (e.g. raven becomes r@v3n). The implications of their results are fairly clear and potentially disturbing for those who thought password changing was providing extra security in the case of a breach:

  • With offline attacks: "On average, roughly 41% of passwords can be broken from an old password in under 3 seconds."
  • With online attacks: "An average of 13% of accounts can be broken (with certainty) in 5 online guesses, and 18% can be broken in 10 guesses."
  • "As we expand our consideration to other types of transform trees, we would not be surprised to see these success rates jump significantly."
In essence, they've shown that changing passwords doesn't provide nearly as much security as system designers had hoped, and they suggest we abandon the practice rather than continue to annoy users with a policy that has been proven ineffective.
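The transform idea is easy to sketch in code. Here's a minimal, hypothetical illustration (my own toy, not the researchers' actual tool) of generating candidate guesses from an expired password:

```python
# A toy sketch of transform-based guessing: given an old password,
# generate likely successors to try. The transform set here is my own
# small example, not the paper's full transform trees.

LEET = {"a": "@", "e": "3", "i": "1", "o": "0", "s": "$"}

def transform_guesses(old_password):
    """Generate candidate passwords derived from an old one."""
    guesses = set()
    if not old_password:
        return guesses
    # "leet" substitutions, e.g. raven -> r@v3n
    guesses.add("".join(LEET.get(c, c) for c in old_password))
    # Toggle which letter is uppercase
    for i, c in enumerate(old_password):
        if c.isalpha():
            guesses.add(old_password[:i] + c.upper() + old_password[i+1:].lower())
    # Increment a trailing digit: password1 -> password2
    if old_password[-1].isdigit():
        guesses.add(old_password[:-1] + str((int(old_password[-1]) + 1) % 10))
    # Duplicate the final character: pass1 -> pass11
    guesses.add(old_password + old_password[-1])
    guesses.discard(old_password)
    return guesses

print(sorted(transform_guesses("raven1")))
```

Even this handful of transforms produces most of the "cognitive shortcut" successors people actually pick, which is why a small search can crack so many new passwords from old ones.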

Friday, July 9, 2010

Preparing some curricula on web security

Among the other cool things I'm doing this summer is working as a teaching assistant for 1.5 days' worth of tutorials on the subject of web security. This is part of my national research group's "summer school" program, where we try to give our graduate students more background in other areas of security. I'm working up a list of potential topics so we can get our teaching materials together.

So... What would you want to learn in a short course on web security? What do you wish other people knew about web security?

Here's my brainstorming list, to be updated as new things occur to me:

Attacks

Defenses

  • Best coding practices
  • Web Application Firewalls
  • Web Vulnerability Scanners
  • Tainting
  • Mashup solutions (e.g. MashupOS, OMash)
  • Policies (e.g. SOMA, BEEP, CSP)
  • Penetration testing techniques

Notes: The tentative plan is to separate things into a hands-on lab tutorial (probably using WebGoat) and a set of lectures, mostly running simultaneously. We're going to have some top-notch students here, since we're drawing from a pool of smart security researchers to start, so we can cover a lot of ground and go much deeper than we might when teaching developers with no security background.

Saturday, May 15, 2010

Subverting Ajax

I wrote this on 9/15/08 but never published it for some reason. The paper I'm discussing is still interesting, though, so here's the post, years late!

Today's paper is Subverting Ajax which was published in December 2006 at the 23rd Chaos Communication Congress. It is, as one might expect from the title, an overview of ways in which Ajax (Asynchronous JavaScript And XML) can be compromised.

You might think that since this paper was from 2006, many of these flaws would be closed, but sadly, the paper seems to retain its relevancy even in 2008.

Although the focus of this paper is on Ajax, particularly the case in which an attacker has placed another layer of communication "between" the browser and the server, it also covers a number of techniques that can be used in any JavaScript-based attack. For example, the wrapper used around the built-in XMLHttpRequest could potentially be used to subvert any built-in JavaScript object. Also clever is the use of proxies and iframes. To be honest, the attacks I've seen in the wild have not been this complex, but if we ever close the obvious holes, we can expect more subtle attacks to follow, and it's good to understand them in advance.

The one downside to this paper is that it is clear that the authors are not native English speakers, and I'm sorry to admit that there were places where I found their use of language distracting.

Overall, I'll have to recommend the paper, as it was recommended to me, but I have high hopes that owasp.org will produce easier-to-read documentation on Ajax-specific threats one of these days.

Tuesday, December 9, 2008

Spamalytics Show Spam Doesn't Pay


[Image: "SPAM!", originally uploaded by cursedthing]
This is the second in my series of posts about talks I enjoyed at ACM CCS. The first was here.

As some of you may know, my master's thesis involved the creation of a spam detector based on the workings of the human immune system. Forgoing modesty, I'll say that my system was pretty cool (I even got slashdotted), but I couldn't see myself doing spam research forever -- there are only so many times you really want to stand up in front of a room full of academics and try not to make viagra jokes.

I digress. But when I saw the paper entitled "Spamalytics: An Empirical Analysis of Spam Marketing Conversion" on the program, I knew which track to choose for that session.

They wanted to get some numbers showing click-through rates on spam, to see how much money spammers really are making nowadays, and how many people were seeing those emails. Obviously, the spam kings aren't inclined to be cooperative on this front, so they had to get creative. How they got the numbers is somewhat interesting in and of itself: they broke into the Storm botnet and subverted some Storm controllers so that a number of the bots would send out spam altered to use links they could track. The text for these email advertising campaigns remained the same; they only changed the links.

The question did come up as to whether this was ethical, as the test did involve unwitting human subjects, but they asserted that these people would have gotten the spam anyhow, and at least their links were malware-free.

Three campaigns were chosen as the focus of their study: one was a standard pharmaceutical campaign. I'm sure you're all familiar with those. The second and third were postcard and April fools' messages designed to infect more computers with the botnet software. Self-propagation for Storm.

I highly recommend you check out their paper for the detailed results, but the things I found most interesting were as follows:

(1) Very little mail actually got through to the recipients.

Using dummy addresses on popular webmail servers and an address hidden behind the popular Barracuda spam-filtering appliance, they found that less than 0.005% of the mail got through in most cases. The rest was either dumped into a spam folder or, for roughly 75% of messages, dropped by the servers before delivery was even completed, likely due to blacklisting at the server level.

(2) Very few users visited the sites in question

(3) Some people did "infect" themselves by clicking the postcard/april fools site

(4) Far fewer people ordered pharmaceuticals. In fact, so few did that it's unlikely the campaign could have made money!

The final conclusion was really the most fascinating one: they gauge it as highly unlikely that the pharmacy site could have made any money given the costs of renting the botnet to send spam. In fact, they guess that spam sending would have to be 20 times cheaper for the pharmacy site to make a profit!
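That conclusion is really just funnel arithmetic. Here's a back-of-the-envelope sketch of the kind of calculation involved; the delivery rate comes from the study's numbers above, but the click, purchase, revenue, and rental figures below are illustrative placeholders of my own, not the paper's exact values:

```python
# Illustrative spam-campaign economics. Only the delivery rate (~0.005%)
# is from the study; every other number is an assumed placeholder.

emails_sent      = 100_000_000
delivery_rate    = 0.00005   # ~0.005% reaches an inbox
visit_rate       = 0.005     # assumed: fraction of delivered mail clicked
purchase_rate    = 0.01      # assumed: fraction of visitors who buy
revenue_per_sale = 100.0     # assumed: dollars per order
cost_per_million = 80.0      # assumed: botnet rental per million messages

delivered = emails_sent * delivery_rate
sales     = delivered * visit_rate * purchase_rate
revenue   = sales * revenue_per_sale
cost      = (emails_sent / 1_000_000) * cost_per_million

print(f"delivered={delivered:.0f}, sales={sales:.2f}, "
      f"revenue=${revenue:.2f}, cost=${cost:.2f}")
```

With numbers anywhere in this ballpark, revenue comes out far below the rental cost, which is exactly the shape of the authors' conclusion: sending would have to get dramatically cheaper before the pharmacy campaign broke even.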

Could it be that spam doesn't pay?

The authors suggest that the pharmaceutical spam must be sent by the owners of the botnets themselves (who thus wouldn't have to pay the rental cost), but I propose an alternate theory: the only people making money from spam are the people who get paid to run the botnets. Those renting don't know they won't make money, and the botnet owners sure aren't going to tell them. No, they'll just keep sending low-profit spam to keep up the illusion that there are fantastic profits to be made (otherwise why would people send it, right?).

Maybe if I'm lucky, I'm right, and eventually the would-be spam senders will notice and stop paying exorbitant prices for botnets. But I'm afraid I don't hold out too much hope. Still, a very interesting paper, with some very interesting results!

Thursday, November 27, 2008

Physical key security (highlights from ACM CCS)


[Image: "do not forget the key", originally uploaded by purplbutrfly]
I recently attended the security conference ACM CCS, and I wanted to share some of the talks I really enjoyed at the conference. Many of these are a little outside the scope of web security, but I think you'll find them interesting too!

Today's post is about the paper Reconsidering Physical Key Secrecy: Teleduplication via Optical Decoding by Benjamin Laxton, Kai Wang and Stefan Savage at the University of California, San Diego. This one was almost out of scope even for the conference (which is Computer and Communications Security) because it focused on physical security, and the computer was only involved as a tool to break it.

Mechanical locks and keys are a staple of physical security. A basic key is a piece of metal with notches along one side. When pushed into a lock, the key moves a set of tumblers inside the lock so that the whole thing can be turned, allowing the door (or whatever) to be opened. The thing to note about keys, in this case, is that for a given key manufacturer, those notches come in only a fixed number of possible depths, and there are only a fixed number of notch positions. The whole key can therefore be represented as a short string of numbers describing its notches.

So what they did was build a system that could take a picture of a key and produce that string of numbers. Once you have that string, you can enter it into a key-cutting machine, and voila, you have a copy of that key. (In fact, some keys they showed actually had this number written on the key for easy duplication in case it was lost!)
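To make the "a key is just a string of numbers" point concrete, here's a toy illustration of my own (not the authors' system). The notch and depth counts are assumed round numbers for illustration, not any particular manufacturer's spec:

```python
# A physical key reduces to a "bitting code": one depth value per notch.
# NOTCHES and DEPTHS below are assumed illustrative values.

NOTCHES = 5    # assumed: notch positions on the key blank
DEPTHS  = 10   # assumed: distinct cut depths per notch

def keyspace_size(notches=NOTCHES, depths=DEPTHS):
    """Total distinct keys for this blank: depths ** notches."""
    return depths ** notches

# A decoded key is just a tuple of depths, e.g. as measured from a photo:
decoded = (3, 7, 1, 5, 2)
print("bitting code:", "".join(map(str, decoded)))
print("keyspace:", keyspace_size(), "possible keys")
```

Notice how small that keyspace is compared to even a weak password; once a photo leaks the bitting code, "two or three guesses" at long range stops sounding surprising.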

What was perhaps a little disturbing is how easily they could do this. They could duplicate a key from all sorts of photos, with keys at all sorts of angles. They showed a lot of online photos of people's keys and mentioned the popular "what's in your bag?" meme. Their web searches found many keys that their system could decode and duplicate... often people had even given the address that went with the keys!

Then they got into stuff that really seemed to come out of a spy movie. With a bird spotting scope and a digital camera, they started taking pictures of keys that were further and further away... at 35 feet they could duplicate the key every time. At 65 feet, it took two guesses before they could get all keys. At 100 feet, still only three guesses were necessary. And then they climbed onto the roof of one of the university buildings and took a picture of a set of keys 195 feet away on a table below, and still managed to decode one of them correctly. James Bond apparently could use some modern academic research!

The take-home message here? If you want to keep things physically secure, you'd better make sure no one sees the keys! For more information, check out the complete paper.

Monday, October 27, 2008

SOMA at ACM CCS

I'm off to present at ACM CCS this week. We're talking about our simple web security solution, SOMA. It's a pretty neat little system -- turns out a handful of simple rules can be used to block a lot of current web attacks.

We call it "Same Origin Mutual Approval" because the idea is that all the servers involved in making a web page have to approve before anything gets loaded or included in the page. This means the site providing the page, as well as any sites providing content (e.g. youtube, flickr...), have to agree that the inclusion is okay. It's very simple, but surprisingly powerful, because a lot of web attacks rely on the fact that the browser currently includes anything without checking, letting attackers include nasty code or send information out by loading other content.
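The mutual-approval rule itself fits in a few lines. Here's a simplified sketch of the idea in Python (my own illustration, not the actual SOMA implementation; the policy tables stand in for the approval files the sites would publish):

```python
# Sketch of SOMA's mutual-approval check: content loads only if BOTH
# the page's site and the content provider approve of each other.
# These hypothetical tables stand in for per-site approval files.

SITE_APPROVES = {            # page origin -> content origins it allows
    "example.com": {"youtube.com", "flickr.com"},
}
PROVIDER_APPROVES = {        # content origin -> page origins it serves
    "youtube.com": {"example.com"},
    "flickr.com":  {"example.com"},
    "evil.com":    set(),
}

def may_include(page_origin, content_origin):
    """Allow the load only when both sides have approved."""
    if page_origin == content_origin:
        return True          # same-origin content is always fine
    site_ok = content_origin in SITE_APPROVES.get(page_origin, set())
    provider_ok = page_origin in PROVIDER_APPROVES.get(content_origin, set())
    return site_ok and provider_ok

print(may_include("example.com", "youtube.com"))  # both sides approve
print(may_include("example.com", "evil.com"))     # blocked
```

The nice property is that either side can veto: a page can't be tricked into loading an attacker's script, and a provider can refuse to have its content embedded by pages it doesn't trust.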

I'm hoping to have my presentation slides online after the conference is done, but for now, I recommend you take a look at the SOMA webpage. There's a brief explanation along with links to our technical report, and the ACM CCS paper should be available soon too.