This is the second in my series of posts about talks I enjoyed at ACM CCS. The first was here.
As some of you may know, my master's thesis involved creating a spam detector based on the workings of the human immune system. Forgoing modesty, I'll say that my system was pretty cool (I even got slashdotted), but I couldn't see myself doing spam research forever -- there are only so many times you really want to stand up in front of a room full of academics and try not to make viagra jokes.
I digress. But when I saw the paper entitled "Spamalytics: An Empirical Analysis of Spam Marketing Conversion" on the program, I knew which track to choose for that session.
They wanted numbers showing click-through rates on spam: how much money spammers really make nowadays, and how many people see those emails. Obviously, the spam kings aren't inclined to be cooperative on this front, so the researchers had to get creative. How they got the numbers is interesting in and of itself: they broke into the Storm botnet and subverted some Storm controllers so that a number of the bots would send out spam altered to use links the researchers could track. The text for these email advertising campaigns remained the same; only the links changed.
The question did come up as to whether this was ethical, as the test did involve unwitting human subjects, but they asserted that these people would have gotten the spam anyhow, and at least their links were malware-free.
Three campaigns were chosen as the focus of their study. The first was a standard pharmaceutical campaign; I'm sure you're all familiar with those. The second and third were postcard and April Fools' messages designed to infect more computers with the botnet software: self-propagation for Storm.
I highly recommend you check out their paper for the detailed results, but the things I found most interesting were as follows:
(1) Very little mail actually got through to the recipients.
Using dummy addresses on popular webmail servers and an email address hidden behind the popular Barracuda spam-filtering appliance, they found that less than 0.005% of mail got through in most cases. Roughly 75% of messages appeared to be dropped by the servers before delivery was even completed -- likely due to blacklisting at the server level -- and most of the rest were dumped into spam folders.
(2) Very few users visited the sites in question
(3) Some people did "infect" themselves by clicking through to the postcard/April Fools' site
(4) Far fewer people ordered pharmaceuticals. In fact, so few did that it's unlikely the campaign could have made money!
The final conclusion was really the most fascinating one: they gauge it as highly unlikely that the pharmacy site could have made any money given the costs of renting the botnet to send spam. In fact, they guess that spam sending would have to be 20 times cheaper for the pharmacy site to make a profit!
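If you like, you can see the shape of that argument with some back-of-the-envelope JavaScript. Every number below except the 0.005% delivery rate is invented for illustration; the real figures are in the paper.

// Toy spam economics -- all figures except the delivery rate are made up.
var sent = 100000000;              // 100 million spam messages sent
var delivered = sent * 0.00005;    // 0.005% reach an inbox = 5,000
var conversionRate = 1 / 2500;     // hypothetical: 1 buyer per 2,500 delivered
var revenuePerOrder = 100;         // hypothetical: $100 per order
var revenue = delivered * conversionRate * revenuePerOrder;  // $200
var rentalCost = 40 * (sent / 1000000);  // hypothetical: $40 per million sent
console.log(revenue, rentalCost);  // $200 in revenue vs. $4,000 in costs

With numbers of that shape (again, mine, not theirs), sending spam would indeed have to be about 20 times cheaper to break even.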
Could it be that spam doesn't pay?
The authors suggest that the pharmaceutical spams must be sent by the owners of the botnets (who thus wouldn't have to pay the rental cost), but I propose an alternate theory: that the only people making money from spam are the people who get paid to run the botnets. Those renting don't know that they won't make money, and the botnet owners sure aren't going to tell them. No, they'll just keep sending low-profit spam to keep up illusions that there are fantastic profits to be made (otherwise why would people send them, right?).
Maybe if I'm lucky, I'm right, and eventually the would-be spam senders will notice and stop paying exorbitant prices for botnets. But I'm afraid I don't hold out too much hope. Still, a very interesting paper, with some very interesting results!
Monday, December 8, 2008
Web Insecurity.net
Web Insecurity.net just got a facelift!
Hope you like the new design. There are a few quirks to be ironed out with the Blogger template, but things are definitely looking shiny and new over here!
Labels: meta
Thursday, November 27, 2008
Physical key security (highlights from ACM CCS)
I recently attended the security conference ACM CCS, and I wanted to share some of the talks I really enjoyed at the conference. Many of these are a little outside the scope of web security, but I think you'll find them interesting too!
Today's post is about the paper Reconsidering Physical Key Secrecy: Teleduplication via Optical Decoding by Benjamin Laxton, Kai Wang and Stefan Savage at the University of California, San Diego. This one was almost out of scope even for the conference (which is Computer and Communications Security) because it focused on physical security, and the computer was only involved as a tool to break it.
Mechanical locks and keys are a staple of physical security. A basic key is a piece of metal with notches along one side. When pushed into a lock, the key moves a set of tumblers inside the lock so that the whole thing can be turned, allowing the door (or whatever) to be opened. The thing to note about keys, in this case, is that for a given key manufacturer, those notches only have a set number of possible depths, and there are only a set number of notches. The whole key can be represented as a string of numbers showing the notches.
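To make that concrete, here's a toy example in JavaScript. All the numbers are invented; real keyways vary by manufacturer.

// A key represented as a "bitting code" (all values made up).
var positions = 5;           // notch positions along the key
var depths = 10;             // possible cut depths per position
var key = [3, 5, 2, 7, 4];   // one specific key, notch by notch
// The entire keyspace for this hypothetical manufacturer:
console.log(Math.pow(depths, positions));  // 100,000 possible keys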
So what they did is build a system that could take a picture of a key and produce that string of numbers. Once you have the string, you can enter it into a key-cutting machine, and voila, you have a copy of that key. (In fact, some keys they showed actually had this number written on them for easy duplication in case they were lost!)
The thing that was perhaps a little disturbing is how easily they could do this. They could duplicate a key from all sorts of photos, with keys at all sorts of angles. They showed a lot of online photos of people's keys and mentioned the popular "what's in your bag?" meme. Their web searches found many keys that their system could decode and duplicate... often people even gave the address that went with the keys!
Then they got into stuff that really seemed to come out of a spy movie. With a bird spotting scope and a digital camera, they started taking pictures of keys that were further and further away... at 35 feet they could duplicate the key every time. At 65 feet, it took two guesses before they could get all keys. At 100 feet, still only three guesses were necessary. And then they climbed onto the roof of one of the university buildings and took a picture of a set of keys 195 feet away on a table below, and still managed to decode one of them correctly. James Bond apparently could use some modern academic research!
The take-home message here? If you want to keep things physically secure, you'd better make sure no one sees the keys! For more information, check out the complete paper.
Labels: academia, CCS, physical security
Monday, October 27, 2008
SOMA at ACM CCS
I'm off to present at ACM CCS this week. We're talking about our simple web security solution, SOMA. It's a pretty neat little system -- turns out a handful of simple rules can be used to block a lot of current web attacks.
We call it "Same Origin Mutual Approval" because the idea is that all servers involved in making a web page all have to approve before anything gets loaded or included in the page. This means the site providing the page as well as any sites providing content (eg: youtube, flickr...) have to agree that that's ok. It's very simplistic, but surprisingly powerful because a lot of web attacks rely on the fact that the browser currently includes anything without checking, letting attackers include nasty code or send information out by loading other content.
I'm hoping to have my presentation slides online after the conference is done, but for now, I recommend you take a look at the SOMA webpage. There's a brief explanation along with links to our technical report, and the ACM CCS paper should be available soon too.
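To give a feel for the mutual approval idea, here's a rough sketch in JavaScript. The function names and data shapes are simplified stand-ins of my own, not the actual mechanism from the paper:

// Sketch: content loads only if BOTH sides approve (names are made up).
function somaPermits(pageSite, contentSite, manifestFor, approves) {
  // 1. The page's site must list the content provider in its manifest.
  if (manifestFor(pageSite).indexOf(contentSite) === -1) return false;
  // 2. The content provider must also approve serving to this site.
  return approves(contentSite, pageSite);
}
// e.g., a page on example.com embeds a youtube.com video only if
// somaPermits('example.com', 'youtube.com', ...) returns true.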
We call it "Same Origin Mutual Approval" because the idea is that all servers involved in making a web page all have to approve before anything gets loaded or included in the page. This means the site providing the page as well as any sites providing content (eg: youtube, flickr...) have to agree that that's ok. It's very simplistic, but surprisingly powerful because a lot of web attacks rely on the fact that the browser currently includes anything without checking, letting attackers include nasty code or send information out by loading other content.
I'm hoping to have my presentation slides online after the conference is done, but for now, I recommend you take a look at the SOMA webpage. There's a brief explanation along with links to our technical report, and the ACM CCS paper should be available soon too.
Labels: academia, CCS, javascript, SOMA, web security
Wednesday, October 15, 2008
What constitutes new? Why buzzword bingo might help security.
Last week, I was reading through the web security mailing list. The topic of the day was ClickJacking, which of course had come under fire because it's not really that new. Critics accused it of being just another useless trendy buzzword applied to a specific style of Cross Site Request Forgery.
This caught my attention for two reasons:
(a) This was my first reaction to the announcement. I'd talked about this sort of attack with colleagues at the university months (maybe over a year?) ago. My first experience with what is now called clickjacking was a car ad that overlaid a huge chunk of a page I was visiting. It was a Flash thing that just made a car drive across the page. Harmless, except that it happened to cover something I wanted to click on at the time. And it made me realise there's no reason a click supposedly on that ad couldn't result in me clicking something else on the page that I didn't want to click... I've been suspicious of those "x to close" things on ads ever since.
If I'd realised I could just give it a shiny new name and publish, we could have gotten some nice papers out of it. Oh well. It seemed so obvious, though, what was the point?
(b) This was actually one of the reactions we got for the next paper I'll be presenting at a conference. Roughly translated, the reviewer said "It's not really that new an idea, but it's a nicely combined set of protections." The reviewer recommended us anyhow and the paper was accepted.
I didn't agree that our solution wasn't novel, but I could definitely agree that it clearly synthesized ideas from other sources (in fact, we'd made this clear in the paper!). If we assumed that anything made from wood was more or less the same and not novel or worthy of note, Ikea would be out of business, though. ;) It's an important part of science to learn which things are related and how they can influence each other. Why shouldn't it be a useful part of computer science?
The author of this web security mailing list post got me thinking further about buzzwords and media-awareness, however:
"Which one is the proper way to describe the attack vector? The one labeled with the shiny new name or the one with the more technically-accurate name? And which one had the most positive impact, that is, which one educated the most people? And finally, should security researchers package security issues for media consumption?"
As someone with a fair amount of biology training, I know the answer to this. People connect much better to the Sugar Maple than they do to its scientifically useful name, Acer saccharum. Do you care about Danaus plexippus or is it the words Monarch Butterfly that would bring to mind the delicate migrators? And honestly? As long as you don't overdo it, having "common" names for things just makes it easier to communicate about them.
And communicating about web security issues is clearly something we need to do. With many web programmers convinced that they don't need to write secure code because they're not handling traditional targets such as credit cards, a lot of people are left at risk. Part of the reason is that security sounds complex, and it's filled with "if you mess this up at all, your entire system is insecure" warnings that lead people to throw up their hands. Everyone knows how easy it is to make a mistake, so what's the point?
If a new name and some media attention helps people communicate and maybe even realise that they are at risk and that mitigating it might be a good idea, we might be one step closer to a more secure world. "Oh, that's not new," may be true, but it can lead people to believe that they can go back to their dangerous assumptions that all is well in their worlds...
So next time, I'm going to think twice about dismissing the latest buzzword. It may be doing more good than I think!
Labels: buzzwords, clickjacking, communication, web security
Monday, September 15, 2008
Where's the JavaScript
As part of some investigation for my thesis, I made myself a little add-on for Mozilla Firefox that shows where in a page JavaScript has been included. I'd been doing this sort of investigation by reading the code myself, but although that told me useful things, it wasn't ideal for communicating what I found to other people.
My add-on shows inclusion of new JavaScript (using a script tag) by putting a red border on the parent tag, and it shows JavaScript called from the onMouseover, onLoad, onClick, etc. attributes in blue.
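The core idea is simple enough to sketch in a few lines. This is not the add-on's actual code, just an approximation of the effect:

// Sketch (not the real add-on): outline script inclusions in red...
var scripts = document.getElementsByTagName('script');
for (var i = 0; i < scripts.length; i++) {
  if (scripts[i].parentNode && scripts[i].parentNode.style)
    scripts[i].parentNode.style.border = '2px solid red';
}
// ...and elements with inline event-handler attributes in blue.
var handlers = ['onmouseover', 'onload', 'onclick'];
var all = document.getElementsByTagName('*');
for (var i = 0; i < all.length; i++) {
  for (var j = 0; j < handlers.length; j++) {
    if (all[i].getAttribute(handlers[j]))
      all[i].style.border = '2px solid blue';
  }
}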
One of the most interesting things I've found is that JavaScript turns up in relatively predictable places. If there's an expanding menu, there's probably some JavaScript. Certain types of forms. Content that you'd expect to be external. Links that involve pop-ups. Embedded content from other sources.
Take a look at the way the add-on colours this weather site:
Once you've seen a few of the things it colours, you could guess a lot of the rest.
The question now is... Can this predictability be a helpful tool in developing more secure web pages?
Labels: javascript
Tuesday, July 29, 2008
What does security mean for web 2.0?
"Clearly there is no widely accepted view of what security means in the Web 2.0 software development era. We're still trying to figure things out and convince ourselves that we have the right answer. Or that someone does."
This is taken from a survey of web application security professionals. It's not a terribly scientific survey by any means, but I think it's interesting reading despite vague questions and a somewhat undefined audience.
The above quote really sums up what I got out of the article: no one's really sure what web security means. The addendum is that people seem to feel more is needed, but there's general skepticism about the existing tools (see, for example, the section on web application firewalls, where 54% of respondents said they were skeptical although open-minded, or the question above it on web application vulnerability scanners).
The survey mirrors the sorts of impressions I've been getting from people I talk to both locally and at conferences, so if you're curious about what people think of web security, I think it's worth checking out the pretty graphs given in that survey as well as the author's commentary.
Labels: link, security professionals, survey, web 2.0, web security
Friday, March 28, 2008
More cuteness in JavaScript comments:
//OhNoRobot.com search code for The Devil's Panties at
// devilspanties.keenspot.com
//OhNoRobot is powered by hugs. Also: Javascript!
Labels: cute, javascript
Thursday, March 6, 2008
After much resistance, I finally joined Facebook... because I wanted to see what their JavaScript looked like. I admit, I could have done this without signing up properly, but just as I was contemplating signing up, someone sent me an email with links to baby pictures and I finally caved.
Anyhow, I haven't seen anything too spectacularly interesting in their code yet, but this snippet did make me laugh:
function URI(uri){if(uri===window){Util.error('what the hell are you doing');return;}}
Classy, eh?
Labels: javascript
Friday, February 15, 2008
Wait, did that look like that before?
Wait a second... In a previous post, I noted that gmail just quietly downgraded to HTML if you didn't have JavaScript turned on. But today, I noticed this message:
They could use a small fix to their formatting (i.e., don't let the poor text jam into the side of the box like that -- I had to grab some of the surrounding window so this screenshot would be legible), but this is strangely more helpful than it was before.
Why the difference?
Well, much as I'd like to believe someone at Google saw my comments and made the change, I'm not quite arrogant enough to believe that's true. Although I suppose it could be -- there are a lot of Google people out there, and for all I know they've got something that scans Blogger for mentions of their products. It would be a clever, if time-consuming, way to find out what the public really thinks.
Err, I digress. Self-centred worldviews aside, I'd guess it more likely that this message has always been there, and I just missed it last time because of my NoScript configuration.
Why do I find this interesting? Well, I'm currently working on a theory that users will be safer if they can disable JavaScript they don't really need to run the page. This is the theory underlying NoScript, and it has some face validity. But if users start running only some JavaScript, what will that do to the usability of the web? My current answer is that if you leave JavaScript off entirely, some pages turn into a usability nightmare where things just don't work (more on this later). But these different error messages based on my various setups suggest that you may have usability problems even with partial JavaScript. In fact, the usability problems may be much worse, because the page won't know to generate an appropriate error message!
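Here's a sketch of why, with hypothetical element and function names: the usual detection patterns assume JavaScript is all-or-nothing.

// The page's own script runs (its domain is allowed), so the fallback
// warning written into the markup gets hidden as usual...
document.getElementById('js-warning').style.display = 'none';
// ...but a function defined in a blocked third-party script no longer
// exists, so the feature breaks silently, with no message for the user.
initVideoPlayer();  // hypothetical name; throws ReferenceError when blocked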
I don't know how to solve this problem yet, but I guess that's what makes this research!
Another cute error message
One of my labmates pointed this one out:
That is possibly the most adorable of the JavaScript error messages.
Labels: cute, disabling javascript, error messages, javascript
Sunday, February 10, 2008
Patented JavaScript
Another interesting line I turned up in examining JavaScript:
//hbx.js,HBX2.0,COPYRIGHT 1997-2006 WEBSIDESTORY,INC. (Wrapped by me so you can read it)
//ALL RIGHTS RESERVED. U.S.PATENT No.6,393,479B1 &
//6,766,370. INFO:http://websidestory.com/privacy
Now, given how much JavaScript I've found that's obfuscated, I shouldn't be too surprised to see patent numbers in there, but I was!
Labels: copyright, javascript, patents
Best software conditions ever
This is just too amusing not to share. Got to love the conditions (I've coloured them to stand out) on this particular piece of code:
/*
Copyright (c) 2005 JSON.org
Permission is hereby granted, free of charge, to any person
obtaining a copy of this software and associated documentation
files (the "Software"), to deal in the Software without
restriction, including without limitation the rights to use,
copy, modify, merge, publish, distribute, sublicense, and/or
sell copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following conditions:
The Software shall be used for Good, not Evil.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.
*/
I turned that up on CNN.com as I'm just roughly examining various types of code to see if I can see obvious similarities.
I wonder if the author believes CNN.com is good or evil?
Labels: copyright, cute, javascript
Saturday, February 9, 2008
The web without JavaScript. Part 2: Black Holes and Revelations
As I implied in Part 1, while sites do sometimes provide helpful error messages related to JavaScript, often as not they just behave strangely.
Perhaps the most common issue I've seen is missing content. The things I notice most often are missing ads and missing video. Sometimes, it's nice and obvious that there's a missing element on the page:
Many pages leave very obvious spaces for their ads, and when they're filled with blank space, it's fairly obvious that there's a problem.
The videos are less obvious, however:
There's a video in there. Really. Normally, it would appear right below the header, so the page would look more like this:
There you can see the video loading in the big black box. But how would you tell that the previous page had anything missing? The page has nicely moved the text up, leaving no trace that there should be something there. In the case of the missing video, there are usually only a few clues:
- The page looks abnormally short (there isn't much text)
- I'm expecting a video on the page, and it's not there.
- I happen to check the JavaScript list from NoScript and notice something that looks like video.* or sounds like a domain that might host video.
Usually, the winning clue is #2, since a friend will send me a link and mention that it's a video, or the comments on the page will talk about the video, or sometimes the text itself will tip me off by what it says.
And often you'll see both kinds on the same page: an obvious blank space for one missing element, and another that vanishes without a trace. The page featured below would normally have both an ad and a video:
Could you tell there was a video on this page? You can see the blank space for an advertisement, but the text automatically moves up so you can't tell that the page with the video looks like this:
That's the video in bright yellow at the bottom there.
But it gets even more fun when you've changed which sites are JavaScript disabled in NoScript. Check out that same site with all the JavaScript disabled:
They're pretty smart! If they can tell that JavaScript is disabled (i.e., I've disabled it for the main site), then they both provide the helpful error text AND they provide an ad, showing that you don't really need JavaScript to do it. Unfortunately, my weird way of disabling some JavaScript but not others limited their ability to do damage control on the page I was trying to break. Interesting...
Next up in this series: Sites that have more than a few holes, and sites that just don't work without their JavaScript!
Monday, February 4, 2008
What does the web look like without JavaScript? Part 1: Error Messages
So what does the web look like without JavaScript? This post focuses on the error messages you see when you decide to ditch the JavaScript, but the sad reality is that although some sites will give you warnings, this is hardly the norm. Still, it's worth looking at what you might see...
Without JavaScript, occasionally the web looks like this:
That's a nice big red error message indicating that there's no JavaScript. Simple, clear, informative, lets you know where to go for help, or even lets you use the website for things that don't require JavaScript.
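As an aside, pages typically produce a message like this with an HTML noscript block, or by shipping the warning in the markup and hiding it with script. A one-line sketch of the latter approach, with a hypothetical element id:

// This line only runs when JavaScript is enabled, so a warning written
// into the page markup shows up exactly when scripts don't run.
document.getElementById('javascript-warning').style.display = 'none';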
In a similar vein, you sometimes get error messages like this one:
I find it hilarious that it first tells me that JavaScript is turned off, then tells me what to do in the event that JavaScript redirection isn't working... even though if I'm seeing this page at all, JavaScript redirection clearly isn't working. But maybe I'm too easily amused.
Anyhow, similarly, it lets you know in nice big red letters what the issue is and how to fix it. Good good.
But this isn't the norm among pages. Sometimes, you get error messages more like this one:
Well, it could be JavaScript, or maybe something else is wrong. Here's how to get Flash Player! Err, that's almost helpful. I can see a lot of people reinstalling Flash Player and assuming it was broken when JavaScript is the real culprit.
Also, although it's fairly clear where the error message is when you've got a nice little page fragment like this, it's pretty easy to miss that black text on a page with lots of black text and little images and video responses and so on and so on. Especially if you're looking at a video site where really, you're scanning the page for the big video window and mentally blocking out all the text, which you know isn't what you came to the page to see.
And then there's the not-quite-an-error message route:
Okay, so I know that the reason gmail is showing in basic HTML is that I don't have JavaScript enabled, because I've been out messing with it. But if you, say, sat down at my computer and tried to log in to gmail, you'd be asking me why it looks so funny on the mac. Or at least, that's how the friends who've tried to use my laptop reacted when I left things like this.
I do love how Google automatically downgrades when possible (and it does this with a lot of services), but sometimes it might be worth letting people know why they're seeing the reduced interface. This is really apparent if you use Google Maps, which only gives driving directions (no maps!) if you have JavaScript disabled and search for directions from one address to another instead of for a single address. Very confusing if you're not the one who disabled JavaScript, or if you did it because of some unrelated thing and didn't realise it was going to break the web.
But it's still better than no error message at all combined with pages that just don't work, which seems to be very common. Stay tuned for more broken pages!
Labels: disabling javascript, error messages, javascript, usability
Friday, February 1, 2008
Want to be safe from malicious web scripts?
Want to be safe from malicious web scripts? The solution, apparently, is to disable JavaScript.
It's always that last line of the security bulletin, the reminder that if we just didn't run this code, we'd be safe from the latest Facebook abuse, bad mojo in Yahoo, or whatever the (bad) flavour of the week is. But really, you might as well tell people that the only way to protect their computer is to turn it off, lock it in a dark bunker disconnected from the world, and throw away the key. Sure, that'll keep it from getting the latest piece of web crud, but the machine won't do you very much good.
Think I'm exaggerating? Try turning off JavaScript and see how long you last before you need to turn it back on. The first time I tried it, I lasted half a day before I needed to change some configuration on my router and found that the settings pages wouldn't even load properly with JavaScript disabled.
However, I was raised by scientists. My parents are the sort of people who, when the stove clock broke, gave it to me and my brother, showed us how to use some screwdrivers and other hand tools, then let us experiment on the remains. I'd love to claim we somehow fixed it, but no, we just found new ways to break it and put parts of it back together in weird ways. But my parents are smart people: taking things apart and breaking them does teach you a fair bit about them. And now that we're older, we can put them back together as well as take them apart.
So with that thought in mind, I realised that if I was going to build a safer web, I needed to know how to take it apart and put it back together. In the "breaking things" phase, I decided I needed a nicer way to turn JavaScript on and off on a whim so I could see what else didn't work. Thankfully, Firefox has a lovely little add-on called NoScript which lets me disable or enable JavaScript on a per-domain basis. I wouldn't recommend it to novices, but I'm a trained professional, so I set out to learn some stuff.
With that tool, I was ready to start breaking my web.