Tuesday, August 18, 2009

There are a lot of technology groups in Wellington. I decided to put together a list of all of them so that we can see if we clash on our regular meeting days. Please let me know if your group is missing, or if the details need updating.

Most of these groups run free events with the support of their sponsors!

For more info about geek events in Wellington, head over to wellington.geek.nz or dot.net.nz.

Microsoft technology focussed:

Other technologies:

Technology 'agnostic':

Happy Geeking!

Kirk

posted on Tuesday, August 18, 2009 2:26:03 PM (New Zealand Standard Time, UTC+12:00)  #    Comments [2]
 Friday, August 14, 2009

Craig, Owen and I (all from Xero) went along to the Wellington Summer of Code session last night to demo Visual Studio, the .NET runtime and ASP.NET MVC to 30-ish eager and willing University Students.

It was an interesting time. Allfields hosted us in a couple of their training rooms, which was pretty cool as the students got to follow along using their own copies of Visual Web Developer. The Allfields facility is pretty good - each room had about 20 PCs for students to use, and the guys there set up a video link between the two rooms.

Students: If you've got .NET questions, be sure to sign up to the dot.net.nz mailing lists.

I'm looking forward to meeting the students again as the programme continues, and hopefully to working with one of them at Xero!

Kirk

posted on Friday, August 14, 2009 10:02:20 AM (New Zealand Standard Time, UTC+12:00)  #    Comments [0]
 Monday, July 13, 2009

If you store, transmit or process credit card data, PCI applies.

How can OWASP help you with PCI compliance?

Credit card data:

  • Primary Account Number (PAN): can be stored, but protection is required.
  • CVD (the 3-digit card verification number) and magnetic stripe data can never be stored.

Card data attacks have been increasing in sophistication.

PCI-DSS affects anyone who transmits, processes or stores payment card data. E.g. merchants, service providers (e.g. Paymark, DPS).

Look at the 12 requirements of PCI-DSS (firewalls, storage, etc.)

Protecting stored data:

You must not store sensitive authentication data. Principle: if you don't need it, don't store it. Consider outsourcing, truncation, tokenisation.

Tokenisation: Replace PAN with a unique identifier "token"

Truncation: don't store all the data (e.g. first 4, last 4 digits)

Encryption: Encrypt at point of capture, only decrypt when required, use industry standard encryption, protect your keys.
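Truncation is easy to illustrate. Here's a minimal Python sketch that keeps only the first four and last four digits of a PAN, as suggested above (the function name and masking character are my own choices; real systems should follow the exact truncation rules in PCI-DSS):

```python
def truncate_pan(pan):
    """Truncate a PAN, keeping only the first 4 and last 4 digits.

    A sketch of the 'don't store all the data' idea: the stored
    value can no longer be used as a card number.
    """
    digits = pan.replace(" ", "")
    return digits[:4] + "*" * (len(digits) - 8) + digits[-4:]

print(truncate_pan("4111 1111 1111 1234"))  # 4111********1234
```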

Developing secure applications / Test app was built securely / Use secure coding guidelines:

Standard OWASP guidelines

Annual risk assessment:

Every year, new threats will affect your site. Go and re-assess against the new threats.

 

Fixing legacy systems: make sure no old data is lying around.

Real life example: it's very easy to mess up (example of reverting to old code)

Parting thoughts: achieve, maintain and validate compliance. Secure development is a key activity. OWASP is a good source. Reduce storage of PAN data.

posted on Monday, July 13, 2009 3:46:55 PM (New Zealand Standard Time, UTC+12:00)  #    Comments [0]

Bug chaining - an idea that hasn't really propagated yet.

How do we rate how severe a bug is? Consider how easy it is to exploit, where it is accessible from (client-side, server-side, internet, local, mass exploitable, targeted exploit, etc).

Audience attempted to rate the severity of a couple of bugs:

  • SQL injection on authenticated site -> medium/high
  • File upload php files on authenticated site -> high/critical
  • Local file disclosure -> medium/high
  • XSS - reflective, authenticated -> low/medium

Is attacker considered 'authenticated' once there is an XSS attack? Any subsequent attacks can be treated as authenticated.

When you join together the XSS bug with the file upload bug, then it's critical!

Bug chaining: taking multiple bugs and chaining them together to create exploitable vulnerabilities. Instead of looking at each individual bug, look at how they can be combined together.

There are now frameworks to help chain together exploits - and this is how a lot of worms now work.

Recent examples of chaining exploits: PHPMyAdmin <= 3.1.3; SugarCRM <= 5.2.0e - compromise server through 3 bugs together.

How to deal with this? CVSSv2:

  • Common Vulnerability Scoring System v2.0
  • Scoring system for assessing bugs
  • Considers exploit complexity, application location, authentication, target likelihood etc
  • Can be very complex, time consuming, difficult to follow

"You can explain this stuff all day, but when network admins actually see you do it, that's when they understand" Brett Moore

VtigerCRM - a large open-source CRM system which fixed problems with a security patch, but didn't link to the fix (and hadn't installed it themselves!).

He wrote a BeEf module for VtigerCRM that can run as an auto-run module (took less than 2 hours to write):

  • Chains file upload and XSS bug to upload a malicious PHP script to start a command shell
  • Connection is from the server to the attacker's machine, so the user doesn't need to stay connected

Summary:

Don't look at severity of individual bugs - need to look at how bugs can be joined together.

Understand the bugs.

Follow the OWASP coding and testing guidelines.

Tools:

  • BeEf - command console for an attacker to run script on the client computer. Modular list of exploits, with control of multiple victims. Autorun modules execute automatically within 1.5-2 seconds.
posted on Monday, July 13, 2009 2:57:28 PM (New Zealand Standard Time, UTC+12:00)  #    Comments [0]

Firefox extensions: They're just software, like ActiveX. Extend, modify and control the browser.

Firefox extension points:

  • XUL: XML user interface language
  • XBL: XML Binding Language - logical behaviour of widgets
  • XPCOM: Reusable components, interface to file system etc.
  • XPConnect: Allows Javascript to connect to XPCOM
  • Chrome: Special browser zone that is fully trusted by Firefox - code is fully trusted, and has access to the filesystem, user passwords etc.

Mozilla security extension model is non-existent. All extensions are fully trusted by Firefox - no boundaries between extensions, they can modify each other without the user knowing. Can be coded in C++ and subject to memory corruption etc.

Extensions are very popular (billion downloads) and can be found everywhere - social networks, search engines, software packages (skype, anti-virus), anti-phishing toolbars.

Biggest problem is the human side of things - addons.mozilla.org recommends extensions and adds a 'recommended' icon next to them. Extension source code isn't reviewed by third parties ("It's not the Linux kernel").

There's no protection from an extension with a security problem; it will bypass any other phishing / malware protection extensions.

Extensions aren't signed (even the Mozilla ones), so we can't rely on people checking signatures.

If an extension is originally trusted, then subsequent updates won't go through the same review process.

No current guidelines exist for testing a Firefox extension, so security-assessment.com have come up with their own methodology (whitepaper to be released this year or early next year):

  • Isolated testing: Only test one extension at a time, on different OSes with different Firefox versions.
  • Information gathering: How does the extension work, how is it installed? Look inside the extension package (a zip file) and look for malicious files (e.g. .exe, .msi etc)
  • Look for XPInstall API functions that are dangerous (e.g. executing code on install)
  • Look for suspicious files in the extension folder (e.g. softlinks to other directories)
  • Look inside install.rdf - some tags can hide extensions so they don't appear in the addon manager
  • Extensions can have the same description as other installed extensions, so two appear in addon manager
  • Does the extension try to trick the user into thinking it's verified?
  • Look for pointers outside the extension, or flags that expose the extension object or content to untrusted code (e.g. contentaccessible=yes or xpcnativewrappers=no)
  • Extensions can be merged into the Firefox UI - e.g. top toolbar, bottom status bar. They can also modify existing buttons, e.g. the Reload, Back, Forward or Home button.
  • Use the extension. Check the DOM of a test page with the extension loaded (they used MozRepl to do this)
  • Debugging: can set breakpoints using Javascript debugger.
  • Sandbox: can be sidestepped by replacing code inside the sandbox or evaluating it from outside
  • XPCOM components: .dll or .so - compiled code that the extension may ship with, or may use existing components on the machine. May need to review source code or decompile. A bunch of components to watch out for.
  • wrappedJSObject: removes the protection of the XPCOM component wrapper, bypassing Firefox's protection.
  • Watch out for callback functions, which may be replaced / modified
  • window.OpenDialog: Opens any URI with elevated chrome privileges
  • Auth: Some expose credentials in plain text, e.g. GET or basic auth
  • Auth: Some expose functionality via javascript that can side-step normal process
  • Skype extension - a javascript call that any web page can use to start dialling your Skype to any number
  • XSS: Watch out for XSS issues - can execute in the chrome zone from DOM events, embedded XSS, recursive iframes
  • XSS: Extensions loading external scripts

They have applied their methodology to different extensions, and some responses have been slow or non-existent!

Here are some extensions that were demoed and had problems. They are all common or Mozilla recommended (all these have been fixed):

  • FireFTP: Could include malicious code in the welcome method of an FTP server, and the browser would execute it. Showed a proof of concept sending the contents of win.ini to a different server, and using BeEf to control client.
  • CoolPreviews: Susceptible to XSS if a data:// URI is used. Showed a remote code execution when right-clicking on a link and previewing it with CoolPreviews.
  • WizzRSS: HTML and Javascript in the <description> tag of RSS feeds is executed in the chrome zone. Showed a reverse shell onto the Windows machine from a malicious user's machine.

Extension developers and vendors haven't got a security disclosure process yet, and don't know how to deal with the issues. Some extensions don't even publish an email address for the author.

Tools:

  • Firebug
  • MozRepl
  • BeEf - command console for an attacker to run script on the client computer.
posted on Monday, July 13, 2009 2:19:53 PM (New Zealand Standard Time, UTC+12:00)  #    Comments [0]

With the shift to web services, where we rely on the client to secure things, we have to remember not to trust the client.

Gave a methodology for testing web services:

  • Service discovery:
    • Look for WSDL or similar files that contain service info, using search engines, site spidering or looking at app behaviour
  • Method discovery:
    • Look inside the WSDL to see what methods are available, or if there isn't one, you can brute force the webservice with common method names to find ones that exist.
  • OWASP top 10. These still all apply to web service calls, including:
    • Malicious file execution, insecure direct object reference,
    • CSRF with AJAX clients
    • Information leakage
    • Broken auth and session mgmt
    • Insecure crypto storage
    • Insecure communications - SSL is important
    • Failure to restrict URL access - protect admin etc web services from anonymous access
  • Web service specific tests:
    • XML issues (external entities, malformed XML, recursive XML, XML entity expansion, XML attribute blowup, overlarge XML and CDATA injection)
      • External entities can reveal details inside the secure network, and allow CSRF-style attacks against machines in there.
    • WS-Routing issues
  • WS-Security is not a panacea - secures the method integrity and confidentiality, but doesn't stop bad stuff coming through.

Tools shown:

posted on Monday, July 13, 2009 11:47:27 AM (New Zealand Standard Time, UTC+12:00)  #    Comments [0]

If you don't own the 3 OWASP books, you've failed.

We're still facing the same vulnerabilities we already have, because we are doing something wrong. Maybe it's security professionals that are doing something wrong, by not educating developers properly.

Big security companies still having problems with their websites.

Most vulnerabilities are well known.

Security people don't write code; developers do. They don't "get" security:

  • Don't fix the root cause
  • Don't understand the threat
  • Most have never seen a vulnerability exploited

Sitting down with developers and stepping them through a vulnerability helps show them the light; they then understand and think about vulnerabilities.

Talk today designed to show developers exploits in action.

Tools shown:

  • Burp - proxy tool for intercepting requests
  • A custom sitemap tool that Insomnia uses
  • An MS-SQL enumeration tool that takes a vulnerable URL and pulls out all the DB info, using the master db to enumerate tables
  • ASPX Spy - if you can get this ASP.NET file up on to a server and run, it provides a UI for playing around with the OS.
  • SQL Map - an automatic SQL injection tool - can enumerate the DB, even if the data is not displayed by inferring the state of the db based on the page output.

Problems shown:

  • Robots.txt is not a place to list parts of your site that you don't want people to know about :)
  • Buying -1 quantity of a $1000 book leads to the user's credit on the shopping site increasing by $1000 :)
  • XML parsing vulnerability that allows external entities to be referenced in the XML provided to a web service - which can pull the contents of a file off the server.
  • Query string parameters passed to the command interpreter, and used for file names.
  • PHP include lets you include PHP source from another web server (looks like you need to disable URL fopen wrappers).
  • Only securing GET requests to an admin directory.
  • Showed a fake version of the CCIP website with multiple problems.
  • Admin interface for a website is exposed to the internet.
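The "-1 quantity" bug above is a classic failure to validate input on the server. A minimal sketch of the missing server-side check (the names are illustrative, not from the talk):

```python
def order_total(quantity, unit_price_cents):
    # Never trust a quantity posted by the client: a negative value
    # would turn a purchase into a credit, as in the demo above.
    if not isinstance(quantity, int) or quantity < 1:
        raise ValueError("quantity must be a positive integer")
    return quantity * unit_price_cents

print(order_total(2, 100_000))  # 200000
```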

Open questions:

  • Who owns server configuration? Architects, developers, system administrators? If server or framework config changes, then we're insecure.
  • Is it the security professionals' job to make sure problems are corrected?
posted on Monday, July 13, 2009 10:37:46 AM (New Zealand Standard Time, UTC+12:00)  #    Comments [0]

Paul raised the question: "Is internet security getting better or worse?"

By 2004 we had bought lots of security products, and port 80 was the only open port left (default DENY). Hackers started hacking web apps instead.

Classic ASP was easy to hack, until 2005 when vendors started releasing safer technology frameworks (2005? We were using them in 2002).

Note: ASP.NET doesn't have XSS protection built in, unless you leave ValidateRequest on (which no-one does), as controls only sporadically escape their output.

Paul looked at Security-Assessment's old pen-test projects and compared their vulnerabilities to those found in recent engagements.

"In 2003-2005, web application developers were F$%^&* bad"

"Developers fail at anything to do with files"

But the situation hasn't got much better lately. Admin sections are still accessible, SQL injection is still found (though less common), and file uploads still allow directory traversal.

When developers use framework security controls, they're okay. If they use custom security code, they mess it up.

"Less vulnerabilities in 2009 resulted in a shell"

"Security only works flawlessly when it's already implemented in the framework" - when developers build their own code, they normally mess it up.

Summary: The internet is getting more secure, but we're not there yet! An attacker only needs one bug to get into a system.

posted on Monday, July 13, 2009 9:44:40 AM (New Zealand Standard Time, UTC+12:00)  #    Comments [0]
 Wednesday, June 17, 2009

Well, what did I learn while at Code Camp last weekend?

  • Wellington .NET dev community is passionate and quite diverse
  • Objective-C is more Smalltalk-ish than I realised from previous snippets I had seen
  • Xcode IDE is less 'integrated' than Visual Studio
  • I now know more about CRM and other Microsoft solutions
  • A panel discussion (Usability or Security) can be fun when the audience participates
  • How to make my code slightly more maintainable
  • Code contracts gel with me more than Spec# did, and I like them
  • F# continues to be awesome and yet awe-inspiring
  • Sync framework looks like a good solution for occasionally connected apps, with a good set of functionality out of the box
  • And I demoed a beta IDE in a beta VM on a beta OS (VS2010 in Windows Virtual XP on Windows 7)

Sponsors are awesome!

Go Go Gadget Karting!

I had fun at the go-karts. The winners of the team event were Simon and Bert:

posted on Wednesday, June 17, 2009 11:41:30 PM (New Zealand Standard Time, UTC+12:00)  #    Comments [0]
 Wednesday, June 10, 2009

I'm looking forward to the Code Camp this weekend. We've got an interesting range of talks lined up over the two days, and I'll be doing a short talk on what's coming in Visual Studio 2010.

As well as organising the finances and food, I've organised the social event for Saturday night. It's going to be fun!

Code Camp Social Event

6:30pm, Sat 13 June @ North City Indoor Raceway 3 Raiha St, Porirua http://bit.ly/g6gLf

Food at 7pm, racing shortly after. Finish by 9pm.

A family-friendly go-kart race, with geek-against-geek action.


The karting is at the North City Indoor Raceway: http://www.ubd-online.co.nz/indoorkarting/ (see "The Races")

There will be a team race where each team will relay through each driver 3 times, giving 30 laps of racing per person. Everything is computer-timed to find out which team wins, and spectators are welcome to watch.

The food will be BBQ/Salad/Chips, and you can BYO drinks (I'll hopefully have some money left to bring a little along).

Karting plus food: $40
Food only: $10

Spouses and older kids are welcome to kart at the above prices, or come along just for food and cheer your team on!

Many thanks to our sponsors: Whitireia, Xero, Microsoft MVP, INETA, DTS

posted on Wednesday, June 10, 2009 10:37:39 AM (New Zealand Standard Time, UTC+12:00)  #    Comments [1]
 Thursday, May 28, 2009

I visited Napier at lunchtime today to present at the Hawkes Bay .NET User Group.

The presentation was a mixture of my earlier web security talk and the talk I gave recently on the Anti-XSS library which helps when you need to encode untrusted data.

Download File - Presentation

Subscribe to my blog: http://pageofwords.com

Cheers!

Kirk

posted on Thursday, May 28, 2009 10:56:37 PM (New Zealand Standard Time, UTC+12:00)  #    Comments [3]
 Friday, May 22, 2009

When is it not safe to load an XML file into an XmlDocument object?

Any time the source is untrusted, it turns out:

Tom Hollander: Protecting against XML Entity Expansion attacks

That's one I haven't heard of before, and shows why every input from an untrusted source should be treated with care.
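To see why entity expansion is dangerous, consider the arithmetic of the classic "billion laughs" payload: each nested entity references the previous one ten times, so the expanded size grows exponentially. A rough sketch of that arithmetic (the sizes and level counts are illustrative):

```python
def expanded_size(levels, refs_per_level=10, base_len=3):
    """Size of the fully-expanded text for a nested-entity payload.

    base_len is the innermost string (e.g. 'lol'); each nesting
    level multiplies the expanded size by refs_per_level.
    """
    return base_len * refs_per_level ** levels

# Nine levels of ten references each: a few hundred bytes of XML
# expand to ~3 GB of text, exhausting the parser's memory.
print(expanded_size(9))  # 3000000000
```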

It reminds me of the zip expansion attacks that used to break mail servers 8 or so years ago:

Zip expansion attack. A large uniform file (for example, 1 GByte of zeros) is zipped and e-mailed. AV or content filtering products attempt to unzip the attachment for checking, but are unable to do so because of lack of disc space. [ecommnet]

The old expanding file trick. What will they think of next?

Kirk

posted on Friday, May 22, 2009 8:54:43 PM (New Zealand Standard Time, UTC+12:00)  #    Comments [0]
 Wednesday, May 20, 2009

While it's not always like this...

... it's true that Rod does zoom around the office (although not always on the Segway).

[Rod on the Telecom Business Hub]

Kirk

posted on Wednesday, May 20, 2009 11:53:38 PM (New Zealand Standard Time, UTC+12:00)  #    Comments [1]
 Monday, May 18, 2009

I presented a talk at the Wellington and Auckland .NET user groups this month titled "Best Practices -  Caching". The goal of the talk was to discuss why we might need to add caching to our applications, and the way that we typically add it to each layer:

  • Client-side: reducing data flowing to the server, enable caching through expiry etc
  • ASP.NET: stashing data; page-level, fragment, IIS caching
  • Business layer: cache objects to avoid computation
  • Data layer: cache raw data from the database; identity maps
  • Database: reduce hits on disk

The difficult part when caching at any layer is invalidating the redundant data stored in the cache when the source data changes. How easy this is depends on the type of data:

  • Reference - shared reads (e.g. Catalog)
    • Easy to cache and distribute
  • Activity - exclusive write (e.g. Cart)
    • Can cache each user's data separately
  • Resource - shared, concurrency read/write, large number of transactions (e.g. Auction bid)
    • Caching is hard
    • DB is best source of data, with careful caching
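For reference-type data like a catalog, the simplest invalidation strategy is a time-to-live: accept slightly stale data and reload after it expires. A minimal sketch (class and parameter names are my own, not from the talk):

```python
import time

class TtlCache:
    """Cache values for a fixed number of seconds, then reload."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, time stored)

    def get(self, key, loader):
        entry = self.store.get(key)
        now = time.monotonic()
        if entry is not None and now - entry[1] < self.ttl:
            return entry[0]               # fresh hit
        value = loader()                  # miss or stale: reload
        self.store[key] = (value, now)
        return value

cache = TtlCache(ttl_seconds=60)
print(cache.get("catalog", lambda: "loaded from database"))
```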

The second half of the talk we looked at two caching technologies - memcached and Velocity.

The presentation: Caching.pdf 

Some links:

Kirk

posted on Monday, May 18, 2009 10:40:13 PM (New Zealand Standard Time, UTC+12:00)  #    Comments [1]
 Friday, April 03, 2009

I appreciate good humour more than I appreciate politics, and most of the credit I gave to our former prime minister Helen Clark was for her sharp wit.

It's great that we have a funny guy as our prime minister in New Zealand:

Hat tip to Rod on our Xero blog

Kirk

posted on Friday, April 03, 2009 9:22:15 PM (New Zealand Standard Time, UTC+12:00)  #    Comments [0]
 Wednesday, April 01, 2009

I've just been awarded Cobol Developer of the year for 2009!

It has been a great ride at Xero, from releasing our first beta little more than 2 years ago, to racking up our 6000th customer this week.

Some people doubted us for picking a VSE/ESA environment and the IBM compiler, but the support for 31 bit addressing and dynamic calls really accelerated our development of a Web 2.0 software product.

I'd like to thank all the other forward thinking members of our team for choosing and building on a great platform, and encouraging me to achieve this award.

Kirk

posted on Wednesday, April 01, 2009 10:49:07 AM (New Zealand Standard Time, UTC+12:00)  #    Comments [4]

Deleting your POP3 mailbox using telnet, since Gmail doesn't do it properly :)

 

I'm using Gmail to check and download my Paradise (ISP) email. This means I can read (almost) all of my personal email in one place.

Gmail appears to only have one option for deleting mail: "Leave a copy of retrieved messages on the server". If you don't set this option, it immediately deletes your mail from the POP server after downloading it to Gmail, which means that you can't check it with an alternate client.

Other mail clients allow you to leave mail on your mail server for a number of days, so I normally set this to 7 days so that if I need to fire up a different client or use my ISP's mail, then I can see recent email. Gmail doesn't have this option, which means if you don't delete mail from your POP account, it will eventually fill up.

For completeness, the sequence of commands to type into telnet to delete a bunch of your mail:

> telnet pop3.paradise.net.nz 110

USER <username>  // Your POP username
PASS <password>  // Your POP password

STAT                        // Lists the number of messages (e.g. +OK 1108 19255723, which means 1108 messages)

// Then for each message
DELE 1
DELE 2
...                         // I used a spreadsheet to quickly generate a list of DELE's from 1 to 1108

Mission accomplished. An empty POP mailbox without installing (or writing) any extra code :)
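That said, if the mailbox is huge, the same POP3 conversation can be scripted with Python's standard poplib module. A sketch (host and credentials are placeholders for your own):

```python
import poplib

def delete_all(host, user, password):
    """Delete every message in a POP3 mailbox, like the telnet session above."""
    conn = poplib.POP3(host)           # port 110 by default
    conn.user(user)
    conn.pass_(password)
    count, _size = conn.stat()         # STAT: number of messages in the mailbox
    for i in range(1, count + 1):      # DELE is 1-indexed
        conn.dele(i)
    conn.quit()                        # deletions are committed on QUIT
    return count

# delete_all("pop3.paradise.net.nz", "<username>", "<password>")
```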

Kirk

posted on Wednesday, April 01, 2009 10:23:21 AM (New Zealand Standard Time, UTC+12:00)  #    Comments [0]
 Thursday, March 19, 2009

Well done to the SilverStripe team for getting into the new Microsoft Web Platform installer:


The installer helps people get web applications up and running in a flash, and it's great to see SilverStripe alongside 9 other big-name web apps. This should be great for the initial out-of-the-box experience for their users, and for exposure to new users.

See Nigel's blog for more details.

posted on Thursday, March 19, 2009 12:10:30 PM (New Zealand Standard Time, UTC+12:00)  #    Comments [0]
 Wednesday, March 18, 2009

Jeff Moser writes How .NET Regular Expressions Really Work.

I've got a soft spot for regular expressions (programming Perl can do that to you), and while I understand backtracking and greedy / lazy matching, I've never actually read the source code for a regex library before.

If you haven't either, and want to benefit from someone else's description of the 14,000 lines of .NET regular expression library, you'll enjoy this post.

Kirk

posted on Wednesday, March 18, 2009 10:31:31 PM (New Zealand Standard Time, UTC+12:00)  #    Comments [1]
 Tuesday, March 17, 2009

Ewan Tempero, a lecturer and supervisor of mine from my days at VUW (now at Auckland Uni) is part of a survey to find out what we actually practice in software engineering, so that they can compare it to what is being researched and taught:

sefolklore.com

Fill in the survey, it will take less than 10 minutes.

(Watch out for the 'rank the following 6 statements' question -- you can only put a rank against one statement)

Kirk

posted on Tuesday, March 17, 2009 9:10:01 AM (New Zealand Standard Time, UTC+12:00)  #    Comments [0]
 Wednesday, February 25, 2009

Encoding is "the process of transforming information from one format into another" [Wikipedia]

In the web development world, when we talk about encoding text, we are normally talking about taking some input text and making it appropriate to use in a given context. For example, taking the user's first name and last name and making them safe to put in a <b> tag within an HTML page.

We care about encoding most when we take input that we don't trust from our users - if we ever display that input we have to be careful to remove any characters that may interfere with the display of our web pages, cause javascript to run, or allow other malicious actions.

This article will help you understand what encoding is, why you need to do it and how that helps prevent cross-site scripting, and give a little introduction to the AntiXSS library.

A bold example

As a running example, let's say we are letting the user enter anything they want for their name - in an input box like this on our website:

Text box to collect name from the user

We then take the text they enter and store it in our database. Later on when we display it on the web page, we wrap the text in bold tags so that it stands out:

Welcome to the website, Kirk!

In ASP.NET one way of doing this would be to put an ASP.NET label between <b> tags:

Welcome to the website, <b><asp:Label ID="NameLabel" runat="server"></asp:Label></b>!

...and then in the code behind, take the name from our database and assign it to the Text property:

User user = GetFromDatabase();

NameLabel.Text = user.Name;

Trust no-one

The problem is, we've received this name directly from our user (who, of course, we shouldn't trust), and we've stored it in a column in our database (which we now can't trust), and now we can't safely display it on our website without sanitising it or making it trustworthy.

The number one lesson I try to give in my presentations on web security is "Don't trust...". You can't trust your user, you can't trust your employees, your students, or even your mother. There is no such thing as "safe input" received over the Internet; everything you receive is suspect.

(Even people who are otherwise trustworthy might not be in control of their faculties if they have spyware or are virus-infected)

Everything is fine if the user enters only ASCII characters:

User enters

But what happens if the user enters some html into the input box?

The user enters html, the page layout changes.

The user is now able to change how our page looks! Indeed, they can inject HTML, script or other content directly into pages on our website!

This is known as Cross-site scripting, or XSS, and is the bane of our existence as web developers.

What went wrong?

The ASP.NET label outputs the Text directly into the HTML output of the page:

<p>
    Welcome to the website, <b><span id="NameLabel">Kirk </b><i>Jackson</i></span></b>!
</p>

The problem here is that the ASP.NET label is not encoding the text before outputting it. The text is not appropriate to use in an HTML context, as it contains characters that have meaning in HTML (namely the characters making the </b> and <i> tags).

To make the user's name safe to use in an HTML context, we need to encode the inappropriate text to be safe in an HTML context:

Kirk &lt;/b>&lt;i>Jackson&lt;/i>

HTML Encoding

HTML encoding is turning a string into a safe block of text for insertion in an HTML web page.

This means it should not use any of the special characters that are used to mark the beginning or end of tags (< and >), attribute values (") or the ampersand character on its own (&). If those characters are left in the string, they could be used to start or stop HTML tags and change the behaviour of our page.

To remove these characters, HTML encoding requires them to be turned into character entity references, or numeric entity references. This stops them from being treated as special characters for formatting an HTML page, and just treats them as a character to be displayed.

Original character       Character Entity Reference    Numeric Entity Reference
< (less-than sign)       &lt;                          &#60;
> (greater-than sign)    &gt;                          &#62;
" (double quote)         &quot;                        &#34;
& (ampersand)            &amp;                         &#38;

The above table shows a few examples of how to encode special characters. For a more complete reference, see Wikipedia or W3C.

Note that since the ampersand character is used to start an encoded character sequence, it can't be used on its own as a regular character. This is why ampersands should be encoded as &amp; in HTML.

Once the user's name is encoded, it will then be in the HTML as &lt;i> instead of <i>, which means that in the above example, italic mode won't turn on:

The users text is now encoded correctly.

The screenshot above looks a little weird, but the page now displays the text exactly as the user typed it, without treating the user's input as special HTML markup.
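The same transformation in code, sketched here with Python's standard html module for brevity (in the article's ASP.NET setting you would reach for Server.HtmlEncode or the AntiXSS library instead):

```python
import html

# The running example: the user typed closing/opening tags into the name box.
name = 'Kirk </b><i>Jackson</i>'

encoded = html.escape(name, quote=False)  # encodes &, < and >
print(encoded)  # Kirk &lt;/b&gt;&lt;i&gt;Jackson&lt;/i&gt;
```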

Attribute Encoding

Attribute encoding is turning a string into a safe block of text for use within an attribute of an HTML tag.

Attributes are the name/value pairs on a tag node in HTML (or SGML and XML, for that matter). For example, in the following HTML, the a tag has a title attribute:

<a href="foo.html" title="test">thing</a>

The title attribute is displayed as a tooltip

The text inside the title attribute is used to create a tooltip when the mouse pointer hovers over the hyperlink.

This HTML contains an a tag (an anchor tag), which has two attributes set: href and title. The a tag also contains some HTML within it: the text 'thing'. The contained text must be HTML encoded if you only want text within the a tag, and the two attributes must be attribute encoded.

At a simplistic level, text is valid inside an attribute as long as it doesn't contain double quotes ("), ampersands (&) or less-than symbols (<), as the double quote would prematurely end the attribute, and the other two characters must be encoded anywhere they are used within an HTML document (except when creating tags).

To extend our earlier example, imagine the user's name is used as the tooltip of a link, popping up before they follow it. If we naively output the user's name as a title attribute without encoding it, the user could inject some additional behaviour into our page. e.g.

<a href="foo.html" title="<%= User.Name %>">thing</a>

If the user enters something malicious, for example by entering a double-quote followed by some javascript, then they have managed to inject extra HTML or javascript behaviour into our site:

User enters script into Name field

The hover for the hyperlink looks okay, but when the user clicks the link, malicious javascript can run:

Malicious javascript running

This is because the HTML that we have sent to the client's browser actually contains an onclick attribute that we didn't intend:

<a href="foo.html" title="Kirk" onclick="alert('Hi')">thing</a>

Encoding the user's data before sending it to the browser would have protected us from this, and then the HTML sent would look like this:

<a href="foo.html" title="Kirk&quot; onclick=&quot;alert('Hi')">thing</a>

Which correctly displays exactly what the user entered:

Tooltip now shows complete text entered
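The same principle can be sketched with Python's standard library (not the AntiXSS HtmlAttributeEncode method, but the idea carries over to any platform): escaping the double quote means the attacker's input can never close the attribute early.

```python
import html

# A malicious "name" that tries to break out of the title attribute
user_name = 'Kirk" onclick="alert(\'Hi\')'

# quote=True also escapes " (as &quot;), so it can't end the attribute
# early. (Python escapes single quotes too, which is harmless here.)
safe = html.escape(user_name, quote=True)
tag = '<a href="foo.html" title="' + safe + '">thing</a>'

# The injected onclick never becomes a real attribute:
assert 'onclick="' not in tag
```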

URL Encoding

URL encoding is turning a string into a safe block of text for appending to the query string of a URL.

The original specification for HTTP URLs (RFC 1738) specifies that URLs should only include certain characters, and all others must be encoded. This is similar to HTML encoding, but the set of allowed characters is much smaller, and the way you encode them is different.

To encode characters to append to a URL, you use a percent symbol, followed by the two-digit hex number representing that character. For example:

Original character Encoded form
space %20
/ (forward slash) %2F
" (double quote) %22
? (question mark) %3F

The above table shows a few examples of how to URL encode special characters. For a more complete reference, see Brian Wilson's URL Encoding page.

We need to encode strings before appending them to a URL, to make sure that untrusted input is not able to change the URL.

For example, if our page above constructed a URL to search Google for the name of the user entered into the website, it could look like this:

Construct a search url by joining two strings together

When the user clicks the link, they will search Google for their name.

Here the naive code is just constructing a url by joining the two strings together:

User user = GetFromDatabase();

string url = "http://www.google.com/search?q=" + user.Name;

But if a name with spaces is entered, then we're generating an invalid URL:

Create a url with spaces in it

The URL is invalid because it contains an illegal character - a space that should be encoded as %20.

We could also be opening our users up to cross-site scripting bugs, because we are effectively letting them create any url they want. For example:

Create a url with ampersands in it

Here we are appending the ampersand (&) that the user entered directly to the end of the url, so rather than their text being passed to the server as the "q" parameter, we're letting them add other query string parameters (in this case, the "I'm feeling lucky!" button). The solution in this case is to encode the ampersand as %26.
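The fix in both cases is to encode the value before appending it. Sketched with Python's standard library (urllib.parse.quote rather than the AntiXSS UrlEncode method, and a made-up injected parameter for illustration), the output is equivalent for these characters:

```python
from urllib.parse import quote

# Spaces become %20, so the URL stays legal:
url = "http://www.google.com/search?q=" + quote("Kirk Jackson")
# http://www.google.com/search?q=Kirk%20Jackson

# An injected ampersand becomes %26 (and = becomes %3D), so the
# attacker's text can't add extra query-string parameters:
url2 = "http://www.google.com/search?q=" + quote("Kirk&extra=1")
# http://www.google.com/search?q=Kirk%26extra%3D1
```

Either way, everything the user typed arrives at the server as the value of the "q" parameter, and nothing else.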

The AntiXSS library

The AntiXSS library (currently at version 3.0 beta) has been built by the Microsoft Connected Information Security Group [originally mis-credited here to the ACE Security and Performance Team - sorry!].

The library provides two related functions:

  • Encoding methods to make text safe for a variety of contexts
  • An HttpHandler to automatically encode your ASP.NET controls

I'll cover the Security Runtime Engine HttpHandler in another post.

The encoding methods have been built using more robust and secure coding practices than the existing methods in the HttpUtility class of the .NET framework, so you should use them in preference when encoding your data.

public class AntiXss
{
    public static string HtmlAttributeEncode(string input);
    public static string HtmlEncode(string input);
    public static string JavaScriptEncode(string input);
    public static string UrlEncode(string input);
    public static string VisualBasicScriptEncode(string input);
    public static string XmlAttributeEncode(string input);
    public static string XmlEncode(string input);
}

You need to decide which context you're outputting text into, and then choose the appropriate method to encode the text.

  • HtmlEncode - use for all HTML output, except for when you're adding text inside an attribute of a tag (e.g. use for <b>...</b>)
  • HtmlAttributeEncode - use for text that will appear inside attributes of tags (e.g. <a title="...">)
  • UrlEncode - use for text that you are appending as a value in a url query string (e.g. http://google.com/search?q=...)
  • JavaScriptEncode - use when you want to put the string into a javascript variable (e.g. var foo = '...'). This method will also create the surrounding quotes.
  • VisualBasicScriptEncode - use if you're unlucky enough to be creating pages with VBScript on them
  • XmlEncode, XmlAttributeEncode - the XML equivalents of the above HTML methods
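The discipline behind that list is "one encoder per output context". As a rough sketch of the idea (a hypothetical helper in Python, using stdlib encoders - this is not part of AntiXSS):

```python
import html
from urllib.parse import quote

# Hypothetical helper: one encoder per output context
ENCODERS = {
    "html": lambda s: html.escape(s, quote=False),        # <b>...</b>
    "attribute": lambda s: html.escape(s, quote=True),    # <a title="...">
    "url": quote,                                         # ?q=...
}

def encode_for(context, text):
    """Encode untrusted text for the given output context."""
    return ENCODERS[context](text)

print(encode_for("url", "Kirk Jackson"))   # Kirk%20Jackson
```

The hard part, as with AntiXSS, is correctly identifying the context at each output point; get that wrong and the encoding doesn't help.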

To use inline in your ASPX page, you can call the library methods directly:

<a href="foo.html" title="<%= HttpUtility.HtmlAttributeEncode(User.Name) %>">thing</a>

To use from your code-behind, decide whether your control outputs its content as an attribute or in an HTML context, and then call the appropriate method:

Label1.Text = AntiXss.HtmlEncode(User.Name);

Deciding which context you're in and which encoding method to use is a major annoyance, so be sure to look at the Security Runtime Engine, which does it for you. I'll write more about that in a future blog post, so please subscribe to my RSS.

Hopefully this article has helped you understand what encoding is; why you need to encode untrusted input and how that helps prevent cross-site scripting; and has given a little intro to the AntiXSS library.

Kirk

posted on Wednesday, February 25, 2009 3:57:16 PM (New Zealand Standard Time, UTC+12:00)  #    Comments [1]
 Saturday, February 21, 2009


http://creativefreedom.org.nz/blackout.html

Join The New Zealand Internet Blackout to protest against the Guilt Upon Accusation law 'Section 92A' that calls for internet disconnection based on accusations of copyright infringement without a trial and without any evidence held up to court scrutiny. This is due to come into effect on February 28th unless immediate action is taken by the National Party.

It's not about downloading illegal content. Copyright laws exist for a reason, and protect creators of content (and even users of GPL software). It's about laws that have been drafted foolishly and that reduce our rights.

Kirk

posted on Saturday, February 21, 2009 12:41:12 PM (New Zealand Standard Time, UTC+12:00)  #    Comments [0]
 Wednesday, February 18, 2009

Developer survey from Microsoft. Each answer you put in displays a different cartoon reflecting your choice. Fill in the survey here.

image

posted on Wednesday, February 18, 2009 9:34:37 PM (New Zealand Standard Time, UTC+12:00)  #    Comments [0]

I'll post the slides from my AntiXSS talk later, once I've cleaned them up. In the meantime, here's a couple of links:


Kirk

posted on Wednesday, February 18, 2009 9:20:27 PM (New Zealand Standard Time, UTC+12:00)  #    Comments [0]
 Friday, February 13, 2009

The twitter "don't click" messages are spreading like wildfire. It's a relatively benign form of clickjacking (analysis here) that tricks you into clicking a button, when actually you're clicking a hidden button on the twitter site that posts a tweet.

I've talked about clickjacking in Wellington, Auckland, Christchurch and Nelson, and while I don't know of a fool-proof way to protect yourself against clickjacking, you should do what twitter have done (and what I suggested at those talks) and include some frame-busting javascript at the top of every page in your site. Details are here: Framebusting in Javascript

Frame-busting works by unwrapping your site from being hosted inside an iframe. It won't stop all click-jacking attacks, and it won't protect all users, but like many security mitigations it's about layering several 90% solutions on top of each other to protect your users and your websites.
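The classic frame-busting pattern from that era looks something like this (a minimal sketch, not twitter's exact code; wrapped in a function here so the check is explicit):

```javascript
// Minimal frame-busting sketch (one common 2009-era pattern, not
// twitter's exact code). If this page is being displayed inside an
// iframe, navigate the outermost window to this page's own URL,
// "busting" out of the frame.
function bustFrames(win) {
  if (win.top !== win.self) {
    win.top.location = win.self.location;
  }
}

// In a real page you'd run it immediately on load:
// bustFrames(window);
```

As noted above, this isn't a complete defence on its own - it's one layer of mitigation among several.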

Kirk

posted on Friday, February 13, 2009 9:02:39 AM (New Zealand Standard Time, UTC+12:00)  #    Comments [0]
 Thursday, February 12, 2009

It was a nice sunny day in Nelson yesterday, and it was nice to have a little look at the scenery afterwards (thanks, Daniel!).

I presented a similar "Overcoming your web insecurity" talk that I gave in Auckland recently [slides], and it was good fun diving in to some depth in the extra time we had... hopefully I managed to scare some people!

 

Next Wednesday at the Wellington .NET Users Group, Owen Evans (who also works at Xero) and I will be presenting two sessions.

Owen will be doing a LINQ Refresher to get us up to speed with the LINQ syntax for selecting, grouping, where-ing and more.

I will be talking about the Anti-XSS library, which is now in beta. The library is pretty cool and helps a lot with encoding data before it ends up on your website :)

More details of the event are here: LINQ Refresher, Anti-XSS and SDE Libraries

 

Hope to see you on Wednesday!

Kirk

posted on Thursday, February 12, 2009 10:09:55 PM (New Zealand Standard Time, UTC+12:00)  #    Comments [0]
 Friday, February 06, 2009

Oisín Grehan has a good list of the new cmdlets in PowerShell 2 (currently in CTP3 and the Windows 7 beta):

http://www.nivot.org/2009/02/04/DifferencesBetweenPowerShell10RTMAndPowershell20CTP3Win7Beta.aspx

It's cool having a list of all 106 new cmdlets, including such useful ones as:

  • Test-Connection (ping)
  • ConvertFrom/To-CSV
  • Start/Stop/etc Jobs in the background
  • Get-Random (useful for drawing prize winners at user groups!)
  • ConvertTo-Xml

PowerShell 2 has a bunch of cool new features, and feels like it's getting real close now :)

Kirk

posted on Friday, February 06, 2009 9:32:39 AM (New Zealand Standard Time, UTC+12:00)  #    Comments [0]

I've got the afternoon off work this Wednesday 11 Feb, and am popping over to Nelson to present on web security (details below).

I hope to see you there!

Kirk

Daniel Ballinger wrote:
> Hi All,
>
> Kirk Jackson from the Wellington .NET user group will be in town on
> Wednesday the 11th of February and is giving a presentation.
>
> Title:
> Overcoming your web insecurity
>
> Abstract:

> As an ASP.NET developer, there are many things to think about while
> developing your web application. Come along to understand the
> fundamentals of developing a secure web application, and learn how to
> protect your site against the dangers of cross-site scripting, cross
> domain request forging and click-jacking.
>
> This session will be suitable for all levels of experience, and
> developers who use other web development platforms such as PHP or Java.
>

> Presenter:
> Kirk Jackson
>
> Useful links:
> http://pageofwords.com - Kirk's blog
>
> http://mscommunities.net.nz/ - The home of Microsoft communities in New Zealand
>
> When:
> Wednesday 11th February 2009
> Gather at 2:50 pm, starting at 3:00 pm.
>
> Approximately 1 hour 15 minutes plus pizza afterward.
>
> Where:
> FuseIT Ltd,
> Ground Floor,
> 7 Forests Rd,
> Stoke,
> Nelson
>
> (Off Nayland Rd and behind Carters)
> http://local.live.com/default.aspx?v=2&cp=-41.299774~173.236231&style=r&lvl=16&alt=-1000
> or
> http://maps.google.com/?ie=UTF8&om=1&z=17&ll=-41.299774,173.236231&spn=0.005239,0.010042&t=h
>
> If you are parking on site, please use the parks marked FuseIT that
> are at the back of the site.
>
> Giveaways:
> A single copy Microsoft Office 2007 Professional
>
> Catering: Pizza & Drinks
>
> Door Charge: Free
>
>
> RSVP to me if you are going to attend so I can guesstimate the food
> and drink requirements.
>
> However, feel free to turn up on the day though if you can't commit at
> the moment.
>
> Please feel free to invite anyone who may be interested in attending.
>
>
> Cheers,
> Daniel
>
> Daniel Ballinger
> Developer
> FuseIT ™

http://www.fishofprey.com/

posted on Friday, February 06, 2009 9:17:38 AM (New Zealand Standard Time, UTC+12:00)  #    Comments [0]
 Wednesday, January 28, 2009

A story told through links to web2.0 sites that you know and love: http://blueful.com/

A clever way to tell a story, although it's a bit weird not having the URLs hyperlinked.

(via the O'Reilly Radar)

posted on Wednesday, January 28, 2009 8:48:13 AM (New Zealand Standard Time, UTC+12:00)  #    Comments [0]
 Wednesday, January 07, 2009

Wired's Threat Level blog compiles their list of the top 7 technology-aided crimes of 2008: The Seven Best Capers of 2008

The list is quite a humorous read.

Some of the crimes are caused by the silliness of the affected business, so it almost seems mean to prosecute the criminal :)

posted on Wednesday, January 07, 2009 12:29:28 PM (New Zealand Standard Time, UTC+12:00)  #    Comments [0]