
Thursday, July 10, 2014

(T) Filtering or trimming Windows logs the right way, NOT by Event ID

Have you ever worked with Windows logs, enabled all or most of the auditing items, and felt there were way too many events, or too much noise?

Ever enabled 'Filtering Platform Connection' auditing to monitor Windows Firewall connections? This auditing option adds Event IDs 5156 and 5158 to your logs, and they will quickly land in your Top 10 of all events generated. If you enable success for 'Process Creation' you will get Event IDs 4688 and 4689. These four Event IDs will probably be your Top 4 events generated.

Enabling these two auditing items will add a ton of events to your logs. While they are stored on a system's local disk you won't notice a thing; forward them to a Log Management solution and you will find they add up, impacting disk space and licensing. But they are some of the greatest events Windows has to offer, so use them!

Jedi Tip: Ever wanted to know what IPs and ports a Windows application is using? Maybe to make a change on your enterprise firewall? Use 'Filtering Platform Connection - success' auditing to see all inbound and outbound connections to and from your Windows server or workstation. You can even use this data to refine your Windows Firewall rules to just the IPs allowed to reach an application, a security camera or remote access for example; see my last Blog entry HERE for tips on this one.
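As a rough sketch of the idea, assuming you have already exported Event ID 5156 records and parsed them into simple dictionaries (the field names and sample paths below are illustrative, not the raw Windows event schema), summarizing connections per application could look like this:

```python
from collections import defaultdict

# Hypothetical pre-parsed Event ID 5156 records; field names are
# illustrative assumptions, not the raw Windows XML schema.
SAMPLE_5156 = [
    {"app": "\\device\\hdvolume1\\camera\\camsrv.exe",
     "direction": "Inbound", "ip": "192.168.1.50", "port": 8080},
    {"app": "\\device\\hdvolume1\\camera\\camsrv.exe",
     "direction": "Outbound", "ip": "203.0.113.9", "port": 443},
    {"app": "\\device\\hdvolume1\\windows\\system32\\svchost.exe",
     "direction": "Outbound", "ip": "10.0.0.2", "port": 53},
]

def connections_by_app(events):
    """Map each application to the set of (direction, ip, port) tuples it
    used -- exactly the data you need to write a tight firewall rule."""
    summary = defaultdict(set)
    for e in events:
        summary[e["app"]].add((e["direction"], e["ip"], e["port"]))
    return summary
```

Run over a day of real 5156 data, the per-application sets give you the allowed-IP lists to plug into a firewall rule.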

So how do you enable an audit policy item and start collecting log data, yet filter out the unwanted noise? Most people do it by disabling auditing of the two items above, or by excluding the Event IDs, which is a terrible way to filter or trim logs, unless of course the Event ID is truly worthless and none of its events are useful to you, your admins or your dev folks.

If you filter out or disable Windows Firewall auditing (Event IDs 5156 and 5158), for example, then you can't see the inbound connections to your systems, trace remote connections by remote IP, track users surfing to IPs, or spot outbound malware C&C requests. You would be forced to do this at the network layer, where you cannot easily home in on a host and see what process is using the IP you are investigating.

If you filter out or disable Process Creation auditing (Event IDs 4688 and 4689), then you can't see the processes that have been launched on a system, or see which process called another process, like CMD.exe calling malware.exe.

Do you need to see or keep ALL of the events of the IDs just discussed? No. Look at the Process Names and Application Names you deem normal noise and exclude those, rather than eliminating by Event ID. Google Chrome's updater is incredibly noisy log-wise, yet probably not needed for InfoSec or forensic investigations. You could toss out GoogleUpdate.exe or Splunk*.exe and reduce the events of the four Event IDs mentioned by 50%, give or take, saving disk space and log management licensing. The image at the top shows exactly this filter, before and after.
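The approach can be sketched in a few lines of Python. The noise list here is an assumption you would tune per environment, and in a real log solution the same idea is usually expressed as a blacklist or filter on the Process/Application Name field:

```python
# Process/application names treated as known noise -- an assumption to
# tune for YOUR environment, not a recommended universal list.
NOISE_PATTERNS = ("googleupdate.exe", "splunk")

def keep_event(event):
    """Drop an event only when its process name matches known noise.
    Every other event of the same Event ID is preserved, unlike
    filtering by Event ID, which throws out the good with the bad."""
    name = event.get("process_name", "").lower()
    return not any(pat in name for pat in NOISE_PATTERNS)

events = [
    {"event_id": 4688, "process_name": r"C:\Program Files\Google\Update\GoogleUpdate.exe"},
    {"event_id": 4688, "process_name": r"C:\Windows\System32\cmd.exe"},
    {"event_id": 5156, "process_name": "splunkd.exe"},
]
kept = [e for e in events if keep_event(e)]  # only the cmd.exe event survives
```

The key design point: the filter keys on the noisy *name*, so a 4688 for cmd.exe still reaches your log store even though 4688s for GoogleUpdate.exe are dropped.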

If you want to try the free version of Splunk at home or in your lab, tossing out noisy events will help you stay under the 500 MB per day the Splunk eval license restricts you to. Each log solution has its own way to filter items out or blacklist them from being collected, but never, ever do it by Event ID, as you can and will lose valuable log data, unless you are certain the Event ID and all of its events are worthless.

Read the definitive "Windows Logging Cheat Sheet" I put together for Windows logging here for tips on what to enable, configure, gather and harvest.

#InfoSec #HackerHurricane #LoggingDoesntSuck

Monday, June 30, 2014

(T) How to use the Windows Firewall behind a NAT router

Ever wanted to open up a port on your home firewall, but restrict it more than a NAT router's raw 'port 1234 to IP' forwarding allows?

Restrict it to an actual application, or to one or two remote addresses, instead of exposing the whole system to anywhere on the Internet? Or easily change it when DHCP shifts the address of a local or remote system? Or allow a cloud service to send data back to one of your systems? Or allow a Web console you want to reach from work to check your security system?

While playing with Splunk I found that the logic of the Windows Firewall changes a bit behind a NAT router.

The Windows Firewall has three zones; you usually see the pop-up asking which to use when you join a new wireless network on your laptop, for example. You will see 'Public' and 'Private' (or 'Home', which is also 'Private'); there is also 'Domain' for AD-joined systems, which we generally don't have at home.

When you are behind a NAT router you can ignore or disable everything but 'Private'. Once you open port 1234 to an IP on your home router, via Port Forwarding, Gaming or whatever your router calls it, all traffic reaching the Windows Firewall is seen as 'Private', or inside your network. You can test this by disabling a 'Public' rule you have, for Remote Access for example, and it will still work, because the 'Private' rule applies once NAT is involved.

So everything you do will be Inbound Rules and 'Private'. Notice in the image above that I have VNC Server installed on my test system with its 'Public' rule disabled. Even though I remote to it over the Internet from a public address, once NAT port forwarding takes over it matches the 'Private' rule.

Also notice that I have a Splunk Web rule that is also 'Private'. All I have to do now is craft my NAT router to pass any port range I want to the IP I want, and then use the Windows Firewall to further restrict it by the remote IPs allowed to reach the computer or a specific application. This lets me narrow the port hole in my NAT router to only the systems I want to remote from, and to only the actual application (VNC Server.exe) on that port.

If your home IP changes due to DHCP renewals, then use a Dynamic DNS provider so you can refer to it by name instead of IP. This will require you to run a small utility on your system to send any IP changes to your Dynamic DNS provider.

Play with it and let me know any other tricks you might use.

#InfoSec #HackerHurricane

Monday, March 3, 2014

(T) A look at WHY the now infamous Credit Card breaches should have been detected and how YOU can detect these types of attacks

We are all too familiar with the Credit Card breaches that hit Target, Neiman Marcus, Michaels, Marriott, Holiday Inn, Sheraton, Westin, Renaissance and Radisson, possibly Sears and other retailers yet to be named.

Recently, some details of the malware have been released by iSight, McAfee, X-Force and others.  I took a look at the details around BlackPoS and Kaptoxa, as I could not believe organizations the size of Target, Neiman Marcus and Michaels, not to mention several hotels, were unable to detect such an attack.  So what would it take to "Detect and Respond" to such an attack?

First off, this malware was neither sophisticated nor overly stealthy.  It looks and acts like most malware and APT you would find, see or analyze.  As part of my "Malware Management Framework" program, I review malware such as this to see whether I could detect this attack or a similar one, and compare it to what I already see or know in order to tweak my detection capabilities.

First, let’s look at the diagram McAfee had in their report:
As we can see, there are multiple Point-of-Sale (PoS) systems connected to the 'Exfiltrator' system.  This system apparently sent the CC files via FTP to a compromised Internet server, from which the criminals then harvested the credit card data.  By the way, FTP is often used to send transaction reports off PoS terminals to a central system, so FTP may be normal traffic to many.

It is not clear from the report(s), but for the sake of a starting point let's assume this 'Exfiltrator' system, call it 'Patient 0', is where the attack against the PoS systems originated once the malwarians were in.  We know from Neiman Marcus that the malware was deleted every day and reinstalled after a PoS reboot.  This suggests a 'Patient 0' system re-infected the PoS systems as they came back up, which means there would be some type of traffic from 'Patient 0' to each PoS system, and a LOT of it if this occurred daily.  According to reports, almost 60,000 events were generated (the articles say alerts, but I don't believe that).

With these assumptions and details we can build a pattern of the noise such an attack would have made, and what we should or could do to detect and respond to such an event.  So let's take a look at how to detect and respond to this type of attack.  I will use Windows 7 rather than Windows XP, since most of you are (hopefully) running a current Windows operating system, but everything discussed still applies to Windows XP, just with different Event IDs.  I will be referencing the latest Event IDs; if someone converts them to XP/legacy IDs I will gladly post them.


1.  FTP – The FTP traffic that went from the PoS systems to 'Patient 0', or vice-versa, should have been detected.  Even if we assume FTP is normal traffic from or to the PoS, and even if the proper central FTP system was also 'Patient 0', this should have been detected.
a.  Transaction log uploads are scheduled and deterministic.  It is highly unlikely that the malwarians used the exact same schedule and the exact same central system.
b.  NetFlow should have been used to detect this type of traffic: FTP is in the clear, but the data sent by the malware was encrypted and would have looked like gibberish.
c.  The traffic outbound from ‘Patient 0’ should have been caught as well as it was going to an untrusted source.  All your FTP servers should have known IP’s that it would connect with and any variance alerted to.
d.  Any FTP server should have been configured to collect all logins and activity, and 'Patient 0' should have been no different.  This system received FTP transfers from many PoS systems, and that behavior should have been detected as abnormal outside the normal transaction log upload time.

2.  Port 445 – Windows communicates over port 445 for shares mapped from one Windows system to another.  The malware mapped an "S:" drive to another Windows system used by the malwarians.  This traffic should also have been known in a static environment, and thus any variance alerted on.

3.  Logs – Windows logs, if configured, would have seen all kinds of noise from this malware.  If auditing was enabled on the PoS systems or 'Patient 0', noise would have been made, a LOT of it!  Collection of known malicious behavior should have detected it.  Windows 7 Advanced Auditing should be set to 'success and failure' for many items; see a future Blog article for what to set, or better yet come to BSides Austin for the Windows Logging Workshop.  For Windows XP, all audit logging should have been set to 'success and failure' and collected locally, or better yet sent off to a syslog server or SIEM solution.  For the sake of argument, and the sheer noise it produces, let's assume the Windows Firewall is off, so no Packet Filtering logging is enabled.

a.  Execution of the FTP client should have been detected.  Event ID 4688 triggering on ‘ftp.exe’ should be part of malicious tools and activity monitoring.

b.  Connection between systems should have also been detected.  Event ID 4688 triggering on ‘net.exe’ should be part of malicious tools and activity monitoring.

c.  The PoS system when connected to a share should have also been detected.  Event ID 4624 triggering a ‘Logon Type 3’ network logon should be part of malicious network activity monitoring.


d.  The system the PoS connected to should have had the share connection detected.  Event ID 5140 would list that the share 'Test' was connected to and what IP address made the connection, and should be part of malicious network activity monitoring.


e.  The disconnect from that share should also have been detected.  Event ID 4634 would list that the 'Logon Type 3' session was 'logged off', and should be part of malicious network activity monitoring.


4.  CMD Shell executed – A command shell was used on ‘Patient 0’ as well as each PoS system.
a.  Event ID 4688 triggering on ‘cmd.exe’ should be part of malicious tools and activity monitoring.


5.  PSExec executed – The use of PSExec is very suspicious and should not be used by anyone except an administrator.

a.  Event ID 7045 triggering on ‘PSEXECSVC.EXE’ or ‘PSEXEC.EXE’ should be part of malicious tools and activity monitoring.

6.  The Malware – The malware files that were used should also have been detected in many ways.  The logs at a minimum could/would detect that something malicious was occurring.

a.  Event ID 7045 triggering on ‘Bladelogic.exe’ or ‘POSWDS.EXE’ (dum.exe) should be part of malicious tools and activity monitoring.


b.  Event IDs 4688 & 4689 should have triggered as the new unknown applications named 'Bladelogic.exe' or 'POSWDS.EXE' (dum.exe) started and stopped, and should be part of malicious tools and activity monitoring.


c.  File Auditing is not heavily used, but in static environments like a PoS system or file server, or on every administrator system you have, it would be easy to set up File Auditing on a handful of directories, or just exclude the noisy ones.  Had File Auditing been enabled, new files being added to \System32 would have triggered on the files dropping and on the credit card data file 'winxml.dll'.  Add File Auditing to '\System32', '\Drivers', '\WBEM', '\Fonts' and '\twain_32', and set it to audit 'Create File' and 'Create Folder' for success.


d.  Event ID 4663 would have been triggered in the logs, 'access an object – WriteData (or AddFile)', for the new files added by the malware, and should be part of malicious tools and activity monitoring.



7.  User Accounts – There is a fair amount of discussion around the accounts used to access the PoS and FTP system(s).  Any good InfoSec program should know what accounts are doing, especially administrative accounts.  Service accounts and administrator accounts have fairly normal, predictable behavior.  It is unlikely that the malwarians used the 'Best1_user' account credentials in normal ways.  A more likely scenario is that account monitoring was not being performed.  Successful logins tell you more than failed attempts: a compromised account will not generate failed attempts.  The credentials may have been sniffed, cracked or keylogged and then used successfully; maybe even 'a user account was created', which is Event ID 4720.

a.  Event ID 4624 would have been triggered in the logs for a 'New Logon' and should be part of malicious tools and activity monitoring.
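Pulling the checklist above together, the Event ID plus process/service name pairs amount to a simple watchlist.  A minimal sketch over pre-parsed events (the field names are my own assumptions, not a vendor schema) could look like:

```python
# (Event ID, lowercase substring of the process/service name) pairs drawn
# from the checklist above; 'psexe' covers both the PSEXEC.EXE binary and
# the PsExec service executable name.
WATCHLIST = [
    (4688, "ftp.exe"), (4688, "net.exe"), (4688, "cmd.exe"),
    (7045, "psexe"), (7045, "bladelogic.exe"), (7045, "poswds.exe"),
]

def flag_events(events):
    """Return the events matching any (Event ID, name) watchlist entry."""
    hits = []
    for e in events:
        name = e.get("name", "").lower()
        if any(e["event_id"] == eid and needle in name
               for eid, needle in WATCHLIST):
            hits.append(e)
    return hits
```

The same pairs translate directly into saved searches or alert rules in whatever log management solution you use; the point is that each item in the list above is a cheap, concrete match condition, not a vague aspiration.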

Logging won't catch everything, especially since it must be configured, but there are so many log events I was able to reproduce that it is almost laughable that Target, Neiman Marcus, Michaels and others were unable, or unwilling, to detect all this nefarious noise.  Being PCI compliant is not the same as practicing PCI compliance daily.  Compliance is one of the reasons these large breaches occur (IMHO): we are so busy chasing and filling in compliance paperwork that we fail to practice REAL SECURITY.

Well, Target, Neiman Marcus, Michaels and others: I just showed you and many others "HOW", so step up and implement Malware Management and do what many of us already know.

If you are not logging it, you are NOT doing it right.

Other well-known security tools – Many well-known security tools would have detected the files dropping onto a PoS or Windows system by default.  Tripwire, for example, would see all the new files in \System32.  Custom configuration management tools would also detect new files in key places.  Carbon Black would have seen the files, processes, Registry keys and command-line details from the malware.  The Windows Logging Service (WLS) would have caught the new processes and the command-line syntax used to launch them, as well as the service being added.  I have only named a few security solutions...  Anti-Virus software, however, would not catch anything like this until long after it was discovered.

If you collected the logs locally and ran scripts to gather log data, or used a simple syslog server to grep through the logs, you could have seen this activity.  Of course, a robust log management solution like Splunk, or one of the many SIEM solutions, would have been another great and recommended way to detect this behavior, "IF" properly configured.  Most larger organizations with compliance requirements like PCI are required to use an advanced logging solution.

Of course there are many more things we could do to adequately detect and respond to security incidents before they turn into a large "Event" that has you preparing for "Breach Notification".  So prepare now and integrate the "Malware Management Framework" into your information security program before it is too late.

Don’t be a Target!

Want to learn more?  Come to BSides Austin 2014 and take the 4 hour “Windows Logging Workshop”.  Of course watch my Blog for more on Blue Team Detection and Response techniques.


McAfee Blog:
McAfee Report:
IBM X-Force Blog:
Brian Krebs – Krebs on Security Blog:

Thursday, February 27, 2014

If you can't detect the now infamous KAPTOXA/BlackPOS malware, boy is your InfoSec program in trouble

So we are up to 20 companies affected by the KAPTOXA/BlackPOS malware. The question is: did anyone detect this easily detectable event?

"No antivirus software would have stopped the malware that attacked Neiman Marcus’ card-processing network, because it was rewritten to target the company, Kingston said. “It was very specifically designed for an attack on our systems,” he said."

First off, what is wrong with this statement? Mr. Kingston, if you are relying on Anti-Virus to catch an attack such as this APT, your information security program is failing you. AV should never be expected to detect or prevent advanced attacks like this; it is foolish to think it would, and I have news for you: AV is not designed for this type of threat... just so you know.

Malware Management anyone?
Kingston also stated that the malware had "sophisticated features making it difficult to detect". Let's look at what we know about the KAPTOXA/BlackPOS malware.

1. We will give up the fact that an endpoint was compromised and it got onto one system. I always say "Give up the Endpoint and detect things from there". Relying on, or telling Congress the malware had a ZERO (0%) detection rate by AV is well, Duh.. All APT has a 0% detection rate. That is what makes it APT and "sophisticated".

2. The malware was a memory-only resident program, or was it?  K3wl, it was good malware, as expected, and your security program should be designed for such malware and threats, you know, the stuff that has a 0% detection rate by AV.

3. The malware wrote the encrypted bits to a file located at "C:\Windows\System32\Winxml.dll". OK, any NEW file added to \System32 on your static core systems should be reviewed as part of a 'Malware Management' program. Might I point out this file was NOT a .DLL, but rather a text file? Can you say: check that files claiming to be executables actually start with 'MZ', and look for 'MZ' in files with non-executable extensions? Tripwire, BTW, would have detected this file over and over again as it did system checks due to the changes, and so would many other security solutions. Nope, AV would not have caught this.

4. The malware ran a script that pushed 'Winxml.dll' to a remote system. Yup, lots of connections to one system from all your PoS systems... Really, that is not odd? Windows logs will show these connections, but I'm guessing your IT and InfoSec folks did not configure this, nor were they looking for this condition. Oh yeah... cuz you were relying on AV.  Learn what Log Management is all about, or SIEM as the sales folks like to call it. Got Splunk?

5.  The location of the collection point was "C:\Windows\twain_32".  Why would so many systems write to the twain_32 directory?  It is used for scanners.  Do you have scanners on your PoS systems?

6. The account used to do all of this was behaving in a way that was probably not normal for this account. So you don't or can't monitor for administrative accounts being used outside their normal operation? No need, we have AV! Again, Log Management and Account Management would be prudent.

7. Might I point out that Windows logs will also capture the processes being launched. CMD.exe, PSExec and FTP were all used in this so-called sophisticated attack. These events are captured in Windows logs, but I'm guessing Process Creation auditing (Event ID 4688, or 592 on XP) was not set to capture 'Success'. Carbon Black and the Windows Logging Service (WLS) agent would have caught this command-line activity as well.

c:\windows\system32\cmd.exe, c:\windows\system32\cmd.exe /c psexec /accepteula \\<EPOS_IPaddr> -u <username> -p <password> cmd /c "taskkill /im bladelogic.exe /f"
c:\windows\system32\cmd.exe, c:\windows\system32\cmd.exe /c psexec /accepteula \\<EPOS_IPaddr> -u <username> -p <password> -d bladelogic
c:\windows\system32\cmd.exe, c:\windows\system32\cmd.exe /c move \\<EPOS_IPaddr>\nt\twain_32a.dll c:\program files\xxxxx\xxxxx\temp\data_2014_1_16_15_30.txt
c:\windows\system32\cmd.exe, c:\windows\system32\cmd.exe /c ftp -s:c:\program files\xxxxx\xxxxx\temp\cmd.txt

8. PSExec installs as a Windows service, and the logs capture when a NEW service is installed. So let me guess, your folks did not look for Event ID 7045 (or 601 on XP) either, a Security 101 item in a good Log Management program. In the quantities seen with Target and Neiman Marcus, this is really a DUH moment.  Oh wait... maybe the fact that Neiman Marcus missed almost 60,000 alerts is the 'smack your forehead' moment.

"The 59,746 alerts set off by the malware indicated “suspicious behavior” and may have been interpreted as false positives associated with legitimate software. The report, prepared for the retailer by consultancy Protiviti, doesn’t specify why the alerts weren’t investigated."

9. Network traffic analysis, often called NetFlow, would have seen odd traffic in a static PoS environment, and the SMB (port 445) and FTP traffic would have set off someone's spider sense, if in fact they were doing what a good InfoSec program does: look for nefarious behavior. Baseline what is normal and detect what is not.
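The 'MZ' check called out in point 3 is trivial to script. A minimal sketch (the extension list and helper names are my own illustrations):

```python
# Extensions that claim to be Windows executables -- an assumption,
# extend as needed for your environment.
PE_EXTENSIONS = (".exe", ".dll", ".sys")

def is_pe(data):
    """Real Windows executables start with the two-byte 'MZ' DOS header."""
    return data[:2] == b"MZ"

def suspicious(data, filename):
    """Flag both mismatch directions: a claimed executable that is NOT a
    PE file (like the fake winxml.dll, which was really a text file), and
    a PE file hiding behind a non-executable extension."""
    claims_pe = filename.lower().endswith(PE_EXTENSIONS)
    return is_pe(data) != claims_pe
```

In practice you would read the first two bytes of each new file landing in your static directories (\System32, \twain_32, etc.) and alert on any mismatch.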

It is clear to me the memory component was sophisticated; it grabbed credit card numbers and encrypted them on disk. But that is about all that was sophisticated about this malware. Oh, and the fact that they encrypted the stolen CC numbers is just ironic.

We need to quit chasing compliance and start doing REAL security engineering to detect and catch these types of attacks. All the data was present; I, or many others in our field, would have caught this long before these became the massive breaches they are.

So, Mr. Kingston (Neiman Marcus) and Mr. Mulligan (Target), your information security programs, though expensive, do little to stop the real threats facing all of us today.  You might consider hiring InfoSec people who know how to defend, and spending less time on compliance and more on Detection and Response.

#InfoSec #HackerHurricane #KAPTOXA #BlackPOS

Saturday, February 15, 2014

(N) Target warned about risk to PoS due to new Malware months before breach

It is being reported that Target was warned by their security staff, based on reports they received, that new malware was targeting PoS systems. Target was in the midst of upgrading their PoS at the time and brushed off the warning. I wonder if the person who blew off this warning still works there?

Article that Target knew of risk

If this is true, and it most likely is, then Target security and/or their corporate security mentality is even more ineffective than we already believed. The business side of most corporations accepts risk when handed reports of 'we are vulnerable to XYZ'; it's normal risk-acceptance practice. What InfoSec professionals often can't answer is the question that always gets asked: "What is the probability, and what would the impact be?" Yeah, yeah, there are formulas for this, but as we can all guess they would NEVER have calculated out to this amount of loss and cost, or if they had, management would never have believed the numbers. I also think Target InfoSec had no idea how to defend their network, or what they actually needed to defend against such a malware impact. Maybe they did, and maybe they gave 'up yours' management a budget and purchase request, but not likely.

The Malware Management Framework

This is why I promote the 'Malware Management Framework'. We no longer have a choice; we MUST manage malware threats just like we manage vulnerabilities. In the case of Target, or any other company notified of the potential PoS malware, their InfoSec team should have analyzed the data about the malware and its potential impact on their own systems, and then determined whether they could detect and respond to such an event. If the answer was 'NO', then IT/InfoSec management should have made a statement nothing short of: "We must address this or the worst WILL happen; if we don't, I/we can no longer work for you." This is where InfoSec must move: putting our foot down when such obvious vulnerabilities threaten our companies, or finding another job. Face it, if you don't, you're probably going to get thrown under the bus anyway.

Every company should do what I call a "Detect and Respond Assessment". This type of assessment tests the exact kind of impact Target was facing. Take a host, assume it is compromised, give us admin credentials, and let us see what we can do (non-destructively), how far we can get and what we can access. Your goal: detect what was done, what was touched, and the behavior exhibited during the test. This is not a PenTest; it is a faster, cheaper, much more effective test of your Detection and Response abilities once a host is compromised, which we all know is inevitable. You WILL get compromised; how fast you detect the compromise needs to be "The New Normal" for InfoSec programs moving forward.

Ask me how, we will discuss it at BSidesAustin 2014.

#InfoSec #HackerHurricane

Tuesday, January 21, 2014

How to detect a CryptoLocker type attack - FAST

There is a new pest for InfoSec to fight, and it is called ransomware. Though not new, it has become far more invasive and destructive for anyone who has experienced it.

The success of CryptoLocker has spawned new variants (Prison Locker, Power Locker, etc.) that will further spread the terror to companies, people, government agencies, police departments and anyone else.  It will only get worse, as CryptoLocker proved there is SERIOUS money to be made with this type of attack.

So what do we do? How do we defend against it? How do we detect such annoying and destructive attacks?  How do we know which directories and data to restore?

Anyone who hears my presentations, soapbox statements and discussions knows I feel prevention is a dead idea. If you still think you can prevent all this hacker stuff, think again. The Bad Actors have proved over and over again that they win, and the Good Actors lose, with their tails between their legs in defeat, or worse, fired.  Just ask Target, Neiman Marcus, or the six other retailers yet to be named that have also fallen to the BlackPOS hack.

CryptoLocker now has three known variants and was rewritten from C++ to C# to improve the tool.  This is bad news for InfoSec Blue Teamers like myself.  We must evolve faster than the malwarians update, morph or create new variants.

Or do we?

I am an advocate of Detect and Respond, and less of Compliance and Prevention. I personally believe compliance is why the hackers are winning and making a fortune. We chase compliance audits, the paperwork, the check boxes, and in the process fail to do real security. Necessity is the mother of invention; you can't stop clever people from trying to compromise your systems, unless of course you just disconnect from the Internet...

I often tell people, if you are not logging everything, you are doing it wrong.

So how do you deal with this type of destructive malware attack?  You can block email attachments, but that won't stop them as they offer up scripting on compromised websites that will execute the script to call the loader to download and install the malware via the browser.

So what do we do?

Detect and Respond is the only answer!  You can actually detect a CryptoLocker type event pretty easily, but few practice this particular type of defense.  'Times are a-changin' and so should you, so here is how you can detect a CryptoLocker type attack.

Logging to the rescue - View of a typical CryptoLocker event.  Notice how quiet it is to the left of the event.

Step 1 - Force Advanced Auditing

The first thing you must do is force your clients to use the new Advanced Auditing features of Windows Vista, Windows Server 2008 or later operating systems.  You will find this setting under Security Options on your local system or in Group Policy.  Don't worry, for older XP and Server 2003 systems the older auditing works fine for this event.

Once you have enabled this setting, you now can set the new Advanced Audit Policies in Windows Vista, Server 2008 or later operating systems.

Step 2- Enable Object Access - File System to SUCCESS

You only need success for this setting, as you want to know which files were changed, not which changes failed.

This enables Event ID 4663, which will add an event to the Security Log any time a file is deleted or written to.

Now you have to decide which directories to audit.  Unfortunately, this must be enabled on every parent directory you want to monitor on the local system, manually or by script; sorry, no GPO for this that I am aware of.  That means every user's "My Documents" folder, data drives, and file server shares.  This is a local setting (on the actual computer), so you will need to visit each system remotely or in person, or use an automated way to enable File Auditing for the specific directories.

Step 3 - Enable File Auditing for a directory you want/need to monitor.

On the local system you will navigate to the parent directory and add auditing.  Here is how to do it manually:

1.  Select a folder
2.  Right-click - Properties - Security - Advanced - Auditing
3.  Edit - ADD - EVERYONE - OK

It will bring you to the following screen:

4.  Select - "This folder, subfolders and files" (the default)
5.  Check - Create files / write data
6.  Check - Create folders / append data
7.  Check - Change permissions
8.  Check - Take ownership
* deselect everything else unless you have already set something and need it

When you select "OK" and apply, it will grind through all the folders and files, enabling the file auditing.  This places minimal burden on a file server with today's systems, but by all means test it yourself.

Now, when a CryptoLocker type event occurs, your logs will capture it.  If you are using a logging solution like Splunk (my personal and professional recommendation), you can see the events and set up alerts to trigger when a threshold outside the norm for your users is reached.  I suggest "> 250 events per hour" as a place to start; adjust accordingly for your unique environment.

From the graph above and the view below you can see 3600+ events occurred and in my case an email and SMS message alerted me to the event allowing me to take immediate action.

In addition, the results tell you which user (WHO) and which system (WHERE), along with all the files (WHAT) that were affected, and therefore what to restore.
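The threshold alert described above boils down to a per-user, per-hour count of 4663 events. A minimal sketch, assuming the events are already parsed into user/system/hour fields (names are illustrative):

```python
from collections import Counter

THRESHOLD = 250  # events per hour -- the suggested starting point above

def alert_buckets(events, threshold=THRESHOLD):
    """Count Event ID 4663 records per (user, system, hour) bucket and
    return the buckets exceeding the threshold: the WHO, WHERE and WHEN
    of a mass file-change event."""
    counts = Counter(
        (e["user"], e["system"], e["hour"])
        for e in events if e["event_id"] == 4663
    )
    return {bucket: n for bucket, n in counts.items() if n > threshold}
```

A Splunk alert or scheduled script doing the equivalent count is all it takes to turn the 3600+ event spike shown above into an email or SMS within the hour.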

Blue Team Defender doesn't get any better than this!

Try it out on a test system and use Splunk's FREE Splunk Storm cloud offering to tweak your results and get to know Splunk.

This detector also works for any current or disgruntled employee who mass deletes or changes files.  On servers holding large amounts of data that many users can access and potentially alter, a CryptoLocker-type event can cripple your organization, or at a minimum create a minor to serious outage, so enable auditing there first.


#InfoSec #HackerHurricane #Splunk #CryptoLocker

Tuesday, December 31, 2013

(humor) The NSA is coming to town

Year-end humor for everyone: 'The NSA is coming to town', a spoof of the Christmas classic.

NSA Christmas song video

Happy New Year everyone!!!!

2014 is year 5 for BSidesAustin, Mar 20-21

#InfoSec #HackerHurricane

Tuesday, November 19, 2013

Austin ISSA Malware Discovery training a HUGE success

Last Friday we held a Malware Discovery training "From Joe to Pro, how to discover malware in your environment" for the local Austin ISSA chapter.

For an all-day event it went pretty quickly from my perspective as the trainer, and the feedback was GREAT! We received so many great comments that we will be holding another training event just before BSides Austin 2014, on March 19th.

In addition, we have been asked to hold the training in Dallas on Jan 31st for our local NAISG, ISSA, and InfraGard folks and other invited guests.

What made the training really cool was the lab infrastructure, graciously sponsored by Rackspace! These bad boys made the training smooth and the exercises fast! How fast, you ask? Well, in developing the labs we used an Amazon AWS Windows 2008 R2 server, and the Hash_Master scans took roughly 21 minutes on the AWS server. On the sponsored Rackspace servers it took.. wait for it....... 5 minutes! Yup, five whole minutes to scan the entire disk. Since we had to do this 3 times, it was an impressive improvement.
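
For the curious, a whole-disk hash scan like the one used in the labs boils down to walking the filesystem and hashing every file. The sketch below is a generic Python approximation of that workload, not Hash_Master itself.

```python
import hashlib
import os

def hash_tree(root, algo="md5", chunk_size=1 << 20):
    """Walk a directory tree and return {path: hexdigest} for every file,
    reading in 1 MB chunks so large files don't exhaust memory."""
    digests = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            h = hashlib.new(algo)
            try:
                with open(path, "rb") as f:
                    for chunk in iter(lambda: f.read(chunk_size), b""):
                        h.update(chunk)
            except OSError:
                continue  # locked or unreadable files are skipped, not fatal
            digests[path] = h.hexdigest()
    return digests
```

Scans like this are I/O-bound, which is why the faster disks on the sponsored servers made such a dramatic difference.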

Here is the screenshot of our configuration from the Virginia region.

So cloud providers are not created equal, and Rackspace has my appreciation for their performance, ease of use, and all-around awesomeness!

So if you are in Dallas Jan 31st, sign up for the training and come see how to discover malware like a Pro!

For more information, visit our Training page at:

Mi2 Security Training page

#InfoSec #HackerHurricane #malware

Friday, November 8, 2013

Like natives, InfoSec needs to become more hunters, less gatherers

Today we are faced with an ever-increasing threat of advanced malware and the attacks associated with it. Compliance has created a gatherer's mentality in Information Security, and it is no longer adequate to defend our tribe.

We must move toward a hunter's mentality and seek out the bad stuff in order to protect our tribe. We must detect and respond to threats and hunt them down, because they are sneaky predators looking to take your goods while you gather data, stats, and reports and check compliance reports for auditors.

Be an InfoSec hunter, not a gatherer.

#InfoSec #HackerHurricane

Friday, November 1, 2013

(O) E&Y Poll states 96% of organizations are not prepared for a Cyber attack.. Hmmmm

This is an article I have to render an opinion on, as it is a great example of 'What the heck have you been working on all these years?'

The Ernst & Young article may be found here discussed on Naked Security:

65% of larger corps cited 'financial' reasons for being unprepared for a large cyber event, as did 71% of small orgs under $10 million.  So let me get this straight: you have staff, you have bought many tools, and, since this is an E&Y poll, you most likely follow some sort of compliance framework.

In Wendy Nathers' talk at LasCon in Austin, she discussed the results of a poll in which she asked industry experts to pick the technologies they would deploy if starting from scratch at a 1000-person company... What did the list look like?  I shouted out "PCI", and the next slide said... PCI.

I was even bold enough to state that I didn't need all that technology to practice "real security," that I and another qualified InfoSec pro could do it with a few tools, if deployed properly.  Of course someone pointed out that I would never pass an audit, and he is correct.  As a former State of Texas InfoSec resource I understand compliance all too well, and after years at HP dealing with SOX, PCI, HIPAA, ISO and others, I understand all too well how compliance is a time-sucking resource pig that does not achieve what we really need to secure our companies and nation.

So why are so many not prepared for a cyber attack?  In my many presentations I ask the following question: "How many of you are confident your environment is malware free, or that once you find malware, the system is malware free?"  How many hands do we get?  Zero to one per preso!

Why is InfoSec so broken, and why do we lack confidence?  I blame compliance.  I have stated that compliance does not equal security, as too often it is achieved by an auditor saying "Check, you pass."  There is no real evaluation of how you are actually doing at security defense.  Many say to get penetration testing regularly to test your defenses.  I say "Phooey" to that as well; it proves little about whether your defenses are good enough.

Most pen testers I know will find a way in or fool a person into 'clicking on that'; just look at Trustwave's report on hacking a reporter who asked them to and knew they were coming!  There is merit in pen testing, but I feel most people, say 96%, will fail the pen test.  Why?  Because of the way we currently think about Information Security: that implementing a compliance framework like PCI will make you secure enough.  Yet almost everyone is getting popped, and they have some basic security framework in place.

"Real Security" is a dirty in the trenches kind of work.  HackerHuntress stated people didn't like Blue Team jobs because it is "hard" and I said "No it's not"...  We talked some and agreed in the end it is management and lack of trained staff that can do what I and others I know that are complete defenders can do.  Maybe we just don't know how, or lack confidence to defend all that is good.

We don't need to train the users and create more employee awareness programs, as the E&Y article indicates.  We need to teach 'Real Security' to the in-the-trenches blue team defenders employed at many, if not most, companies.  We need to teach them how to actually detect and respond to a cyber event of any size, and do so at the speed of business, so they can move on and get back to defending the network.  And the policy statement... Really?  Did E&Y not read that employees will disregard company policies where BYOD is involved?  We already know they surf non-business-related sites on work systems because they can.  What makes anyone think policies will prevent anything?  They are guides on how to do things, or how a person will be reprimanded if caught.  Policies are regularly broken, and the Internet has become an entitlement for most employees these days... Take it away and see what happens, I dare you!

This is why I do presentations on malware and logging, and why I challenge people at talks: to inject some thinking, to get people asking, 'Is there another way?'  Thawt Leadership, I think it's called ;-)  I share what I know about logging and malware at local ISSA half-day and all-day events, and I present at many cons, all to educate and share the love and a new way of thinking.

Most people I talk with do not have basic Windows auditing tweaked to actually record the events needed to detect a cyber attack of any kind.  If they do, they have not refined their audit rules and are not alerting via email on real, actionable events.  Nor do they monitor well-known locations for malware or suspicious changes to a Windows system and send those to the logs.  For example: how many of you have enabled the Advanced Security Auditing 'create files' property for one or more Windows directories (Windows, System32, Drivers, WBEM) so that new files, not files replaced by Windows Update but new files like malware, are recorded and sent to you via email by your logging solution?  Implement and refine this feature alone and you are well underway to detecting a small to large cyber attack!  And don't leave out actually enabling the Windows Audit Policy, as it (Yay Microsoft) is off by default, and record success for privileged items and others, of course.
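
As a rough illustration of what the alerting logic looks like once those 'create files' events (or periodic directory sweeps) feed into a tool, here is a hypothetical Python sketch that baselines well-known directories and reports files that appear later. The watched paths are just the examples named above, and a real deployment would use your logging or endpoint solution rather than a script.

```python
import os

# Well-known malware drop spots called out in the post (examples only).
WATCHED = [r"C:\Windows\System32", r"C:\Windows\System32\drivers"]

def snapshot(dirs):
    """Record the set of filenames currently present in each watched
    directory; directories that don't exist are simply skipped."""
    return {d: set(os.listdir(d)) for d in dirs if os.path.isdir(d)}

def new_files(baseline, current):
    """Return {directory: filenames} that appeared since the baseline
    snapshot was taken; these are the candidates to investigate."""
    return {d: current.get(d, set()) - baseline.get(d, set())
            for d in baseline
            if current.get(d, set()) - baseline.get(d, set())}
```

Run `snapshot(WATCHED)` on a known-good system, store it, and diff against later snapshots; anything in the result that Windows Update didn't put there deserves a look.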

Logging is HUGE for being prepared for a cyber event of any size.  It can detect the behavior of a malwarian or bad actor reaching beyond a compromised system.  It also allows defenders to report on who did what, where, and when, but not why unless you ask them.  If you also monitor key locations across your Windows systems for file additions or changes, you can detect odd files; a file spreading from one system to many is also suspicious and can trigger an email alert if you have a solution that can do this, like BigFix, Tanium, or others.

We also have to give up on spending tonnage of $$$$ on protecting the endpoint.  It WILL get popped if you allow users to surf the InterWebbings without strict controls.  Sites serving up malware are all over, and the majority are legitimate websites that have been compromised.  No, FireEye will not prevent all of this threat; what about thumb drives?  Or users on their company laptops surfing outside the company, when not protected by your proxy solution like FireEye?  The endpoint WILL get popped, so InfoSec really needs to shift budget and focus toward Detect and Respond and away from prevention.  Start thinking like hackers and be a detective, not just a preventive, InfoSec program; it will serve you well and prepare you for a cyber event of any size.

So I leave you with this to consider...

1.  What is 'Real Security' to you?
2.  Do you have a robust logging solution in place?
3.  Do you alert to the items I stated above?
4.  Have you attended a local BSides event to interact with the people in the know?
5.  Do you believe you have the people that can learn these tricks and skillz?

Or do you just believe compliance will get us there?

Let me know your thawts at the next Con.

#InfoSec #Logging #Malware

Monday, July 8, 2013

(I) Cyber-Ark Threat survey says 51% of companies think they are currently compromised

Here is another report by a security tools company that has some interesting data: 'Cyber-Ark's Global Threat Landscape Survey - June 2013'.

51% of companies think they have or had an active compromise going on.. Hmmm

Later, the report states that a rather high number of companies can detect 'attacks' in minutes or hours. An important distinction here: an attack is NOT a compromise. The question should have been, "How long would it take you to detect a compromise?"

There is a significant difference between 'attack detection' and 'compromise detection'. Your goal should be minutes or hours to detect a compromise, as detecting attacks is almost worthless given the sheer quantity of noise we all receive from the Internet. The recent Verizon DBIR and Trustwave reports clearly show an average of 210 days to detect a compromise, and the notification of compromise usually comes from outside the company! In addition, less than 5% of companies could detect a compromise in hours or days. Those reports are believable; I am not sure Cyber-Ark asked the right questions.

Companies that create these reports need to ask the right questions to give participants real, actionable information. Not 'Wooo Hooo, I can detect an attack fast', when in fact they clearly cannot detect the more important compromise.

The fact that 51% indicated they are or have been compromised again points toward Detect and Respond as where your InfoSec efforts should focus, NOT prevention, as clearly buying security tools for prevention is NOT enough.

Cyber-Ark Advanced Threat Survey


Thursday, June 6, 2013

(I) Calling for a "Malware Reporting Standard"

So what is a "Malware Reporting Standard"?  In short, a consistent way to report data and information about malware, usable by everyone and by any tool: the bits of malware information that all of us Information Technology and InfoSec professionals need to enter or import into the myriad security solutions or scripts we use.

Why is this an issue?  Have you ever read virus descriptions from Sophos, Securelist/Kaspersky, McAfee, and others and tried to glean some data to enter into a security tool or search script?  Have you read the Mandiant APT1 report with the IOCs (MD5s) listed in the appendix document, the Kaspersky Red October report with the MD5s listed, or even the Kaspersky WinNTI report, which lacked any Indicators of Compromise (IOCs)?

So what do we need?  We need all vendors and malware researchers to provide and report data about malware in a concise format that makes it easy to identify and consume the valuable details littered throughout the above-mentioned reports and virus descriptions.

For many security solutions, the following information is needed to create an analysis we can use to detect anomalies:

  • Filename
  • Path found
  • File extension
  • MD5/SHA1
  • Any Digital Signature info
  • Dates
  • Registry entries for Windows systems
A security professional does not necessarily need all the details to perform or set up an analysis.  The path or location where the file was found and the file's extension are fantastic information for setting up an analysis for anything odd.  For example: look for anything in location \XYZ with the extension .ABC found in the last 24 hours, give me the SHA1 or MD5 of each hit, and maybe compare it to the MD5 or SHA1 of the known-bad IOCs (if necessary).  This is a better method, as automated scans can now look at a few files and compare them to an IOC list (if necessary), versus checking every file on the system against an IOC MD5 list that is growing daily and will soon be, if it is not already, out of control and unusable.
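
Here is a minimal Python sketch of that analysis: find files with a given extension modified in the last 24 hours, hash them, and compare against a known-bad IOC list. The IOC set and paths are hypothetical; real tools do this at scale, but the logic is the same.

```python
import hashlib
import os
import time

def recent_suspects(directory, extension, ioc_md5s, window_hours=24):
    """Find files with a given extension modified in the last N hours,
    hash them, and flag any whose MD5 matches a known-bad IOC set.
    Returns a list of (path, md5, matched_ioc) tuples."""
    cutoff = time.time() - window_hours * 3600
    hits = []
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if not name.lower().endswith(extension.lower()):
            continue
        if not os.path.isfile(path) or os.path.getmtime(path) < cutoff:
            continue
        with open(path, "rb") as f:
            digest = hashlib.md5(f.read()).hexdigest()
        hits.append((path, digest, digest in ioc_md5s))
    return hits
```

Note the efficiency argument from the paragraph above: only the handful of new files in the watched location get hashed, rather than every file on disk.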

I am well aware that anti-virus companies track unique malware (with no alert triggered to the user) that is purposely kept secret.  Whatever the reason it is not disclosed to the end-user, legal, law enforcement, or specific requests by customers not wanting any details of an identified malware released while investigations are ongoing, we do not get to see these details that could help protect us; we are denied!  There is no reason AV companies cannot publish certain minimum bits and still keep the details secret.  AV companies can add the following, as a minimum, to a monthly or quarterly report/list for the secret 'Shhh, don't alert' malware items:

  • Location/path malware was found
  • File extension of the malware
  • Anything else in the Malware Reporting Standard
As part of implementing a “Malware Management Framework” (which we recommend everyone start adopting), the review of malware reports, virus descriptions, and any malware analysis details is a fundamental part of the process.  Looking for the items mentioned above within the last 24 hours, using a tool like IBM Endpoint Manager (formerly BigFix) or Tanium, would allow you to detect even the 'Shhh, don't alert' malware items as suspicious or unknown files and investigate.  If you used the "Malware Management Framework" approach, malware that dropped additional .OCX files in System32 would be obvious, since there are normally only 5 or 6 .OCX files in C:\Windows\System32, as was seen with Gauss.  All you would need to do is set up your analysis tools and/or scripts to look for this condition in the last 24 hours and alert you.
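
The .OCX example can be expressed as a simple baseline comparison. This is a hypothetical sketch; the baseline counts would come from your own known-good systems, not from any published standard.

```python
import os
from collections import Counter

def extension_counts(directory):
    """Count files per extension in a directory (non-recursive)."""
    counts = Counter()
    for name in os.listdir(directory):
        if os.path.isfile(os.path.join(directory, name)):
            counts[os.path.splitext(name)[1].lower()] += 1
    return counts

def anomalies(baseline, current, slack=0):
    """Return {extension: current_count} for extensions whose count now
    exceeds the known-good baseline by more than the allowed slack."""
    return {ext: n for ext, n in current.items()
            if n > baseline.get(ext, 0) + slack}
```

With a baseline of 6 .OCX files in System32, a Gauss-style drop that pushes the count to 9 shows up immediately as an anomaly worth alerting on.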

Far too often, IOCs provided by the sources mentioned above only include information meant to be fed into their respective tool(s): Mandiant's APT1 report with MD5 hashes to be fed into MIR, just like the JIB reports that Homeland Security/FBI/InfraGard provide.  I can't use this data in my malware detection tools.  All malware researchers and vendors need to provide the industry more information!  I don't feel it is necessary to scan every file on a system against an ever-growing list of MD5s.  It is not practical with 110 million pieces of new malware detected in 2012 and growing, and 60 million already by May 2013, more than all of 2011!  This is no longer a practical approach.

The "Collective Intelligent Framework" (CIF) project is a step in the right direction, but we need to take the feeds and schema used by CIF, OpenIOC and others and standardize it.  A researcher or AV company can collect all or even parts of the bits of malware, but it must report it in a standardized manner.  Also provide the data in two formats, not just the XML to be consumed or imported by a tool, but also in CSV format like CIF supports.  Maybe each vendor can provide the same info in readable reports like we see with AV descriptions as well for easy consumption.  For AV companies, please add the data in your descriptions that match the "Malware Reporting Standard" in a consistent and obvious way to make it easy for us to consume and use. 

Aren’t we just letting the malefactors, ne'er-do-wells, and malwarians know where and what we are looking for?  Absolutely!  If we can squeeze them into a smaller and smaller target, we win, they lose.  Malware authors will have to spend more time on their warez to avoid detection.  That is a good thing if we can change their behavior and reduce their hiding spots.

Something needs to change, something MUST change if we are to get ahead of malware and improve our investigation, detection and response capabilities.  Support the CIF schema as the start of a Malware Reporting Standard!

Read the CIF feed config and schema here:

We will support reporting any malware bits in the format discussed; so should you!