Articles & Presentations

Sunday, December 14, 2014

(I) Regin Malware sophisticated? Maybe in function, NOT discovery




Everything you read about the Regin (Reg In or Reagan) malware indicates that it is one of the most "sophisticated" pieces of malware since Stuxnet.


Reading this irks me, since many in our industry read "sophisticated malware" and think "I can't detect this stuff". Actually you can! You just have to look, and applying The Malware Management Framework could have discovered it.

Before taking a look at how to discover Regin, we need to discuss what sophistication means for malware, as the Information Security industry really needs to change how it discusses malware. Malware sophistication should be broken into two components:

1. Function
2. Artifacts

The function of the malware, the application portion, is different from the artifacts or indicators that exist on the system. Sophisticated function does NOT make malware too sophisticated to discover.

Discovery and Function are not synonymous when it comes to malware. As a Blue Team Defender and IR person, the first order of business is to discover malware. What the malware does application wise is secondary and part of the malware analysis stage that occurs after malware is discovered.

Malware is made up of files and processes. If malware exists on disk, which 95% does, it can be discovered. Now that function and artifacts have been discussed and split as far as sophistication goes, let's take a look at discovering Regin, as it is NOT so sophisticated that we could not discover it.

The first clever trick Regin used is NTFS extended attributes to hide code, pointing to the Windows \fonts and \cursors directories. But the main payload was stored on disk mimicking a valid file, not hidden in any way. Regin has pieces littered in many directories, but the main payload is fairly obvious if you watch for a few indicators.

The second clever trick Regin used is storing some of the payload in the space between the end of the partition and the end of the disk; most drives have empty blocks that can hold a small partition, enough for malware to be stored. You would have to use disk utilities to find an odd partition area.

What we are after in malware detection is some indicator that tells us a system is infected. The goal is to find one or more artifacts, not necessarily every artifact, since once you find an artifact it will lead you to more details when you begin detailed analysis. The goal of malware discovery is to discover something so that you can take some action. You can do nothing (not advised), you can re-image the system, or you can find enough of the malware, what we call the head of the snake, and cut off the malware's head to clean the system enough to move on. A lot of organizations do not have the resources to do full analysis up front, and in many cases business and management just want you to clean it up and move on.

Regin drops the initial malware Stage 1 & 2 into the following directories:

• %Windir%
• %Windir%\fonts
• %Windir%\cursors (possibly only in version 2.0)

The main Stage 1 payload is a driver (.SYS):

• usbclass.sys (version 1.0)
• adpu160.sys (version 2.0)

This driver is the thing you should be able to discover; according to the reports, it is the only malware component that is visible. The driver is added to the main Windows directory, while the other components use the first clever feature, stored in NTFS Extended Attributes in other directories. Discover this driver and you have found the head of the snake!
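Here is a minimal PowerShell sketch to hunt for that driver (the file names come from the reports above; everything else is illustrative, so tweak for your environment):

Sample Regin driver hunt
----------------------
# Stage 1 driver names from the Regin reports (v1.0 and v2.0)
$indicators = 'usbclass.sys', 'adpu160.sys'
# Sweep %Windir% for them and show when they landed on disk
Get-ChildItem -Path $env:windir -Recurse -Include $indicators -ErrorAction SilentlyContinue |
    Select-Object FullName, CreationTime, LastWriteTime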

The malware can also store some of the code in the Registry. Regin used two keys for this purpose:

• HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\
• HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\RestoreList\VideoBase (possibly only in version 2.0)

If you use auditing like I recommend for many directories and registry keys, you could have detected the additions to the Class key, as this location does not change much.

Since malware can be stored just about anywhere in the registry that the active user can access, discovery of this part of the malware is challenging. You are usually led to additional artifacts when you analyze the malware and find a string pointing you to a registry key, but auditing keys can help you discover malicious behavior. So consider using auditing on the main keys that you find in your Malware Management reviews of reports such as Regin, Cleaver and many others to help build your detection abilities. Be sure to test the audit settings, as some keys generate crazy noise and you will want to disable auditing in those locations.
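As a hedged example, here is one way to set a Success audit SACL on the Class key with PowerShell (assumes admin rights; test the noise level before rolling it out):

Sample registry key auditing
----------------------
# Turn on the Object Access > Registry subcategory first
auditpol /set /subcategory:"Registry" /success:enable
# Audit subkey creation and value writes on the Class key
$key  = 'HKLM:\SYSTEM\CurrentControlSet\Control\Class'
$acl  = Get-Acl -Path $key -Audit
$rule = New-Object System.Security.AccessControl.RegistryAuditRule(
    'Everyone', 'CreateSubKey, SetValue', 'ContainerInherit', 'None', 'Success')
$acl.AddAuditRule($rule)
Set-Acl -Path $key -AclObject $acl  # writing a SACL requires admin/SeSecurityPrivilege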

After the initial Stage 1 driver loads, the encrypted virtual file system is added at the end of the partition instead of using the standard file system. Stage 3 is stored in typical directories and a registry key:

• %Windir%\system32
• %Windir%\system32\drivers

• HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\

This portion should be encrypted and harder to detect, but there is the registry key! So auditing is your friend here.

Hopefully this shows how even the 'most sophisticated malware' could and can be discovered if you practice the Malware Management Framework. The head of the snake was visible and detectable with Regin as well as registry keys used. Read the reports, pull out the indicators, tweak your tools or scripts and begin WINNING!

Read more on the Regin malware and use Malware Management to improve your malware discovery Skillz and Information Security Posture.

http://www.symantec.com/content/en/us/enterprise/media/security_response/whitepapers/regin-analysis.pdf

https://firstlook.org/theintercept/2014/11/24/secret-regin-malware-belgacom-nsa-gchq/

https://securelist.com/files/2014/11/Kaspersky_Lab_whitepaper_Regin_platform_eng.pdf

https://www.f-secure.com/weblog/archives/00002766.html

#InfoSec #HackerHurricane

Tuesday, November 11, 2014

(I) Powershell logging, what everyone needs to know about it





Powershell is, well, powerful. Microsoft is making Powershell the future of Windows management starting with Windows Server 2012. For us security peeps, Powershell is capable of doing a LOT of security tasks, like Ben Ten (@Ben0xA) showed at his Powershell workshop at BSidesDFW. With all the capabilities Powershell has leveraging .NET, you, and the hackers, can do just about anything!

So how do we monitor and defend against such a powerful hacker tool? Metasploit and the Social Engineering Toolkit use Powershell, as does PowerCat, a Powershell netcat tool. Powershell is used since commands can be executed and no files are dropped on disk, unless you want to, making it VERY hard to detect... Or is it hard to detect?

You log it, that's how. But alas, Microsoft does not enable what you need by default, so you must do some work to get the commands properly logged. If Powershell is executed you will see EventID 4688, but that only tells you Powershell executed, not what was executed. We security peeps need more details. But first, execute a few Powershell commands and look at the logs. See any command lines???? Nope...

If you want to improve your security posture and protect against Powershell misuse and abuse, and just plain know what the heck is going on with Powershell in your environment, give this a test drive.

1. Create an All User profile (profile.ps1) so you can enable some global variables
2. Enable two global variables so Powershell will log what is entered at the Command Line
3. Increase the size of the two PowerShell Logs
4. Gather the two PowerShell logs
5. Enable file auditing to profile.ps1 so you know if anyone changes the default profile! Most important in case the bad guys look for, modify and/or delete it.

#1. Create profile.ps1 in \Windows\System32\WindowsPowerShell\v1.0. Add something to the beginning of the profile so you know the profile is working; for example, change the drive letter to D: (assuming you have one), or echo a Reg Key or write a Reg Key entry (great to log for later).

#2. Add the following two variables to your profile.ps1
a) $LogCommandHealthEvent = $true
b) $LogCommandLifecycleEvent = $true

Sample profile.ps1
----------------------
CD D:\
$LogCommandHealthEvent = $true
$LogCommandLifecycleEvent = $true


WARNING: If you use usernames and passwords in Powershell scripts, this will show them in the logs. Of course you know to rotate these creds frequently.


#3. Increase the size of the Powershell logs so they collect a LOT of stuff. Do this by opening Event Viewer, navigating to "Applications and Services Logs", right-clicking and selecting Properties on the "Windows PowerShell" and "Microsoft\Windows\PowerShell\Operational" logs, then increase their sizes and be generous.
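If you would rather script the resize, a hedged example using wevtutil (the 512MB figure is just a suggestion):

Sample log size increase
----------------------
# Sizes are in bytes; 512MB each here, be generous
wevtutil sl "Windows PowerShell" /ms:536870912
wevtutil sl "Microsoft-Windows-PowerShell/Operational" /ms:536870912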

#4. Gather the Powershell logs however you can, through wevtutil, Powershell, or a log agent like Splunk. Below I have included a Splunk query for harvesting Powershell commands being executed, enjoy.

#5. Enable File Auditing on the profile.ps1 file for any changes or deletion so you can alert on this condition in case the bad guys modify and/or delete your default profile. Do this by right clicking on the profile.ps1 file in Explorer, selecting Security, Advanced, Auditing, Continue, Add, type Everyone, Check Names, OK and then select the following properties:
a) Create Files
b) Write Attributes
c) Delete
d) Change permissions
e) Take ownership

Now you will audit any changes, ownership changes and deletions to the profile.ps1 file. Look in Event Viewer for EventID 4663 with Accesses = DELETE and Accesses = WriteData. Test it!
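A quick sketch to pull those events with PowerShell (assumes the auditing above is working and the events are in the Security log):

Sample profile.ps1 tamper check
----------------------
# Recent 4663 events, keeping only the ones that touched profile.ps1
Get-WinEvent -FilterHashtable @{LogName='Security'; Id=4663} -MaxEvents 500 |
    Where-Object { $_.Message -match 'profile\.ps1' } |
    Select-Object TimeCreated, Message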



And there you have it. Go ahead and test your new settings, execute some Powershell commands and see how it logs unlike before, so you can catch your pentesters, hackers or nefarious admins using Powershell for more than just administration.

For you Splunkers, here is a query I threw together to monitor what Powershell Command Line parameters are being used. Filter out the good to look for the bad.

1. Add Powershell to your inputs.conf (tweak as needed)

[WinEventLog:Microsoft-Windows-PowerShell/Operational]
disabled = false

[WinEventLog:Windows PowerShell]
disabled = 0
start_from = oldest
current_only = 0
checkpointInterval = 5


2. Create a query like the following:

index=* LogName="*Powershell*" | eval Message=split(Message,". ") | eval Message=mvindex(Message,0) | eval MessageA=split(_raw,"Details:") | eval Message1=mvindex(MessageA,1) | eval Message1 = replace (Message1,"[\n\r]","!!") | eval MessageC=split(Message1,"!!") | eval Message2=mvindex(MessageC,2) | eval Message3=mvindex(MessageC,3) | eval Message4=mvindex(MessageC,4) | eval Message4=split(Message4,"=") | eval PS_Ver=mvindex(Message4,1) | eval Message5=mvindex(MessageC,5) | eval Message6=mvindex(MessageC,6) | eval Message6=split(Message6,"=") | eval Engine_Ver=mvindex(Message6,1) | eval Message7=mvindex(MessageC,7) | eval Message8=mvindex(MessageC,8) | eval Message8=split(Message8,"=") | eval PLine_ID=mvindex(Message8,1) | eval Message9=mvindex(MessageC,9) | eval Message9=split(Message9,"=") | eval Command_Name=mvindex(Message9,1) | eval Message10=mvindex(MessageC,10) | eval Message10=split(Message10,"=") | eval Command_Type=mvindex(Message10,1) | eval Message11=mvindex(MessageC,11) | eval Message11=split(Message11,"=") | eval Script_Name=mvindex(Message11,1) | eval Message12=mvindex(MessageC,12) | eval Message12=split(Message12,"=") | eval Command_Path=mvindex(Message12,1) | eval Message13=mvindex(MessageC,13) | eval Message13=split(Message13,"=") | eval Command_Line=mvindex(Message13,1) | table _time, host, ComputerName, TaskCategory, EventCode, Message, PS_Ver, Engine_Ver, PLine_ID, Command_Name, Command_Type, Script_Name, Command_Path, Command_Line




I split my raw logs versus using RegEx, as the queries are more readable in the future and by others. Powershell logs use a strange end-of-line carriage return, so consider that another Splunk Ninja Tip on parsing Powershell logs.

Use SplunkStorm for FREE and test it out!!

Happy PowerLogging!!!

#InfoSec #HackerHurricane


Monday, November 10, 2014

(I) Malware Management takes care of variants like Backoff.C!tr.spy




We all knew variants of BackOff would occur, with infections spreading to other retailers and PoS machines.

By practicing the process of Malware Management you can keep up with variants of malware as they are discovered and reports or Blogs written. Then you can tweak your scripts and tools to detect the variants.


As we expected, the malware moved from mimicking Adobe and Java to using randomly named directories and randomly named files. Remember, Malware Management is about the known and unknown, so the locations used are key; look for new things of any name and filter out the knowns.

The new variant of BackOff uses the Run key to gain persistence and launch, and uses port 443 to talk to various domains, guaranteed to change, but notice they are .ru, for you network NetFlow geeks.
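A quick hedged sketch to eyeball the Run keys for anything new (what counts as known-good will vary per environment):

Sample Run key review
----------------------
# List Run key values for the machine and the current user
Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Run'
Get-ItemProperty 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Run'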


If you are practicing Malware Management, using "Sha1Deep64 -r", a script, tool, file auditing in logs or my favorite defensive tool BigFix, you can look for any new directories and files being dropped in this location, common for all the PoS malware discovered thus far and detect the new variant fast!

Fortinet Blog on BackOff variant

Fortinet details on BackOff variant

#InfoSec #HackerHurricane

Wednesday, November 5, 2014

(I) BlackEnergy - you guessed it, more Malware Management goodness trying to act like Adobe




When is something that looks like Adobe, not Adobe? When it's Malware of course.

Like the BackOff malware, BlackEnergy also tries to fool the user or admins and hide as an Adobe application. See a pattern yet?

BlackEnergy uses similar user based areas to store the malware, but also uses the Windows structure as well!

BlackEnergy is a great example of the Malware Management Framework working yet again. There is something interesting about the malware that I may not have covered before in regards to the Windows directories that is important.

First off, yet again AppData is getting used for file drops. Unlike BackOff, which used AppData\Roaming (%AppData%), BlackEnergy uses the %LocalAppData% variable, which points to the user's AppData\Local directory, and in this case creates a directory called "Adobe" to drop the three .DAT and one .SOL files. Even more interesting, it drops five additional .DAT files into \Windows\System32\Drivers!

Why is this interesting for Malware Management? The Windows \Drivers directory contains mostly .SYS files and only 3 other odd file types. So if you were looking at five new .DAT files in your script/tool output, this should strike you as odd and warrant an investigation. On top of that, the malware dropped similar .DAT files in \AppData\Local\Adobe.

BlackEnergy used the AutoRuns Startup folder to launch the malware, a typical place to look if you are an Incident Responder or Blue Team defender. The important take-away for BlackEnergy is to focus on NEW file types that are not like what is already there, in this case anything other than .SYS files.

You should then automatically check whether any of these .DAT files are executables (starting with MZ) and thus realize instantly they are bad if they are. Using 'Sigcheck -e' is a great way to check whether a file is an executable, as many malware files are called by the launcher and named just about anything, but are indeed executables, which is bad.
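If you don't have Sigcheck handy, here is a minimal PowerShell sketch of the same MZ check (the path is the BlackEnergy drop location discussed above):

Sample MZ header check
----------------------
# Flag any .DAT file in \Drivers that starts with the 'MZ' executable header
Get-ChildItem "$env:windir\System32\Drivers" -Filter *.dat |
    ForEach-Object {
        $bytes = [System.IO.File]::ReadAllBytes($_.FullName)
        if ($bytes.Length -ge 2 -and $bytes[0] -eq 0x4D -and $bytes[1] -eq 0x5A) {
            Write-Output "Executable masquerading as .DAT: $($_.FullName)"
        }
    }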

So yet another example of Malware Management being applied to improve your security posture.


SecureList Research on BlackEnergy

#InfoSec #HackerHurricane

Wednesday, October 29, 2014

(I) Another fine example of Malware Management working - Listen up Banking folks it's Dyre!




Thanks again to US-CERT for producing a perfect example of The Malware Reporting Standard output!

Another teaching moment to demonstrate how financial organizations can use Malware Management to check their systems for possible exposure to a Phishing campaign targeting banking folks with the Dyre malware.

US-CERT points out exactly what us defenders, incident responders, and yes, the IT public need to know about the Dyre malware. Read this notice! What do you see as the takeaways?


1. Affects Windows based systems




2. Dyre is using the Windows directory for the file drop, not the AppData user structure, which is worse, as it means the malware gained a foothold. In this case dropping an executable in \Windows is odd... and randomly named too. How many programs do you know that use a random funky name.exe like this?

Ninja Tip - There are only a few executables normally found in \Windows... Explorer, HelpPane, HH, notepad, regedit, splwow64, sttray64, twunk_16, twunk_32, winhlp32, write.exe and maybe some AV client files. There are only two twain DLLs as well, and maybe some support files for your AV or other agents. Any additional .EXE or .DLL found here has a high probability of being malware! (See the sketch after this list.)

File Auditing enabled on this directory for Create files and Create folders will allow you to look for EventID 4663 to detect a NEW file drop!


3. Dyre installs a service called "Google Update Service". Clever little malwarians... Google's update services are actually named "gupdate" and "gupdatem", with a generic description of 'Google Update Service'. Sneaky, but easy to spot.

4. Remember your 'Windows Logging Cheat Sheet' and look for EventID 7045 for a New Service Installed and you catch service based malware like Dyre!

5. The keys are in the Services key in the registry. You can manually look for them, as enabling auditing under this key requires setting it for all subkeys and then filtering the noisy keys out of your logs to be effective; not hard, but it takes a little time.
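Pulling a couple of these checks together, a minimal hedged sketch (the known-good list is the one from the Ninja Tip above and will vary by Windows build; the service events are best eyeballed, since the fake name is designed to look legit):

Sample Dyre hunt
----------------------
# 1) Anything in \Windows beyond the usual suspects?
$known = 'explorer.exe','helppane.exe','hh.exe','notepad.exe','regedit.exe',
         'splwow64.exe','sttray64.exe','twunk_16.exe','twunk_32.exe',
         'winhlp32.exe','write.exe'
Get-ChildItem $env:windir -Filter *.exe |
    Where-Object { $known -notcontains $_.Name.ToLower() }

# 2) Recent 'new service installed' events (7045 lives in the System log);
#    look for 'Google Update Service' installed under the wrong service name
Get-WinEvent -FilterHashtable @{LogName='System'; Id=7045} -MaxEvents 50 |
    Select-Object TimeCreated, Message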

There you have it, another educational moment on how to detect malware.

#InfoSec #HackerHurricane

Wednesday, October 8, 2014

(I) Further proof the Malware Management Framework WORKS! The Tyupkin ATM Malware




If you were practicing Malware Management as a part of any 'good' Information Security program and you managed ATMs, you would have caught the Tyupkin malware!

The locations this malware used are known places to monitor in a Malware Management program. The following is right from the Securelist report:

1. Drops payload in \Windows\System32 (Auditing enabled, Event Code 4663)
2. Shortcut added to %AllUsersProfile%\Start Menu\Programs\Startup (ProgramData) (Auditing enabled, EventID 4663)
3. Uses the Run Key to persist (Auditing enabled, EventID 4657)
4. For net flow folks, connections on Sunday and Monday night

You can use CMD scripts, PowerShell, Python or fancy InfoSec tools to look at these locations manually; but this malware is nothing new and far from sophisticated to detect. Enable some Windows Auditing on key locations and your Security Tools and Event Logs will capture the data and alert you!
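For a manual spot-check of those locations, a minimal PowerShell sketch (the Startup path shown is the Vista/7 layout, XP differs; the 7-day window is arbitrary):

Sample Tyupkin location check
----------------------
# Anything unexpected in the all-users Startup folder?
Get-ChildItem "$env:ProgramData\Microsoft\Windows\Start Menu\Programs\Startup"

# Recently created executables in System32
Get-ChildItem "$env:windir\System32" -Filter *.exe |
    Where-Object { $_.CreationTime -gt (Get-Date).AddDays(-7) }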

Works for Linux too, just take a look at the Mayhem malware analysis from Virus Bulletin:

Mayhem – a hidden threat for *nix web servers

Want to know more about Malware Management and actionable Detection techniques? Come see my talks at HouSecCon Thurs Oct 16th and BSidesHouston Sat Oct 18th.










Tyupkin: Manipulating ATM Machines with Malware - Securelist

#InfoSec #HackerHurricane

Monday, September 22, 2014

My interview on the Security Weekly Podcast - Episode 388




You can find all the resources discussed in the interview for Episode 388 here:

#InfoSec #HackerHurricane

Sunday, September 21, 2014

Challenging ALL InfoSec people, malware discovery is not as hard as you think


After being on the Security Weekly podcast, and to address a response to an email on my previous blog, I decided to post this challenge to all IT and InfoSec people to get them more familiar with how easy, or hard, Malware Discovery can be.

I CHALLENGE YOU TO PROVE WHAT I PREACH TO YOURSELF!

To be any good at Malware Discovery or monitoring you MUST practice the Malware Management Framework which you can find here.

I used the Home Depot / Target BlackPoS malware and their variants as an example, and after some comments on my blog entry I decided to challenge the readers (YOU) to actually DO the following so you can see and experience how Malware Management works and how a little understanding of Windows goes a long way for Malware Discovery. "I don't know", or "it's too hard", is not good enough, so step up and take the challenge!

I stated to look for NEW directories and files in key directories. In the case of BlackPoS and the variants the malwarians picked on %AppData% which is \Users\<username>\AppData\Roaming along with other directories.  CryptoLocker dropped its payload in the root of %AppData%, talk about lazy and the easiest to find.

In the PoS malware they dropped files and created a NEW directory attempting to hide as Java or Adobe Flash Player. If you practiced Malware Management and understand where Windows installs user components of applications like Java and Flash, you would know that Adobe installs its products to %AppData%\Adobe with a sub-directory of \Flash Player. For Java, user components are installed in AppData\LocalLow\Oracle or AppData\LocalLow\Sun, NOT the \Roaming directory.

The fact that files were dropped in the root, and two sub-directories with executables in them were created in %AppData%\AdobeFlashPlayer or %AppData%\OracleJava, should be a red flag in the worst way. Also, any NEW executable files in the core Windows directories of \Windows, \Windows\System32, or \Windows\System32\WBEM should send IT and InfoSec peeps into research/Incident Response mode. Not to mention a NEW service being installed from anywhere under \Users is also a monumental red flag that should be investigated.

So how hard is it to monitor for malware in these directories? Again, understand we are focusing on NEW files, not changed/modified files. Patching a system modifies files, generally speaking. Installing an application, adding a role to a server or dropping malware results in NEW files, which is low noise. Change Management or Microsoft Patch Tuesday should alert you to the change coming, or your own personal system will be patched, updated or have a new application added that you can use for reference to compare to an alert.

If you are using products that can monitor these directories, remember I am suggesting this as a starting point based on the complete failure of retailer security teams to practice Malware Management. If you do practice Malware Management, which takes roughly 1 hour per week, reviews of malware reports will show you other directories, keys and services to monitor for executables and scripts. Executables are files starting with 'MZ', and of course look for scripts too: .CMD, .BAT, .PS1, etc. You can always expand what you are looking for, but start with executable type files.

If you use products like Carbon Black, Tanium, BigFix, TripWire, etc. you will have to spend some effort setting up what to monitor. In Carbon Black's case you see everything by default (way too noisy to catch anything) and you will need to create LOTS of Watch Lists that have 'AppData' in the path with the file types you want to watch. Watch Lists have to be whittled down to exclude known executables or paths (dangerous to do it solely by path). This will take some effort; there are 25 dirs in AppData, 14 dirs in Roaming and 5 in LocalLow on one of my systems, and it will vary by the function of the system and what users are allowed to install. If you are a wide-open, administrator-allowed environment, or allow a lot of open source applications, this will be tougher for user based systems. Servers and PoS systems will, or better be, very static and much easier to set up and monitor, but it still will take some effort.

Plan some significant effort to tweak tools like Carbon Black to provide actionable results that you can alert on. Tweak results to exclude known good and eventually you will get there. It is difficult to filter out the good from Carbon Black, and users will find it a challenge to initially set up. Like any defensive monitoring tool, test, test and test again by placing files that fail your monitoring to trigger the alert. Carbon Black is also not real time like TripWire; it will take 10-15 mins for results to show up in the console. Don't forget you are looking for signed and unsigned files, don't just look for unsigned. The malwarians know how to sign malware!

Now for the HackerHurricane Challenge! Focusing on your personal or work system!

First, clean up or delete any installer .EXEs from your Desktop or Downloads directory for your user. You may have an admin account that set up your system, so clean up any installers they created as well. No sense seeing those in the results unless you want to.

Second, open a command window and navigate to \Users. Type the following command:


  • Dir /a /s *.exe *.dll > All_Executables.txt
You can do this to just your user or all users, it's up to you.

Third, run the same command, changing the output filename, every day, couple of days, once a week or once a month, and compare the first scan to the current scan using a comparison tool like Notepad++.
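If Notepad++ is not your thing, the same diff works in PowerShell (the second filename is hypothetical, whatever you called the newer scan):

Sample listing compare
----------------------
# Show lines that appear in only one of the two listings
Compare-Object (Get-Content All_Executables.txt) (Get-Content All_Executables_Day2.txt)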

What do you see?

Fourth, start practicing Malware Management and look in the locations the analysis, reports or descriptions state malware is found. I am convinced you will find the number of NEW files that you don't know about is surprisingly small, if any. Imagine now a server or PoS that does not have a user doing any installs or updates. Keep in mind Target and Home Depot had not been patching their XP based PoS systems.

You can also use Sha1Deep, Sha256Deep or any other utility you want that has a built in compare option to speed up the comparisons.

  • Sha1Deep64 -r -z -o e * > Master_List.txt
To monitor for changes:
  • Sha1Deep64 -r -z -o e -x Master_List.txt > Changes.txt
After you do this challenge over a month or longer, I think you will find that Target and Home Depot completely failed to detect easily detectable malware. You can schedule this to run hourly, daily, monthly, use one of many tools or a fancy solution like Carbon Black, TripWire, Tanium or BigFix to name a few to automate this type of monitoring.

You all have been challenged, send me your comments.

* Get the Malware Management Framework HERE

#InfoSec #HackerHurricane

Wednesday, September 10, 2014

Malware Management Framework - Home Depot would have caught their breach.. easy



Here we go again with the 'Breach of the Week'... This time it is potentially larger than Target, which is the largest known retail breach to date with over 110 million affected accounts. It just happened a year ago ;-/.


So what does the Information Technology and Information Security community do about all these breaches?

We are at a crossroads in Information Security as an industry. Products are not helping you, clearly not a magic bullet, and compliance efforts like PCI obviously are not saving you either.


What we have here is a people problem. The problem is simply the people and the lack of understanding of how to do Information Security engineering. Some are better than others, but clearly management does not make security engineering a priority. It's a lack of leadership and vision, and the obvious problem facing our industry with all the breaches in the past 12 months. Use the information already available and look for it in your environment. Why is this so hard?
Say hello to "The Malware Management Framework". It should be every CIO, ISO, CSO, CISO and InfoSec manager's new mission statement.

Case in point: The Target breach was analyzed and a report published by iSight Partners with all the details needed to detect BlackPoS/Kaptoxa for any retail business. It was released... Jan 14, 2014. That was EIGHT months ago! Any retail business with IT and InfoSec people and good leadership surely would take this information and make it a priority to check their PoS systems for any sign of PoS P0wnage. Sadly, not.


Enter Home Depot, now a confirmed breach, the size and impact yet to be understood, and by the very same malware that hit Target last year. Everything Home Depot needed was at their fingertips, published, available for their leadership to step up and avoid the cluster F$@! that hit Target... As the BSides Austin shirts stated on the back, "Don't be a Target"!


So where did Home Depot, the 2nd largest retailer in the US go wrong? Where did every retailer with PoS systems large enough to have an IT and InfoSec staff go wrong? Where do most companies go wrong?


They were not practicing Malware Management. We are all familiar with Vulnerability Management, it's in compliance requirements, but Malware Management? Yes, it's basically the same thing: look at alerts, descriptions, reports and analysis from the vendors, researcher blogs and conferences on malware instead of vulnerabilities. The information you can glean is golden and will save your ASSets.

Below I have included many well known APT, PoS and even *NIX and Mac malware analyses that EVERY single InfoSec and IT team should use to start their very own Malware Management program. Using the "Malware Management Framework" will help you discover malware, or validate you're malware free. If you do find something new and unknown, like we did with the WinNTI malware in June 2012, 10 months before the Kaspersky report was published, you should publish, release or blog the details about your malware artifacts so they benefit everyone.

Case Study:

The malware associated with a breach where I previously worked is proof Malware Management works. We used data from published malware reports and analysis to tweak our tools. Initially using a script tweaked to look for the malware artifacts in locations APT was known to reside, we found the malware artifacts. This allowed us to catch the intruder in the act and block them from getting any information. Looking at the logs for the affected systems, we were able to see some items were not enabled or properly configured, and then tweak our logging to better detect malwarian behavior. This led us to produce the "Windows Logging Cheat Sheet" (link below), a direct result of our Malware Management efforts. We have made this information available to everyone for FREE so people may improve the People and Process components of their Information Security programs. It works, you just need management to allocate a few resources to actually 'Fix It', as Wendy Nather indicated we need more of in her recent blog (Idoneous Security), and she is dead right.


The "Malware Reporting Standard" that we propose everyone adopt will become obvious as you read through the various vendor malware analysis reports. What lacks in these reports is a consistent way to report malware artifacts, also called Indicators of Compromise (IOC). If creators of malware analysis reports used the Malware Reporting Standard, we would have more consistent reports and easier time consuming and using the valuable artifacts. The best examples of this format is SecurelList Virus Description details and oddly, US-CERT BackOff malware alert:

If The Home Depot leadership had any clue how to do effective Information Security, they would have avoided this breach. Clearly Home Depot is lacking at security engineering and not using the information from the recent breaches to ask themselves "Are we a Target? Should we prepare ourselves for a similar incident?". They could have avoided what could be over (just guessing) 200 million lost accounts; only time will tell. But we do know this is going to cost them a small fortune, MUCH more than improving their security engineering would have cost. Target has spent $146 million USD to date on their breach!


With the Malware Management Framework we found a previously unknown APT 10 months prior to it being published by Kaspersky, and Home Depot could have 100% avoided their breach using this same methodology. Only 6 Windows log events were needed to catch the Target and Home Depot breaches! If only they were any good at security engineering and practiced Malware Management, this could have been avoided.


This is a teaching moment for Information Security professionals worldwide, not just those that are currently under attack with Point of Sale systems and credit Cards, but for whatever is next, whatever is already there in YOUR environment!


Start practicing Malware Management before it is too late and you are the next Target or Home Depot.


#InfoSec #HackerHurricane #MalwareManagement #Breach

RESOURCES:

Below are some items to help you start using the Malware Management Framework.


Thursday, September 4, 2014

InfoSec Industry partly responsible for huge breaches, InfoSec Cons another part, you and management the last part







You know what is worse than all these HUGE credit card breaches? The fact that it is ridiculously easy to detect! All BackOff malware uses the %AppData% variable directory. Translated - C:\Users\<username>\AppData\Roaming. #InfoSecFailure

If you are not watching for NEW directories being created and/or executables being dropped here (like CryptoLocker did), then you're doing SecOps all wrong. This is one of the Top 5 locations you should monitor for malware. Enabling auditing on this directory alone would have caught these HUGE breaches.

These failures in SecOps are partly our own InfoSec industry's fault. InfoSec Cons do not prioritize talks for defense, which 90% of us attending do, nor demand defensive content, nor set aside 10-25% for defensive talks to teach our industry stuff like I stated above. #InfoSecConFailure

Please ask and demand that your InfoSec Cons set aside more time for REAL defense in their talk selection and promote that the Con wants and welcomes defensive talks to help our own industry catch up. Because we are WAY behind and falling further back with each breach that happens.

The image below is how I felt at Vegas Con week... All focused on breaking the gate, not defending the obvious gaps.






Dan Geer at the BlackHat Keynote was dead on when he said if InfoSec were a soccer game it would be 464 to 452, all offense, no defense.

Help our industry by not attending exploitation, hackery and offensive talks if your job is to defend your company. We need to evolve quickly before it is too late for us and our companies.

And yes, I have submitted 2 talks on the subject of logs and malware to multiple Cons and been turned down. There were 3 logging vendors at BlackHat, yet ZERO talks on logging for APT/PoS type attacks... Sad... VERY sad.

Under the current black cloud we are all under, you would think Cons would be begging for defensive content.

Ohh... And the other 4 of the Top 5 locations to monitor...

2. C:\ProgramData
3. C:\Windows - Explorer injection
4. C:\Windows\System32 - NEW dropped files
5. C:\Windows\System32\WBEM - #1 on my list
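To enable file auditing on one of these locations with PowerShell instead of clicking through Explorer, a hedged sketch (admin rights required; test the noise level first):

Sample directory auditing
----------------------
# Turn on the Object Access > File System subcategory first
auditpol /set /subcategory:"File System" /success:enable
# Audit new files/folders under C:\ProgramData (Success only)
$path = 'C:\ProgramData'
$acl  = Get-Acl -Path $path -Audit
$rule = New-Object System.Security.AccessControl.FileSystemAuditRule(
    'Everyone', 'CreateFiles, CreateDirectories',
    'ContainerInherit, ObjectInherit', 'None', 'Success')
$acl.AddAuditRule($rule)
Set-Acl -Path $path -AclObject $acl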

If you practiced The Malware Management Framework then this information would be obvious. Read the Kaspersky report on BackOff that proves my point.

And yes, I monitor all the above and MUCH MUCH more and share in the presentations I give.

Kaspersky/Securelist Article on BackOff malware

#InfoSec #HackerHurricane #EducationOpportunity

Thursday, July 31, 2014

The latest BackOff PoS malware is nothing new if you practice the Malware Management Framework




Proof: if you were practicing the "Malware Management Framework" and reading the analysis of BlackPoS that took out Target and Neiman Marcus, you would have been prepared for BackOff, which hit 600 retailers. Yes, 600 retailers!

SC Magazine article on 600 retailers affected by "BackOff" malware.

As Trustwave accurately concluded, this malware is not sophisticated. It is noisy and would have set off many of the alarms I have previously covered HERE and HERE and HERE.

Behavior of most malware shares many common triggers on Windows based systems, NIX too. So why isn't anyone detecting this stuff? Small shops have no staff, PoS vendors don't use their own stuff or work with larger users to monitor for malware funkiness they can pass on. Large companies don't know how, or as I often state, "They are chasing and filling out compliance paperwork and not doing enough security engineering and defensive and detective security, focused at looking for, tweaking and READING actionable alerts."

So if you had read other malware analyses, because you're practicing the "Malware Management Framework", you would have set the following items and detected the malware.

1. %AppData% - C:\Users\<username>\AppData\Roaming. If you enabled auditing of created files in this directory (Local & LocalLow too), you would have caught the malware (nsskrnl.exe & winserv.exe). Files just don't get created here normally.

2. %AppData% - C:\Users\<username>\AppData\Roaming. The directory "OracleJava" was created and "Javaw.exe" created... If you know what is normally installed, you would find this was NOT normal for your systems, let alone a static PoS system or server. This is often a trick: create a directory one or two levels deep to obfuscate the malware to look normal.

3. The HKCU Run key had values of #1 & #2 above.

4. HKLM & HKCU Active Setup keys were used, a little more sneaky, but known key to monitor.

As the Trustwave analysis shows, there was already a Snort sig that matched this malware. If you had reviewed this malware and tweaked your defenses, other than the names, this malware is the same... Proof the Malware Management Framework would help you discover new and advanced malware.

The Trustwave analysis did not discuss the behavioral aspects of the malware, how it got there, spread, etc., but the following Windows Event IDs being logged, harvested and alerted on would have detected this malware.

1. 4688 - New Process - the executables discussed above, and probably net.exe, CMD.exe and maybe PSExec.exe as they hopped around and spread the malware.

2. 4624 - an account logged in. What accounts logged in, and what accounts at what times, are normal?

3. 5140 - A share was accessed. They most likely connected to the C$ share.

4. 4634 - A share was disconnected. You're looking for the behavior of share connections.

5. 7045 - A new service is installed. Static systems don't get new services except at patch time and new installs. Change Management anyone?

6. 4663 - File auditing must be enabled on directories you want to monitor. The new files above would show up. Yes, there are ways to write to disk without Event logs being triggered in PowerShell and .NET, but this is rare.
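For you Splunkers, a hedged starting-point search across those six IDs (index, log names and field extractions will differ per deployment; 7045 comes from the System log, the rest from Security):

index=* (LogName="Security" OR LogName="System") (EventCode=4688 OR EventCode=4624 OR EventCode=5140 OR EventCode=4634 OR EventCode=7045 OR EventCode=4663) | stats count by host, EventCode | sort -count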

These 6 events would have detected this and many, if not most attacks. There are many other things to enable and monitor, but practicing the Malware Management Framework from the Target breach would have prepared you for this attack and many others.

Like I always say, "If you're not logging these, you're not doing it right".

For more logging to monitor, read the definitive "Windows Logging Cheat Sheet" I put together for Windows logging, here, for tips on what to enable, configure, gather and harvest.

Trustwave Analysis of BackOff malware

#InfoSec #HackerHurricane #malware #LogsRock #MalwareManagementFramework

Friday, July 25, 2014

Continued - Proof our industry is broken

Continuing on the Ponemon study...

The following figure shows where the interviewees get their information.



Seriously? You ask CERT and Law Enforcement for more details? If you are relying on CERT as your #1 source, I think you're doing it wrong. CERT has its place, I have used them for disclosure of a Card Key exploit, but for "New" and "Emerging" threats? Really??? The government continues to rank poorly in their audits and is not nimble enough to be at all useful until those of us doing the actual work, who are found at the BOTTOM of this list, have already found, discussed, analyzed and published information on new and emerging threats... Who found out about the Target breach or Heartbleed from the Feds' emails before another source?

Asking your industry peers what they think was 2nd. Better, but why wouldn't your own security research be #1, and the places you consume data or read be #2 & #3, which instead made the bottom of the list? Who are these people they interviewed? More on that later...

The following diagram shows the value they placed on how they keep up with the threat landscape... Ohhh Boyyyyyyy...


Casual conversation with security leaders was very important (thinking executive round tables here), followed by research from independent third parties, then independent security blogs, followed by security blogs and conferences, which only rated 4.85 on a scale of 1-11 (1 being good).

Really? Data you get from conferences only gets a 40% on how they keep up on the threat landscape? What conferences are these folks attending? Well, it's clear they are not attending; let's say it's due to lack of budget to fly to Vegas for the week-long 3-Con overwhelm known as BSides, BlackHat and DefCon. If you use Point of Sale systems, there are more than enough talks to keep you up on PoS threats... Malware analysis talks too, just not enough malware APT discovery or proper logging talks (IMHO).

Analyst reports scored poorly, coming in at 6.19; sorry Gartner and 451 Group, they don't see your value it seems. Oddly, security vendor research and vendor blogs came in even lower at 7.17 and 7.57 respectively. This is sad, as some of the BEST data you can get for Malware Management is from these reports... Did I already state 40% say APT is their #1 concern? #FAIL. Reports from 2 years ago would have led you to detect the Target breach and other PoS fails. You seriously need to adopt "The Malware Management Framework" (dot Org), people!

If APT, malware, bad JuJu or whatever you want to call unwanted software being on your system is your #1 concern, vendors reports, blogs and research are KEY to tweaking your tools or Security Operations Center to be better at Detection & Response and Incident Response.

Left turn - On Security Operations Centers...

From another Ponemon report that was recently mentioned in SC Magazine, I would actually agree with this statistic and finding, assuming the SOC is doing the defensive stuff I am talking about, which they should be. I have, however, seen some poorly skilled staff in SOCs I have assessed.

"A recent study from the Ponemon Institute revealed that companies investing in a comprehensive SOC saw a 20 percent better ROI on their security spend. These organizations saved on average $4 million more than their SOC-less peers."


Why? Probably because these people are focused on Detection and Response, the in-the-trenches people that can focus on finding bad JuJu by nefarious ne'er-do-wells, OK, Bad Actors ;-). I guess the interviewees for this Ponemon report did not read that Ponemon report...

Back to the original Ponemon report...



Roughly 48% on average were 'disappointed' with the protection of a security solution they purchased. Maybe their evaluation process, if they even have one, or their list of requirements was poor, or they didn't dedicate the resources to properly evaluate the product, if at all. I would say roughly 48% of the products I evaluate get tossed for poor performance, installation difficulty, user interface or excessive price for value, failing to pass muster. Do a better job evaluating products and quit believing the sales person or vendor pitch as gospel; take 30, 60 or 90 days and do it right! A fool with a tool is still a fool... But it was in the Magic Quadrant... Maybe that is why analyst reports scored poorly... They bought the MQ product and were 48% dissatisfied.

For some reason 80% felt Threat Modeling was essential or very important. There's not enough info on what they think this means, so I'll pass on this one. The only point I want to make here is that this was their highest score across all things surveyed, odd.

The conclusions of the report are beyond comment. Since the study was sponsored by Websense, let's just assume the conclusions leaned in their favor. I would say the conclusions from these two posts are FAR different, other than I agree: OVERHAUL your security solutions.

So who are the people interviewed?



Only 11% were not supervisors, managers, directors or vice presidents... Really? People like us that protect you only made up 11% or so? OK, I am a supervisorial type, so let's add them in to raise it to 30%. Less than 1/3 of the interviewees are the doers, the in-the-trench people?

I would venture to say if you were to ask only the 30%, with the same companies, countries and counts, the results of this survey would be FAR different. Sadly, these are the people we work for. It explains why breaches like:

Target, Neiman Marcus, Michaels, Spec's, Goodwill, PF Changs, Sony, TJX, etc., etc., etc... keep happening and take so damn long to detect. The 60% need to attend more local ISSA, OWASP, NAISG, InfraGard and ISACA meetings, attend more BSides and other local conferences, and read more from people like us that share what you actually DO need to do to cut down on breaches and the costs associated with them.

#InfoSec #HackerHurricane

Thursday, July 24, 2014

Proof our industry is broken




I just read the latest Ponemon survey "Exposing the Cyber Security Cracks: A Global Perspective" and I had to laugh. The email started out with "30% of organizations would overhaul their security solutions". My first thought: just 30%? It should be 90%, because I think only around 10% of us are good enough to catch an APT attack the report covers within an hour, or at worst a day. The average detection of a breach is 200-400 days, depending on what report you read, and in 90% of the cases you are told by a 3rd party.

If you can't detect and respond to a breach within 1-24 hours, you are doing it wrong.


So why is this funny? Well, in figure 2 of the report it lists "Advanced Persistent Threat" as the top concern at 40%... 24% say a "Data exfiltration attack". If a "complete overhaul of the system" sits at 29%, 22% say no changes needed (we are awesome), and 13% say nothing because they can't stop it... our industry is seriously broken. No way 22% can detect an APT attack within 1-24 hours. No report I have ever read of companies that have been breached has ever stated 22% of the companies surveyed successfully detected their attack within 24 hours, and we are just validating their findings...

The never ending list of breaches and the length of time these companies were infected should tell us two simple facts: 1. Companies suck at detecting and responding to breaches, and 2. Many companies still don't know they have been or are breached, or don't care, and go many many months with the ne'er-do-wells crawling around the network. My credit card just got compromised, and I suspect another retailer I shopped at will announce a breach, when someone tells them they have been breached...

APT cannot be prevented, you will get popped; it is just a matter of when you will detect it, or in 90%+ of the cases get told by a 3rd party you have been breached... Even worse, you failed to detect it and someone had to tell you.

Detection is not hard, it's just a change in mindset. A BIG change in mindset, since so much of our InfoSec industry and management believe a "prevention solution will save me", plug in the latest appliance. What will save you is people. People that know how to use tools, know and understand the limitations of the tools, admit they are not effective and retire or replace them, evolve their programs to be detective and good at Incident Response, and do things outside the box. And yes, don't spend so much time and effort on compliance. People that I wish would be trained at the 3-Con Vegas week and come back significantly better than they left... Maybe defense is not as sexy as exploitation, but it is our day to day job, and if you want to keep that job, get better at defense or you will get burned out, canned or be the scapegoat.

OVERHAUL:

Two of the best security tools I use and recommend are not marketed as security tools at all, but are #1 and #2 on any list of things I would need and insist on to defend a network (Splunk and BigFix, or equivalents). Then I would back-fill with the necessary, and focus on less costly versions of the typical tools we all use/need to be compliant, focusing on low staff impact so they can defend the network the way it needs to be defended. Stop being afraid of tossing a product that just doesn't help you; replace it with something or someone that can. 47% say they are dissatisfied with a product they implemented, 27% say not frequently dissatisfied, so some of the time they are. That is over 50% that don't like what they just implemented and are not tossing it out? Admit that your adversaries have evolved and what used to work can now be bypassed by the ne'er-do-wells, and another solution, approach or methodology is needed. Anyone practice the "Malware Management Framework"? (dot Org).

And the two top reasons to replace a security product according to the report? Downtime, 73%, and difficult user interface, 67%... How about the fact it doesn't help solve the 40% APT risk that is your top concern?









I will be attending BSides, BlackHat and DefCon in Las Vegas next month and have gone through the list of talks and discovered that the vast majority of talks are not about helping me defend against ne'er-do-wells. This is a major #FAIL in my opinion, as we are not helping educate the masses, we are just WOWing them with what the bad guys could, can, might and are doing to something you may not even have or use.

Cons need to teach more defense, more detection and response. Stuff we can take back and actually do. Lessons learned from those of us that have been there, got burned or succeeded in defending our ASSets. 30% of the 3-Con week in Vegas needs, and I quote, "a complete overhaul" to more defense and less offense.

29% in the Ponemon study of 4881 companies in 15 countries, with 10 years average experience, say they would overhaul! Yet with what? How? Where can I learn what's working? I want to learn from those like myself that have lived through and successfully defended against the worst kind of APT attack. Those who have, have much to share, but alas the 3-Con week in Vegas is weak on the defensive talks that 90% of us do day to day.

The report goes on to say 52% of companies don't invest in skilled staff... I say, find another gig; we have the lowest unemployment rate in the IT industry. Work for someone who gets it.

Seek out and demand Cons offer up more defensive tracks.

#InfoSec #HackerHurricane #VegasConFail

Thursday, July 10, 2014

(T) Filtering or trimming Windows logs the right way, NOT by Event ID







Have you ever worked with Windows logs and enabled all or a lot of auditing items and feel there are way too many events or noise?

Ever enabled 'Filtering Platform Connection' auditing to monitor Windows Firewall connections? This auditing option will add Event IDs 5156 and 5158 to your logs, and they will quickly be in your Top 10 of all events generated. If you enable success for 'Process Creation' you will get Event IDs 4688 and 4689. These four Event IDs will probably be your Top 4 events generated.

Enabling these two auditing items will add a ton of events to your logs. While they are stored on a system's local disk you won't notice a thing, but when forwarding them to a Log Management solution you will find they add up and impact disk space and licensing. Still, they are some of the greatest events Windows has to offer, so use them!

Jedi Tip: Ever wanted to know what IPs and ports a Windows application is using? Maybe to make a change on your enterprise firewall? Use 'Filtering Platform Connection - success' auditing to see all inbound and outbound connections to and from your Windows server or workstation. You can even use this data to refine your Windows Firewall rules for allowed IPs to an application, like a security camera for example, or remote access; see my last blog entry for tips on this one HERE.
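A quick hedged sketch to pull those events with PowerShell (the message layout varies a bit by OS version, so eyeball the fields):

Sample 5156 connection review
----------------------
# Recent permitted connections: application, addresses and ports are in the message
Get-WinEvent -FilterHashtable @{LogName='Security'; Id=5156} -MaxEvents 20 |
    Select-Object TimeCreated, Message | Format-List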

So how do you enable a policy item and start collecting log data, yet filter the unwanted noise out? Most people do it by disabling auditing of the two items above or excluding by Event ID, which is a terrible way to filter or trim logs. Unless of course the Event ID is truly worthless and none of the events in that ID are useful to you or your admins or dev folks.

If you filter out or disable Windows Firewall auditing (Event IDs 5156 and 5158), for example, then you can't see all the inbound connections to your systems, remote connections by remote IP to the system, track users surfing to IPs, or outbound malware C&C requests. You would be forced to do this at the network layer, where you cannot easily hone in on a host and what process is using an IP you are investigating.

If you filter out or disable Process Creation auditing (Event IDs 4688 and 4689), for example, then you can't see the processes that have been launched on a system, or see what process called another process, like CMD.exe calling malware.exe.

Do you need to see or keep ALL of the events for the IDs just discussed? No. You can look at Process Names and Application Names that you deem normal noise and exclude them, versus eliminating by Event ID. Google Chrome Update is incredibly noisy log-wise, yet probably not needed for InfoSec or forensic investigations. You could toss out GoogleUpdate.exe or Splunk*.exe and reduce the events of the four Event IDs mentioned by 50%, give or take, saving disk and log management licensing. The image at the top is exactly this filter, before and after.

If you want to try the free version of Splunk at home or in your lab, then reducing events by tossing them out will save on the 500MB per day the Splunk eval license restricts you to. Each log solution will have different ways to filter items out or blacklist them from being collected, but never ever do it by Event ID, as you can or will lose valuable log data. Unless you are certain the Event ID and all its events are worthless.
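For Splunk specifically, filtering at the input looks something like this hedged inputs.conf sketch (regex-based WinEventLog blacklists; confirm your Splunk version supports them and test before trusting it):

[WinEventLog:Security]
disabled = false
# Drop known-noise process events for the Google updater and Splunk's own binaries
blacklist1 = EventCode="(4688|4689)" Message="(?:GoogleUpdate\.exe|splunk[^ ]*\.exe)"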

Read the definitive "Windows Logging Cheet Sheet" I put together for Windows logging here for tips on what to enable, configure, gather and harvest.

#InfoSec #HackerHurricane #LoggingDoesntSuck

Monday, June 30, 2014

(T) How to use the Windows Firewall behind a NAT router




Ever wanted to open up a port on your home firewall, but restrict it more than a NAT router allows with raw port 1234 to IP 1.2.3.4?

Restrict it to an actual application, or 1 or 2 remote addresses, not to the whole system from anywhere on the Internet? Or easily change it because of DHCP changes on either local or remote systems? Or allow a cloud service to send data back to one of your systems? Or allow a web console you want to remote into from work to check your security system?

While playing with Splunk I found the logic of the Windows Firewall behind a NAT router changes things a bit.

The Windows Firewall has 3 zones. You usually see the PopUp from your browser when you join a new wireless network on your laptop, for example. You will see 'Public' and 'Private' or 'Home' (which is also 'Private'); there is also 'Domain' for AD-attached systems, which we generally don't have at home.

When you are behind a NAT router you can ignore or disable everything but 'Private'. Once you open port 1234 to IP 1.2.3.4 on your home router via Port Forwarding, Gaming or whatever your router calls it, all traffic to the Windows Firewall is seen as 'Private', or inside your network. You can test this by disabling a 'Public' rule you have for Remote Access, for example, and it will still work, because it uses the 'Private' rule once NAT is involved.

So everything you will do will be an Inbound Rule and 'Private'. Notice in the image above I have VNC Server installed on my test system and it has 'Public' disabled. Even though I remote to it over the Internet from a public address, once NAT port forwarding takes over it becomes a 'Private' rule.

Also notice that I have a Splunk Web rule that is also 'Private'. All I have to do now is craft my NAT router to pass any port range I want to the IP I want, and then use the Windows Firewall to further restrict it by the remote IPs allowed to the computer or the specific application. This allows me to refine my port hole in my NAT router to only the systems I want to remote from, and to only the actual application (VNC Server.exe) on that port.
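For example, here is a hedged netsh sketch of such a rule (the program path, port and remote IP are placeholders for whatever you actually run):

Sample restricted inbound rule
----------------------
netsh advfirewall firewall add rule name="VNC Server (restricted)" dir=in action=allow profile=private protocol=TCP localport=5900 remoteip=203.0.113.25 program="C:\Program Files\RealVNC\VNC Server\vncserver.exe"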

If your home IP changes due to DHCP renewals, then use a Dynamic DNS provider so you can refer to it by name instead of IP. This will require you to run a small utility on your system to send any IP changes to your Dynamic DNS provider.

Play with it and let me know any other tricks you might use.

#InfoSec #HackerHurricane

Monday, March 3, 2014

(T) A look at WHY the now infamous Credit Card breaches should have been detected and how YOU can detect these types of attacks

We are all too familiar with the Credit Card breaches that hit Target, Neiman Marcus, Michaels, Marriott, Holiday Inn, Sheraton, Westin, Renaissance and Radisson, possibly Sears and other retailers yet to be named.

Recently some details of the malware were released by iSight, McAfee, X-Force and others.  I took a look at the details around BlackPoS and Kaptoxa, as I could not believe organizations the size of Target, Neiman Marcus, and Michaels, not to mention several hotels, were unable to detect such an attack.  So what would it take to “Detect and Respond” to such an attack?

First off, this malware was neither sophisticated nor overly stealthy.  It looks and acts like most malware and APT you would find, see or analyze.  As part of my “Malware Management Framework” program, I review malware such as this to see if I could or would detect this attack or a similar one, and compare it to what I already see or know in order to tweak my detection capabilities.

First, let’s look at the diagram McAfee had in their report:
As we can see there are multiple Point-of-Sale (PoS) systems connected to the ‘Exfiltrator’ system.  This system apparently sent the CC files via FTP to a compromised Internet server that the criminals then harvested the credit card data from.  By the way, FTP is often used to send transaction reports off the PoS terminals to a central system, so FTP may be normal traffic to many.

It is not clear from the report(s), but one would assume, for the sake of having a starting point, that this ‘Exfiltrator’ system, let’s call it ‘Patient 0’, is where the attack against the PoS systems originated once the malwarians were in.  We know from Neiman Marcus that the malware was deleted every day and reinstalled after a PoS reboot.  This suggests there was a ‘Patient 0’ system that re-infected the PoS systems as they came back up.  That means there would be some type of traffic from ‘Patient 0’ to each PoS system, and a LOT of it if this occurred daily; according to reports, almost 60,000 events (the articles say alerts, but I don’t believe that).

With these assumptions and details we can build a pattern of the noise such an attack would have made, and what we should have, or could, do to detect and respond to such an event.  So let’s take a look at how to detect and respond to this type of attack.  I will use Windows 7 rather than Windows XP since most of you are running the latest Windows operating system (hopefully), but everything discussed still applies to Windows XP, just with different Event IDs.  I will be referencing the latest Event IDs; if someone converts them to XP/legacy IDs, I will gladly post them.

TRAFFIC:

1.  FTP – The FTP traffic that went from the PoS system to ‘Patient 0’, or vice versa, should have been detected.  Even if we assume FTP is normal traffic from the PoS, or to the PoS from a central system, and even if the proper central FTP system was also ‘Patient 0’, this should have been detected.
a.  Transaction logs are scheduled and deterministic.  It is highly unlikely that the malwarians used the exact same schedule and the exact same central system.
b.  NetFlow should have been used to detect this type of traffic.  FTP is in the clear, but the data sent by the malware was encrypted and would have looked like gibberish.
c.  The traffic outbound from ‘Patient 0’ should have been caught as well, as it was going to an untrusted destination.  All your FTP servers should have known IPs that they connect with, and any variance alerted on.
d.  Any FTP server should be configured to log all logins and activity.  ‘Patient 0’ should have been no different.  This system received FTP transfers from many PoS systems, and that behavior should have been flagged as abnormal outside the normal transaction log upload window.

2.  Port 445 – Windows communicates over port 445 for shares that are mapped from one Windows system to another.  The malware mapped drive “S:” to another Windows system, which the malwarians then used.  This traffic should also have been known in a static environment, and thus any variance alerted on.
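
To picture the noise this makes, the kind of drive mapping described in the reports is a one-liner, and each end of it generates the share and logon events covered below (the IP and share name here are placeholders; ‘Best1_user’ comes from the published reports):

    :: Map, and later remove, a drive over port 445 - both ends log events
    net use S: \\10.1.2.3\data /user:Best1_user *
    net use S: /delete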

3.  Logs – Windows logs, if configured, would have captured all kinds of noise from this malware.  If auditing was enabled on the PoS systems or ‘Patient 0’, noise would have been made, a LOT of it, and collection of known malicious behavior should have detected it.  Windows 7 Advanced Auditing should be set to ‘success and failure’ for many items; see a future blog article for what to set, or better yet come to BSides Austin for the Windows Logging Workshop.  For Windows XP, all audit logging should be set to ‘success and failure’ and collected locally, or better yet sent off to a syslog server or SIEM solution.  For the sake of argument, and the sheer noise it produces, let’s assume the Windows Firewall is off, so no Packet Filtering logging is enabled.
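
As a starting point, here is a minimal sketch of enabling the relevant Windows 7 Advanced Audit Policy subcategories from an elevated prompt; this is only the subset needed for the Event IDs below, not my full recommended list:

    auditpol /set /subcategory:"Process Creation" /success:enable /failure:enable
    auditpol /set /subcategory:"Process Termination" /success:enable
    auditpol /set /subcategory:"Logon" /success:enable /failure:enable
    auditpol /set /subcategory:"Logoff" /success:enable
    auditpol /set /subcategory:"File Share" /success:enable /failure:enable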

a.  Execution of the FTP client should have been detected.  Event ID 4688 triggering on ‘ftp.exe’ should be part of malicious tools and activity monitoring.
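
A quick local check for this is the built-in wevtutil.  This sketch matches the exact ftp.exe path in the 4688 ‘NewProcessName’ field; the same query covers items b and 4a below if you swap in net.exe or cmd.exe:

    wevtutil qe Security /f:text /c:20 /rd:true /q:"*[System[(EventID=4688)] and EventData[Data[@Name='NewProcessName']='C:\Windows\System32\ftp.exe']]"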

b.  Connection between systems should have also been detected.  Event ID 4688 triggering on ‘net.exe’ should be part of malicious tools and activity monitoring.

c.  The PoS system connecting to a share should also have been detected.  Event ID 4624 with ‘Logon Type 3’ (network logon) should be part of malicious network activity monitoring.
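
Again as a wevtutil sketch, this filters Event ID 4624 down to just the Type 3 network logons:

    wevtutil qe Security /f:text /c:20 /rd:true /q:"*[System[(EventID=4624)] and EventData[Data[@Name='LogonType']='3']]"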

d.  The system the PoS connected to should have detected the share connection.  Event ID 5140 would list that the share ‘Test’ was connected to and the IP address that made the connection, and should be part of malicious network activity monitoring.

e.  The system the PoS connected to should also have detected the share disconnecting.  Event ID 4634 would list that the ‘Logon Type 3’ session was ‘logged off’ and should be part of malicious network activity monitoring.

4.  CMD Shell executed – A command shell was used on ‘Patient 0’ as well as each PoS system.
 
a.  Event ID 4688 triggering on ‘cmd.exe’ should be part of malicious tools and activity monitoring.

5.  PSExec executed – The use of PSExec is very suspicious and should not be used by anyone except an administrator.

a.  Event ID 7045 triggering on ‘PSEXECSVC.EXE’ or ‘PSEXEC.EXE’ should be part of malicious tools and activity monitoring.
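
Event ID 7045 lives in the System log, so a rough sketch of pulling recent service installs and spotting the PsExec service looks like this:

    wevtutil qe System /f:text /c:50 /rd:true /q:"*[System[(EventID=7045)]]" | findstr /i "PSEXESVC PSEXEC"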

6.  The Malware – The malware files that were used should also have been detected in many ways.  The logs at a minimum could/would show that something malicious was occurring.

a.  Event ID 7045 triggering on ‘Bladelogic.exe’ or ‘POSWDS.EXE’ (dum.exe) should be part of malicious tools and activity monitoring.

b.  Event IDs 4688 & 4689 would have triggered as the new, unknown application started and stopped; the names ‘Bladelogic.exe’ or ‘POSWDS.EXE’ (dum.exe) should be part of malicious tools and activity monitoring.

c.  File Auditing is not heavily used, but in static environments like a PoS system or file server, or on every administrator system you have, it would be easy to set up File Auditing on a handful of directories, or just exclude the noisy ones.  New files being added to \System32 would have triggered on the files dropping and on the credit card data file ‘winxml.dll’, if File Auditing had been enabled.  Add File Auditing to ‘\System32’, ‘\Drivers’, ‘\WBEM’, ‘\Fonts’, and ‘\twain_32’ and set it to audit ‘Create Files’ and ‘Create Folders’ for success.
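
Enabling this is a two-step job: turn on the ‘File System’ subcategory, then put an audit entry (SACL) on each directory.  Here is a sketch of the policy half; on Windows 7 the SACL itself is set through folder Properties > Security > Advanced > Auditing (or via Group Policy), as there is no built-in command-line tool for it:

    :: Object Access - File System must be on for 4663 to be logged at all
    auditpol /set /subcategory:"File System" /success:enable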

d.  Event ID 4663 would have been triggered in the logs for ‘access an object – WriteData (or AddFile)’ on the new files added by the malware, and should be part of malicious tools and activity monitoring.

7.  User Accounts – There is a fair amount of discussion around the accounts used to access the PoS and FTP system(s).  Any good InfoSec program should know what accounts are doing, especially administrative accounts.  Service accounts and administrator accounts have fairly predictable behavior.  It is unlikely that the malwarians used the ‘Best1_user’ credentials in normal ways; a more likely scenario is that account monitoring was simply not being performed.  Successful logins will tell you more than failed attempts, as a compromised account will not generate failed attempts.  The credentials may have been sniffed, cracked or keylogged and then used successfully; maybe even ‘a user account was created’, which is Event ID 4720.

a.  Event ID 4624 would have been triggered in the logs for a ‘New Logon’ and should be part of malicious tools and activity monitoring.

Logging won't catch everything, especially since it must be configured, but there are so many log events I was able to reproduce that it is almost laughable that Target, Neiman Marcus, Michaels and others were unable, or unwilling, to detect all this nefarious noise.  Being PCI compliant is not the same as practicing PCI compliance daily.  Compliance is one of the reasons these large breaches occur (IMHO): we are so busy chasing and filling in compliance paperwork that we fail to actually practice REAL SECURITY.

Well, Target, Neiman Marcus, Michaels and others: I just showed you and many others "HOW", so step up, implement Malware Management, and do what many of us already know.

If you are not logging it, you are NOT doing it right.

Other well-known security tools – Many well-known security tools would have detected the files dropping onto a PoS or Windows system by default.  Tripwire, for example, would see all the new files in \System32.  Custom configuration management tools would also detect new files in key places.  Carbon Black would have seen the files, processes, Registry keys and command-line details from the malware.  The Windows Logging Service (WLS) would have caught the new processes and the command-line syntax used to launch them, as well as the service being added.  I have only named a few security solutions...  No, Anti-Virus software would not catch anything like this until long after it was discovered.

If you collected the logs locally and ran scripts to gather log data, or used a simple syslog server to grep through the logs, you could have seen this activity.  Of course, a robust log management solution like Splunk or one of the many SIEM solutions would have been another great and recommended way to detect this behavior, “IF” properly configured.  Most larger organizations have compliance requirements like PCI and are required to use an advanced logging solution.
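
Even without a SIEM, a few lines of batch can sweep a query like the ones above across systems; a rough sketch, where hosts.txt is a hypothetical list of PoS or server names and remote event log access is allowed:

    :: Pull new-service installs (7045) from every host in hosts.txt
    :: (use %h instead of %%h if you run this interactively)
    for /f %%h in (hosts.txt) do wevtutil qe System /r:%%h /q:"*[System[(EventID=7045)]]" /f:text /c:50 >> service-sweep.txt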

Of course there are many more things we could do to adequately detect and respond to security incidents before they turn into a large “Event” that has you preparing for “Breach Notification”.  So prepare now and integrate the "Malware Management Framework" into your information security program before it is too late.

Don’t be a Target!

Want to learn more?  Come to BSides Austin 2014 and take the 4 hour “Windows Logging Workshop”.  Of course watch my Blog for more on Blue Team Detection and Response techniques.

REFERENCES:

McAfee Blog:
  • http://blogs.mcafee.com/mcafee-labs/analyzing-the-target-point-of-sale-malware
McAfee Report:
  • https://kc.mcafee.com/resources/sites/MCAFEE/content/live/PRODUCT_DOCUMENTATION/24000/PD24927/en_US/McAfee_Labs_Threat_Advisory_EPOS_Data_Theft.pdf
IBM X-Force Blog:
  • http://securityintelligence.com/target-data-breach-kaptoxa-pos-malware/
Brian Krebs – Krebs on Security Blog:
  • https://krebsonsecurity.com/2014/01/new-clues-in-the-target-breach/