The Next Wave – Preventive Security and Statisticians

Over the past couple of weeks I have concluded that enough (bad) breath has been spent ranting about how system and security auditors are missing the mark. But one cannot reasonably point a finger in only one direction – it takes two to tango – so it is now time to point out what CIOs and administrators of secure environments should start considering in order to prevent incidents. And along the way, add a rant or two about how the average CIO is also an administrative, paper-pushing policy guru without real systems administration experience – most come from a consulting background, have never had to own a system for more than a year, and in some cases have no hands-on experience at all. Even more amazing – and I see this all the time when we go to propose on PCI projects – is the number of CIOs that really do not know their own network architecture. Just as SOX effectively put a CPA on the board of every public corporation, a CIO with a minimum certification should be required for enterprises above a certain size.

Okay, okay, I will hold back on the ranting and cover some of the things that are actually informative…

First, this article from Network World goes into some detail about how “the hackers” (from China) managed to get into source code repositories, move code over Google’s own WAN to sites in China, and then successfully transfer it out via local connections. Does anybody besides me think that… hmmm. Let’s go first-person: if I were in charge of security at Google – basically a software company – wouldn’t one of my biggest priorities be to make sure the source code management systems were secure? Better yet, split security into domains: internet security for the services offered; development security for source protection, test validation, and release control; and internal security focused on internal threats, training, and awareness. Well, it’s not just Google. A couple of other companies were hit as well – Intel, which should have already learned these lessons, and Symantec, which is in the security business. That companies like these were this vulnerable – getting taken for loads of source code, or having existing source code changed – should be a bigger shock than the fact that the Chinese government may be backing the whole incident. Where does this take us? Back to the preventive security argument. Preventive security measures would have prevented all (okay, at least most) of this mess, point for point.

First, training and awareness would have prevented Google employees who were not already on Chrome or Firefox from starting Internet Exploder and getting phished in the first place. Targeted attacks these days use a variety of methods, but the one sure-fire method of late is spear phishing, which is outlined in detail here and here. My employees and I have been the target of a couple of these attacks recently. Some of these emails are so well crafted that shivers go up your spine – they know where you work, your functional department, your work email, and other details. This is a major upgrade over traditional phishing: the language is fluent, official English and, unless you are careful, quite convincing. In my case, the attacker knew that I could read Japanese and sent a rather fluent Japanese message, with a fluent English follow-up a day later. This level of awareness needs to be taught, reinforced, posted on corporate banners in break rooms, and made part of a current, ongoing awareness program.

Second, periodic measurement of certain environment variables would probably have picked up on the code transfer across the Google WAN. Generally, IT and security management fails to measure its environments appropriately. In fact, most managers are tied up in – and pride themselves on – their ‘management’ and people skills, so they don’t think they should be part of the measurement process. The value of knowing the statistics within your environment cannot be overstated. Better: knowing what to measure, why to measure, and how to measure, documenting all of the above, and combining it with proper analysis is one of the best preventive security advances in recent history. In other words, security metrics might have saved the day here. A few good security metrics for a security manager in charge of source code are:
1) number of check-ins and check-outs on the CVS system
2) number of check-outs without an associated check-in – i.e., outstanding check-outs
3) number of check-outs from foreign (or branch) locations
While none of the above addresses a specific security vulnerability the way, say, virus definition update metrics do, they do establish a risk disposition for source code control. If you are one of those who are afraid of measurement and statistics, start out slow: go to the security metrics link above, then visit Carnegie Mellon University’s open and free courses in the Open Learning Initiative. There is a good starter statistics course in there that you could finish in one to two weeks with less than an hour a day.
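The three metrics above are cheap to compute if your source control system emits an audit log. A minimal sketch, assuming a hypothetical log of `(user, action, site)` events and an assumed `trusted_sites` list – adapt the parsing to whatever your SCM actually records:

```python
# Hypothetical sketch: computing the three checkout metrics above from a
# version-control audit log. The event format and trusted_sites set are
# illustrative assumptions, not any real SCM's output.
from collections import Counter

# Each event: (user, action, site) where action is "checkout" or "checkin"
events = [
    ("alice", "checkout", "hq"),
    ("alice", "checkin",  "hq"),
    ("bob",   "checkout", "branch-cn"),
    ("carol", "checkout", "hq"),
]
trusted_sites = {"hq"}

checkouts = [e for e in events if e[1] == "checkout"]
checkins  = [e for e in events if e[1] == "checkin"]

# Metric 1: total check-in/check-out activity
total_activity = len(checkouts) + len(checkins)

# Metric 2: outstanding check-outs (per-user checkouts minus check-ins)
balance = Counter(user for user, _, _ in checkouts)
balance.subtract(Counter(user for user, _, _ in checkins))
outstanding = sum(n for n in balance.values() if n > 0)

# Metric 3: check-outs from outside trusted locations
foreign = sum(1 for _, _, site in checkouts if site not in trusted_sites)

print(total_activity, outstanding, foreign)
```

Trend these weekly; the interesting signal is not any single number but a sudden spike in metric 2 or 3 against the baseline.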
Third, and the most glaring gap in this whole incident, is source encryption and access control. In most source code control systems for secure environments, a developer cannot just go to a repository and download a couple of gigs of code without some type of higher-level authorization. Leaving that open is so amateur from a security and secure coding perspective that it practically begs to be hacked.
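The bulk-download rule is simple to state and simple to enforce at the repository front end. A minimal sketch, where the 100 MB threshold, function name, and approval model are all my own illustrative assumptions rather than any real SCM's API:

```python
# Hypothetical sketch of the bulk-checkout rule described above: checkouts
# over a size threshold require a countersignature from someone other than
# the requester. The limit and data model are illustrative assumptions.

BULK_LIMIT_BYTES = 100 * 1024 * 1024  # anything larger needs sign-off

def authorize_checkout(user, size_bytes, approvals):
    """Return True if the checkout may proceed.

    approvals: set of names who have countersigned this request.
    A bulk checkout needs at least one approver other than the requester.
    """
    if size_bytes <= BULK_LIMIT_BYTES:
        return True
    return any(approver != user for approver in approvals)

# A developer pulling "a couple gigs" alone is refused...
assert not authorize_checkout("dev1", 2 * 1024**3, approvals=set())
# ...but passes once a security lead countersigns.
assert authorize_checkout("dev1", 2 * 1024**3, approvals={"seclead"})
```

The point is not the threshold itself but that the check happens server-side, where the audit log lives, rather than relying on client-side policy.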

Fourth and last, but not least: release control and code review – which address whether any of this code was actually deployed with back doors. This is one of, if not the, biggest flaws in modern software development. Somebody who actually knows, reads, and understands the code must hold the authorization power to release it into an environment. I can safely say that over 80% of the banks operating in Japan (including the foreign multinationals) have some schmuck named Handa-san with a CISA certification and the title Information Risk Manager (IRM) who is in charge of signing off on all test results and code releases. That so-called IRM in most cases has never written a computer program and could not begin to decipher a chunk of code from just about any framework in any language. But he signs his name away, and when authorities and auditors ask if there is a sign-off, they get the right answer.
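A release gate that enforces the distinction above – an approval backed by files actually reviewed, not a rubber stamp – can be stated in a few lines. A minimal sketch; the data model and function name are my own illustrative assumptions:

```python
# Hypothetical sketch of a release gate enforcing the point above: code is
# promoted only if some approver has actually reviewed every file being
# shipped, rather than merely signing a form. The model is an assumption.

def may_release(approvals, files_in_release):
    """approvals: {approver_name: set of files that person reviewed}.
    Require one approver whose reviewed set covers the whole release."""
    needed = set(files_in_release)
    return any(needed <= reviewed for reviewed in approvals.values())

release = ["auth.c", "payments.c"]
# A paper sign-off with no files read does not count...
assert not may_release({"handa": set()}, release)
# ...a reviewer who read both changed files does.
assert may_release({"lee": {"auth.c", "payments.c"}}, release)
```

In practice this means wiring the review tool's record of who read which diff into the release pipeline, so the sign-off the auditors see is backed by evidence rather than a signature alone.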
Enough ranting… I hope you enjoyed the links or found some insight in the sharing. On a personal note…
Lately I have been joining a techie group of Japanese for a Sunday night radio show on Radio Tsukuba. Tsukuba University is Japan’s MIT equivalent, so the audience and participants are just as eccentric. The broadcasts are in Japanese and the recordings are here. The team there has also asked if I can do a three-to-five-minute sideline on technical English – useful English pointers for ham radio operators – which is what I’ll start working on in a few minutes… stay tuned. And comment! Retort! Or otherwise express yourself in a non-spam fashion in the comments!
