I saw a message recently that DoD isn’t happy with the quality of training provided to those folks who, under DoD 8570, are required to gain certification based on their position and functional area. The complaint is that these certifications (and the associated training and education) are producing folks who can talk the talk about security but can’t walk the walk.
As an educator involved with information security training and education, I’m concerned by these results (perceptions). This is not the first time I’ve heard these complaints: the head security guy at a large networking company told me that their biggest need was for people who both understood security and were capable of “system administrating” servers and the like.
Some certifications require hands-on demonstration of skills and knowledge (CCIE is one that comes to mind off the top of my head); I’m sure there are others. As far as security degrees and certifications go, too many of them are based on writing papers and passing exams — the difference between knowing “that” and knowing “how.”
Some jobs may only require that you know that, and not how. I may not know the details of how to configure a Cisco switch or a router … but I should be able to whiteboard the protocol interactions of a machine on subnet A sending a UDP packet to a machine on subnet B through the intervening switches and router. And I should be able to say, when asked a security question, whether a security control would best be implemented in the switch, the router, both, or neither — and be able to justify that answer.
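The first step of that whiteboard answer is really a subnet-membership test: the sender ARPs for the destination itself if it’s on-link, and for its default gateway if it isn’t. A minimal sketch using Python’s standard `ipaddress` module (the addresses and the function name are mine, for illustration):

```python
from ipaddress import ip_address, ip_network

def next_hop(src_net, dst_ip, gateway):
    """Decide the layer-2 next hop for a host's outbound packet:
    the destination itself (same subnet) or the default gateway."""
    if ip_address(dst_ip) in ip_network(src_net):
        return dst_ip    # on-link: ARP for the destination directly
    return gateway       # off-subnet: ARP for the gateway (router)

# Host on subnet A (192.0.2.0/24) sending UDP to a host on subnet B
print(next_hop("192.0.2.0/24", "198.51.100.7", "192.0.2.1"))  # -> 192.0.2.1
print(next_hop("192.0.2.0/24", "192.0.2.42", "192.0.2.1"))    # -> 192.0.2.42
```

The switch never sees past layer 2, which is exactly why the “switch, router, both, or neither” question has a defensible answer.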
You know it must be spring when the value-of-certifications discussion heats up again. I recently read yet another article debating this issue, and I was struck by the following question posed by a CIO: “Who would you rather hire? Someone who got their CISSP in 2007 [candidate 1], or someone who got it in 2012 [candidate 2]?”
Well, that got me thinking. Let’s assume that both candidates have a current certification. ISC2 requires that you earn 120 CPEs (Continuing Professional Education credits) in the course of three years, with a minimum of 20 per year (no cramming everything into the last year). ISC2 also requires a mix of education and job-related experience in order to achieve the full certification (you may sit for the exam and receive a provisional certification contingent on obtaining the work experience as you go). So candidate 1 met the education/experience requirements in 2007 and has accumulated 120 CPEs over the last 5 years (more likely 160, since she would have had to accumulate at least 20 CPEs per year after her certification renewed in 2010).
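That CPE arithmetic can be sanity-checked in a few lines. This is just a sketch of the counting in this post (full cycles need the whole 120, a partial cycle still needs the 20-per-year minimum), not ISC2’s official renewal rules:

```python
def min_cpes(years_since_cert, cycle_years=3, cycle_total=120, yearly_min=20):
    """Lower bound on CPEs accumulated by a member in good standing:
    120 per completed 3-year cycle, plus 20 per year of a partial cycle."""
    full_cycles, partial_years = divmod(years_since_cert, cycle_years)
    return full_cycles * cycle_total + partial_years * yearly_min

print(min_cpes(5))  # certified in 2007, evaluated in 2012 -> 160
print(min_cpes(3))  # one full cycle -> 120
```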
Now consider candidate 2. To get the full CISSP, he would have had to meet the education/experience requirements as well. Either he already met the requirements (meaning he has had related job experience), or he has gained that experience over, say, the last 4 years.
So, how to choose? The answer seems clear to me: what have each of these candidates been doing over the last, say, four years? Does their experience match your job requirements? What do you value in a candidate? Point is: unless you consider their experience in the interim, knowing when they got their CISSP certification is a pretty meaningless piece of information. You can’t assume that the 2012 candidate is fresher and more up-to-date than the 2007 candidate. The best you can do is to take into account that both individuals showed the initiative and drive to prepare themselves and to sit for the examination. After that, you need to look more closely at each candidate in light of their experience, personal characteristics, and the key attributes that you’re seeking in an employee for the position you’re offering.
But you knew that already. Didn’t you?
I’ve been kicking around the idea for a while now that reverse engineering is the skill shared among researchers, crackers, malware analysts and security testers. A recent conversation with a colleague, however, convinced me that the focus needs to be on RE in the large, since the actual contents of the executable may forever remain opaque. Software doesn’t operate in isolation: computing resources are used, network resources are used, modifications are made. Instead of analyzing the code, watch the behavior, and this includes libraries loaded, files accessed, files created, network resources used, registry entries, etc. etc. etc. Locard’s Exchange Principle still applies: somewhere the attacker will leave a mark.
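That behavioral inventory can be started with nothing more than the operating system’s own bookkeeping. A minimal, Linux-only sketch that reads `/proc` (the function name is mine, not a standard tool; a real analyst would also watch network sockets, registry equivalents, and changes over time):

```python
import os

def behavior_snapshot(pid="self"):
    """Snapshot some externally observable behavior of a process:
    the shared libraries it has mapped and the file descriptors it
    holds open -- the kind of marks Locard's principle promises."""
    base = f"/proc/{pid}"
    # Memory-mapped files include every shared library the process loaded.
    with open(f"{base}/maps") as f:
        libs = sorted({line.split()[-1] for line in f
                       if line.rstrip().endswith(".so") or ".so." in line})
    # Open descriptors reveal files, sockets, and pipes in use.
    fds = []
    for fd in os.listdir(f"{base}/fd"):
        try:
            fds.append(os.readlink(f"{base}/fd/{fd}"))
        except OSError:
            pass  # descriptor vanished between listdir and readlink
    return {"libraries": libs, "open_fds": fds}

snap = behavior_snapshot()
print(len(snap["libraries"]), "shared libraries mapped")
```

Even when the executable itself resists static analysis, this kind of outside-in view is always available.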
It’s a truism that many companies, after creating an initial security policy, fail to either enforce or amend that policy. Standing pat in the technology field is a recipe for creeping disaster. Witness the recent efforts by Microsoft to wean trailing-edge users off IE6. Old, creaky, bug-ridden, and flaw-riddled as it may be, some organizations are grimly hanging on to it because key applications will work with no other browser version.
This in itself might not be as bad if the use of this browser was restricted only to the corporate networks. Perhaps this was the original intent of the original security policy. But the policy in effect seems to have been written for desktop systems that never left the confines of the corporate office.
Consider the following. A company (we’ll call it BeanCo) is still distributing Windows XP SP3 as its standard image for company-supplied computers (it’s ironic to see the XP login screen on a laptop with the Intel Core i5 and “suitable for Windows 7” logos pasted on the front). One of the things the Windows XP firewall lacked was the ability to sense when the user had connected to a new network, ask which kind it was (home, work, or other), and apply the matching policy. If the answer was “big bad Internet,” certain capabilities (like file and printer sharing) would be turned off.
Not so in the BeanCo standard release. All Windows sharing services are enabled by default. So turn them off at the firewall, you say. Ah HA! Not so fast, Bunky! Turns out that group policy doesn’t allow a user to manipulate the firewall rules. Bazinga! Hoist by your own petard.
Now, it might be the case that the IT group at BeanCo assumed that the employee grabbing a quick latte at Starbucks in the morning, wanting to check corporate email, would fire up the corporate VPN client and join the corporate network, implying that the availability of these services would be protected by corporate security products. Of course, these services haven’t been disabled on the local WiFi network. On the other hand, corporate email is also available via the Internet using (you guessed it) IE6. No VPN there. And what might happen in the presence of a targeted phishing email sent to the employee’s corporate address? Surfing to that address whilst in said coffee shop doesn’t provide any of the corporate protection. Just you, your creaky old IE6, and a too-permissive set of firewall permissions.
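When you can’t touch the firewall rules yourself, you can at least audit what’s reachable. A quick self-check sketch for the usual Windows sharing ports (BeanCo and its configuration are this post’s hypothetical; the host and timeout are illustrative defaults):

```python
import socket

SHARING_PORTS = {135: "RPC endpoint mapper",
                 139: "NetBIOS session service",
                 445: "SMB file/printer sharing"}

def exposed_sharing_ports(host="127.0.0.1", timeout=0.5):
    """Return which Windows sharing ports accept a TCP connection --
    a coffee-shop self-audit when group policy locks the firewall."""
    open_ports = {}
    for port, name in SHARING_PORTS.items():
        with socket.socket() as s:
            s.settimeout(timeout)
            try:
                if s.connect_ex((host, port)) == 0:  # 0 means connected
                    open_ports[port] = name
            except OSError:
                pass  # treat timeouts/odd errors as "not reachable"
    return open_ports

print(exposed_sharing_ports())
```

An empty result is what you want to see from the coffee-shop side of the laptop.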
It can cost a lot to keep up with the Joneses. It can cost even more if you don’t.
NIST just published a new report, the Glossary of Key Information Security Terms. At 211 pages, it’s hard to think that something is missing, and yet … One nice thing about this doc is that it tracks each definition back to a source document, so you can read the original if you so desire. Now you can learn to talk correct, just like the real security x-pertz do.
In a post on the VMware Communities blog, Allwyn Sequeira had this to say:
As we begin to deploy such hybrid clouds, we need to tackle several issues, even in the infrastructure layer, let alone higher level PaaS and application stacks. For example, networking topologies and architectures start to come into play. It is one thing to create air-gapped silos in enterprises, where network segmentation via VLAN/subnet delineation and hair-pinned firewalls, realize separate zones of trust. The holy grail of public cloud infrastructure is creation of banks of compute and storage resources on a fast converged fabric interconnect, and then being able to instantly allocate secure, elastic VDCs for enterprises to place their VM collections into. In this environment, there is a need for a programmable fabric, wherein trust zones are fungibly constructed around VM/storage collections, regardless of underlying network topology.
Easier said than done.
I couldn’t agree more. As an old network guy who now does security, the challenges of integrating such networks can’t be overstated, and will require the co-ordination and co-operation of virtualized network folks and virtualized computing folks. The vNet meets the vDataCenter.
And how do we teach this subject of virtualization and security?
I’ve chosen to run the following Firefox add-ons as a nod to better security and privacy while browsing. This list isn’t exhaustive, and there are certainly others. These are the ones I use as of Valentine’s Day, 2011.
- Adblock Plus
- BlackSheep (only on machines that have a chance of leaving my home and/or office)
- Flashblock — in addition to NoScript
- HTTPS-Everywhere — in response to FireSheep.
- NoScript — the big one.
- Ghostery — who’s tracking me.
- BetterPrivacy — watch out for Flash LSOs
Ok, maybe this is “safer surf” instead of “safe surf”, but I think it will do.
My friend and colleague at Brandeis University, Ramesh Nagappan, wrote an interesting blog post on Firesheep, BlackSheep, and HTTPS Everywhere. Well worth a read, and I heartily recommend HTTPS Everywhere. A nice little hack that involves URL re-writing for particular sites.
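The re-writing idea itself is simple: before a request leaves the browser, known-good sites get their `http://` URLs rewritten to `https://`. A toy sketch of that mechanism (the patterns here are made up for illustration; they are not HTTPS Everywhere’s actual rulesets, which are per-site XML files):

```python
import re

# Illustrative rules in the spirit of HTTPS Everywhere's rulesets.
RULES = [
    (re.compile(r"^http://(www\.)?example\.com/"), r"https://\1example.com/"),
]

def rewrite(url):
    """Rewrite a matching http:// URL to https:// before the request
    is sent; leave everything else untouched."""
    for pattern, replacement in RULES:
        if pattern.match(url):
            return pattern.sub(replacement, url, count=1)
    return url

print(rewrite("http://www.example.com/login"))  # -> https://www.example.com/login
print(rewrite("http://other.example/"))         # unchanged
```

The point is that the rewrite happens client-side, so a session cookie never crosses the open WiFi in the clear for covered sites.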
In addition, remember that network access over a public (unencrypted) WiFi network does put your communication at risk unless you’re either using a VPN to a remote location, or you are using an SSL-protected protocol such as HTTPS (or POP3S, etc.) that encrypts the communication between your computer and the remote server. So, mehson, “let’s be careful out there.”
Two things are starting to bug me.
First is a site served over HTTPS that also loads content over plain HTTP. In IE, at least, this results in an annoying pop-up asking whether you want to see content from this site that wasn’t delivered securely (or something like that). Every single time. Please, Web designers: take this into account when you’re designing your site.
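Designers can catch this before their users do. A crude scan for insecure subresource URLs in a page meant to be served over HTTPS, using the standard library’s `html.parser` (a sketch only; real mixed-content checkers also inspect CSS, inline scripts, and dynamically loaded resources):

```python
from html.parser import HTMLParser

class MixedContentScanner(HTMLParser):
    """Collect subresources referenced over plain HTTP -- the cause
    of the browser's 'insecure content' warning."""
    def __init__(self):
        super().__init__()
        self.insecure = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("src", "href") and value and value.startswith("http://"):
                self.insecure.append((tag, value))

scanner = MixedContentScanner()
scanner.feed('<img src="http://cdn.example.com/logo.png">'
             '<script src="https://example.com/a.js"></script>')
print(scanner.insecure)  # -> [('img', 'http://cdn.example.com/logo.png')]
```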
Secondly, this is only exacerbated by some sites’ promiscuous use of third-party content. The resulting “mash-up” is like the proverbial cluster-phuX. Consider one site in particular: www.boston.com. I use AdBlock, NoScript, and Flashblock when browsing with Firefox. When I access boston.com I am asked to trust content from 12 sites. Usually it takes 3 or 4 mouse-clicks (“temporarily allow”) for me to get a complete Web page.
As drive-by attacks via compromised Web servers become even more prevalent, I for one worry about these “mash-ups” of content (which have only been made worse with the advent of social networks [tweet/friend/mySpace/blog me]). I belong to the generation that still holds out some belief in privacy and a certain degree of anonymity in Web browsing. I’m OK with boston.com knowing about my page visits, but outbrain.com? Not so much.
One final anecdote. I was using the thesaurus at Merriam-Webster (m-w.com) one day at work when I got bounced out to the security page indicating that my access to a particular Web site had been blocked. Turns out m-w.com was loading advertising pages from a site that had purportedly been used to supply malware.
You can look it up.