Friday, December 27, 2013

What The Target Data Breach Tells Us About Credit Processing Flaws

Don’t let Target fool you; the breach of magnetic stripe information from 40 million U.S. credit and debit cards, including debit PINs protected with breakable encryption, is a very big deal. The company’s crisis response has focused on regaining consumer confidence by convincing people that they are protected against fraud. It’s a legal confidence game, one in which the entire retail and financial services industries conspire to instill a sense of security where there is none. They want us to believe that they have our backs and will protect us from fraud. Believe them at your own peril.

I don’t know what irritates me more: that we as consumers are so gullible as to place much of our financial health in the hands of companies built solely to extract as much wealth as possible from us, or that the credit card data breach was predicted, repeated, and completely avoidable. Ignorance rules on both sides, and the consumer bears the majority of the expense.

Friday, September 13, 2013

How I Learned to Stop Worrying about Security and Love Incremental Development

This is a follow-up to my previous post: Agile Development and Security in Government.

The security authorization processes that U.S. Government agencies implement to comply with guidelines defined by the National Institute of Standards and Technology (NIST) fail to support incremental development methodologies like agile and spiral. Instead, I argue that they promote "big-bang" system releases that are impractical in contemporary budgetary conditions and generally seem to fail more than they succeed. Agency Chief Information Officers (CIOs) can fix the problem, but only if they reinvent their authorization processes and redefine process responsibilities.

Like Dr. Strangelove's bomb, agency Information System Security Managers (ISSMs) generally discount the importance of incremental development to streamlining government. When I met with a security policy executive at one agency to discuss reconciling my project's agile incremental management methodology with the agency's security process, she was none too pleased with the notion.

"Projects use agile to bypass security requirements. I will not allow that to happen."

Agile Development and Security in Government

All IT domains continue to make broad use of incremental system and software development methodologies to improve the efficiency of deploying projects small and large. Those methodologies are even extending beyond traditional development to include system integration and program management. When it comes to the U.S. Government, though, there is one aspect of oversight that is preventing managers from making effective use of incremental methodologies: Security. While project teams share some blame by often actively and explicitly discounting security objectives (in my direct experience), I submit that the lion's share of the blame should fall on Information System Security Officers (ISSOs) and Managers (ISSMs).

The National Institute of Standards and Technology (NIST) has also failed to execute its mission to be "responsible for developing information standards and guidelines" in what I would consider a timely and effective manner in relation to incremental development methodologies. But agency Chief Information Officers (CIOs) could meet legacy NIST guidelines, certify systems developed under incremental methodologies, and even improve the security of their agency systems, if only they were willing (and able) to make some strategic changes in how they manage system security compliance activities.

Friday, June 21, 2013

The Critical Need for Liberal Arts in Security

"As we strive to create a more civil public discourse, a more adaptable and creative workforce, and a more secure nation, the humanities and social sciences are at the heart of the matter, the keeper of the republic - a source of national memory and civic vigor, cultural understanding and communication, individual fulfillment and the ideals we hold in common."

Security professionals often state that security is an art, not a science. This field demands a certain degree of finesse, elegance, imagination, creativity, and a fine-grained understanding of technology. We characterize the act of securing assets and information as finding the right balance between people, process, and technology: the security triumvirate. Yet look at any job posting in security over the past 15 years (about the duration of time that I've worked in the field), and you find this:

Education: Degree in Computer Science, Mathematics, or any comparable field.

Friday, May 24, 2013

Two-Step Verification (2SV) is not Two-Factor Authentication (2FA)

This week, Twitter became the most recent online service to move to 2-Step Verification (2SV). One high-profile intrusion recently sent stocks spiraling when an attacker posted false news of a White House bombing after gaining access to the Associated Press Twitter account (@AP) through a successful phishing attack. While Twitter had been reportedly working on a new authentication solution, the AP event likely accelerated those efforts.

Following Twitter's announcement, the media and supposed security industry pros once again perpetuated confusion over what constitutes "Authentication" versus what constitutes "Verification." Bloggers over at CNET provided two fine examples of this confusion just yesterday in response to the Twitter news. First, at 2:44 PM PDT on May 23 (time stamped as of 5:00 AM PDT on May 24), Jason Cipriani posted, How to use Google Voice with two-step authentication. Shortly thereafter, at 5:29 PM PDT (time stamped as of 5:00 AM PDT on May 24), Seth Rosenblatt posted, Two-factor authentication: What you need to know (FAQ). Jim Fenton, the Chief Security Officer for OneID, a company that doesn't even address either 2FA or 2SV, has the industry credentials to seem reputable, but fails to effectively convey the difference between the two methods in his recent posting, Two-factor authentication is a false sense of security.

Look a little deeper at the companies that are implementing similar solutions, and the vocabulary remains just as inconsistent.
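The distinction is easiest to see in the mechanics. A password plus a time-based one-time password (TOTP) is genuinely two-factor, because the TOTP code is derived from a secret held on a device you possess, while a code texted to you after you type a password is better described as a second step. As a minimal sketch of the possession-factor side, here is an RFC 6238 TOTP generator using only the Python standard library (the secret and parameters below are the RFC's published test values, not anything tied to Twitter's or Google's actual services):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, period=30):
    """Compute an RFC 6238 TOTP code from a base32-encoded shared secret.

    The code is derived from a secret stored on the user's device, which
    is what makes it a possession factor rather than a second knowledge
    step.
    """
    key = base64.b32decode(secret_b32, casefold=True)
    # Number of `period`-second intervals elapsed since the Unix epoch.
    counter = int((time.time() if at is None else at) // period)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): take 4 bytes at an offset given by
    # the low nibble of the last digest byte, masked to 31 bits.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T=59s, 8 digits.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59, digits=8))  # → 94287082
```

The point of the sketch is that the server and the device each compute the code independently from the shared secret; nothing is transmitted to the user at login time, which is precisely what an SMS-delivered verification code cannot claim.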

Tuesday, April 16, 2013

The (Mis)Perception of Safety

Yesterday, I was fascinated by the fact that my kids were so enthralled by the athleticism and spirit shown by the runners in the Boston Marathon that once they returned home from watching the race, they hosted their own "marathon" in our back yard, complete with number sheets taped to their shirts. They were oblivious to the tragedy that had occurred, innocence retained despite its loss elsewhere. I tend to think my fascination humanizes me a little.

Being a security professional during catastrophic human events such as the Boston Marathon Bombing is a sobering position. At times manic with grief and disbelief, and at others a bit calculating and analytical, I can probably come off as standoffish at best, inhumane at worst. I accept the perception that others have of me, but I would argue that approaching such events from a reasoned perspective is the best way to counter irrational acts of violence.

Friday, April 12, 2013

Cloud Computing Dangers: Just Forget About It

This is the final posting (Part 10) of the Case Study in Cloud Computing Dangers.

By the end of May 15, 2012, Day 7 of our outgoing mail Denial-of-Service on Office 365, everything had returned to normal. I was thrilled to find my VA email address flooded with test messages from the preceding week.

Relief. And then, nothing.

We received no update from Microsoft, no communication of any kind, no satisfactory closure of any help desk tickets. Nothing, except for business as usual.

Friday, March 1, 2013

Cloud Computing Dangers: Stand By and Wait

This posting is Part 9 of the Case Study in Cloud Computing Dangers.

It took six days after I detected an outgoing mail Denial-of-Service for Microsoft to publish a public admission that a problem did truly exist. In the contemporary fast-paced IT world, for any problem to take six days to recognize is like waiting to be taken across the river Styx. But I doubt that Microsoft was working on its obituary.


Currently Office 365 outbound email servers have a SenderBase reputation of neutral and a score of 0. As a result any policy set to throttle or reject mail from a server rated neutral or with a score of less than or equal to 0 may impact delivery of the mail from Office 365 customers.  

Microsoft currently believes this is due to an instance where a large number of bulk mail messages were sent to a user via a server that contributes reputation information. This mail did not get classified as spam by us, the sender is reputable, but the volumes, combined with Cisco’s rating system, have temporarily reduced our email servers' reputation in their SenderBase service. According to Cisco, it will take time and additional mail flowing through their system to retrain it and restore our email servers’ reputation.
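The advisory describes receiving gateways that throttle or reject mail based on a coarse reputation label plus a numeric score. The policy logic it implies might be sketched like this; the function name, labels, and threshold are hypothetical illustrations, not Cisco's actual SenderBase configuration:

```python
def gateway_action(reputation, score, throttle_at=0.0):
    """Hypothetical mail-gateway policy of the kind the advisory describes.

    `reputation` is a coarse label ("good", "neutral", "poor") and `score`
    a finer-grained value; neither mirrors Cisco's real configuration.
    """
    if reputation == "poor":
        return "reject"      # known-bad senders are refused outright
    if reputation == "neutral" and score <= throttle_at:
        return "throttle"    # rate-limit senders with no earned track record
    return "accept"

# Office 365's servers at the time: neutral reputation, score of 0,
# so a policy like this one would throttle their mail.
print(gateway_action("neutral", 0))  # → throttle
```

Under a policy of this shape, a sender with no negative history at all can still be rate-limited simply for lacking a positive one, which is exactly the trap the Office 365 gateways fell into.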

Tuesday, February 12, 2013

Cloud Computing Dangers: A Case of the Mondays

This posting is Part 8 of the Case Study in Cloud Computing Dangers.

We started the business day on May 14, 2012 finally able to send email to the primary contractor on our VA project, but not to the VA email accounts. This development was not an indication that Day 6 represented the end of our outgoing mail Denial-of-Service between our Office 365 cloud service and just about any mail gateway using Cisco devices, or any other devices that relied on Cisco's SPAM reputation scoring. The contractor's organization had simply been shamed (either internally or externally) into lowering its SPAM blocking threshold to allow through any email rated Neutral. Not only was the organization the victim of being unable to receive legitimate email from business partners and clients, it was forced into making a business decision that would allow more malicious messages to pass through the gateway. It was not a good sign.

Friday, February 8, 2013

Cloud Computing Dangers: Blame When Things Go Wrong

This posting is Part 7 of the Case Study in Cloud Computing Dangers.

When technology problems occur, IT folks will typically focus first on finding a technical solution. It's in our nature because solving technical problems is what we've been trained to do. Waking up on Sunday, May 13 to find ourselves still suffering from an outgoing mail Denial-of-Service on our Office 365 business platform, we were in disbelief that the technical problem still had not been solved. Our challenge was to move past our confidence in understanding the problem's technical nature and to recognize that we were falling victim to a broader issue of being unable to assign responsibility in a massively distributed communications system.

Friday, February 1, 2013

Cloud Computing Dangers: False Hope

This posting is Part 6 of the Case Study in Cloud Computing Dangers.

On Saturday, May 12, as my company continued to suffer from an Office 365 outgoing mail Denial-of-Service, I woke up to an email forwarded by a colleague from the primary contractor we had been unable to reach. A test message that I had sent at 3:33 PM on Thursday, May 10 had been received at 2:24 AM Saturday morning. Despite a transit time of just under 35 hours, I was elated to discover that a message had gotten through. Perhaps Microsoft was really true to its word and we could expect to have the problem resolved soon so that we could move on with our lives. Or perhaps it was just a fluke, since I hadn't seen any other messages get through.

Saturday, January 26, 2013

Cloud Computing Dangers: Seeking Problem Clarity

This posting is Part 5 of the Case Study in Cloud Computing Dangers.

After establishing the legitimacy of our outgoing mail Denial-of-Service on the morning of May 11, we expected Microsoft to resolve the issue by the end of the day. Since it was related to some SPAM condition associated with the Office 365 outgoing mail gateways, Microsoft should have been able to rally its resources, quickly address the technical problem, and enable us to re-establish communications with our largest customers. We were overly optimistic.

Wednesday, January 16, 2013

Cloud Computing Dangers: Pointing the Finger

This posting is Part 4 of the Case Study in Cloud Computing Dangers.

All businesses face significant IT challenges, but they are far more daunting for small businesses with limited resources with which to tackle them. Cloud computing in any form, be it Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), Software-as-a-Service (SaaS), or WhateverYouImagine-as-a-Service (WYIaaS), promises to level the playing field by providing small businesses a level of enterprise support that they couldn't possibly retain individually, all at a "low" regular subscription fee (at least lower than the alternative CapEx/OpEx values). With the level of support that a small business receives from a large organization such as Microsoft, the business should reasonably expect a much more available and resilient resource than it could provide for itself. Most business executives can easily see the benefits and are generally eager to sign up.

As someone who has run an IT operations group, I can tell you that IT people immediately blame the user when the user reports a problem. Perhaps it's driven by pride in the environment that they maintain, or by some sense of self-preservation. For whatever reason, the user is wrong until proven right. You can see the results of this in large business help desks that immediately try to pass you off to an online "knowledge base" or threaten you by offering to "take away your computer" to examine the problem more deeply. If the problem is an outlier, then it is assumed to be more likely related to the user than to the system or application. That culture of denial is amplified in a cloud environment, where the service provider knows how to run the system far better than any individual user; so if the provider doesn't detect a problem, then there is no problem.

Sunday, January 13, 2013

Cloud Computing Dangers: Establishing Responsibility

This posting is Part 3 of the Case Study in Cloud Computing Dangers.

At around 4:30 PM on Wednesday, May 9, I was preparing to make the trek from my VA site location near DC's Union Station to my home in Fairfax City, VA. For anyone who isn't well versed in the journey, understand that it is something that you really need to psyche yourself up for. It wasn't uncommon for me to lose 90 minutes of my life making just the one-way trip over the course of just 17 miles. Doing the math, I could travel at just a little over 11 miles per hour, covering a mile in perhaps 5 minutes. Knowing that you will never get that time back, that most of the time you'll be staring at dozens or hundreds of taillights, that you could probably cover the distance faster by bike if you didn't have to wear a suit, is an excruciating fall from innocence that I would promote as the contemporary definition of madness. You have to develop a dissonant optimism to keep from just barreling through a crowded street for a moment of temporary relief. "Maybe it won't be that bad today." "My kids will thank me some day for working so hard." "I'll be able to make soccer practice…no problem."

Jason and I both knew how critical our email communications were for maintaining business continuity. As a small business with less than a dozen revenue-producing employees, our position was tenuous and depended on the perception of always being present, available, and responsive. This problem had cut off our communications with our two largest revenue generators, representing over half of our active business, and with a contractor with which we were working on several proposals. We had to solve the problem, and fast. It seemed obvious to me that I should just break out my iPad and troubleshoot while navigating DC/Northern VA traffic. When Jason realized what I was doing, he simply cautioned, "Please don't kill yourself over this." At least I was able to justify not riding a bike to the office for another day.