Showing posts with label Incident Response.

Sunday, February 23, 2014

Identity Theft: Be Prepared for the Long Haul

Nearly a month after first detecting a potential identity theft when reviewing my credit reports, I’m frustrated by the lack of progress despite my efforts. A recent email from Experian, the credit bureau that seems to be the source of my problems, highlighted the company’s refusal to remove what I believe is the root cause record on my report. Just when I thought I was entering the final phase of cleaning up my credit report, I came to realize that I’m probably just getting through an early chapter in what will be a much longer story.

Saturday, February 15, 2014

Identity Theft: Proof that Life is not Fair

I spent a weekend fuming over the fact that my credit reports from two bureaus showed a fraudulent collection from Dish Network and several personal information entries that listed names, addresses, and phone numbers on my report that were not mine. There were several possibilities for the entries: 1) The bureaus screwed up; 2) Someone fat-fingered my social security number when providing credit for Dish Network service; 3) Someone had fraudulently used my social security number. No matter how little control I had over the initial event, if I wanted clean credit reports, I knew that no one was going to help me out.

Sunday, February 2, 2014

Identity Theft: Guilty Until Proven Innocent

“What is your identity?” It’s more than just an existential question, it’s a question that you need to ask yourself when addressing a potential identity theft situation. To be more precise, you have to ask yourself, “What is it that identifies you?” To begin the recovery process once you detect an identity theft, something that I discussed recently in relation to my own issue, you have to be able to provide documentation that assures everyone involved that you are who you say that you are. Perhaps even more important is the inverse, that you need to be able to show that you aren’t who you say you aren’t.

Monday, January 27, 2014

A Victim of Identity Theft?

I believe that I am the victim of identity theft.

At first, I didn’t think much of it. Perhaps my understanding of how personal data flows and security drove me to discount what it was I was seeing as “really no big deal.” Or, maybe I have become so cynical about how the definition of identity theft has expanded to include acts that I wouldn’t naturally consider a “theft” that I disregarded the event. Whatever the root cause of my denial, I’ve moved on. It’s time to deal with the problem and I plan to share my experiences every step of the way.

Friday, December 27, 2013

What The Target Data Breach Tells Us About Credit Processing Flaws

Don’t let Target fool you; the breach of magnetic stripe information from 40 million U.S. credit and debit cards, including debit PINs protected with breakable encryption, is a very big deal. The company’s crisis response has focused on regaining consumer confidence by convincing people that they are protected against fraud. It’s a legal confidence game, one in which the entire retail and financial services industries conspire to instill a sense of security where there is none. They want you to believe that they have your back and will protect you from fraud. Do so at your own peril.

I don’t know what irritates me more, that we as consumers are so gullible as to place much of our financial health in the hands of companies built solely to extract as much wealth as possible from us or that the credit card data breach was predicted, repeated, and completely avoidable. Ignorance rules on both sides, and the consumer bears the majority of the expense.

Tuesday, April 16, 2013

The (Mis)Perception of Safety

Yesterday, I was fascinated by the fact that my kids were so enthralled by the athleticism and spirit shown by the runners in the Boston Marathon that once they returned home from watching the race, they hosted their own "marathon" in our back yard, complete with number sheets taped to their shirts. They were oblivious to the tragedy that had occurred, innocence retained despite its loss elsewhere. I tend to think my fascination humanizes me a little.

Being a security professional during catastrophic human events such as the Boston Marathon Bombing is a sobering position. At times manic with grief and disbelief, and at others a bit calculating and analytical, I can probably come off as standoffish at best, inhumane at worst. I accept the perception that others have of me, but I would argue that a reasoned perspective is the best way to counter irrational acts of violence.

Friday, April 12, 2013

Cloud Computing Dangers: Just Forget About It

This is the final posting (Part 10) of the Case Study in Cloud Computing Dangers.

By the end of the day on May 15, 2012, Day 7 of our outgoing mail Denial-of-Service on Office 365, everything had returned to normal. I was thrilled to find my VA email address flooded with test messages sent over the preceding week.

Relief. And then, nothing.

We received no update from Microsoft, no communication from senderbase.org/Cisco, no satisfactory closure of any help desk tickets. Nothing, except for business as usual.

Friday, March 1, 2013

Cloud Computing Dangers: Stand By and Wait

This posting is Part 9 of the Case Study in Cloud Computing Dangers.

It took six days after I detected an outgoing mail Denial-of-Service for Microsoft to publish a public admission that a problem did truly exist. In the contemporary fast-paced IT world, for any problem to take six days to recognize is like waiting to be taken across the river Styx. But I doubt that Microsoft was working on its obituary.

Cause

Currently Office 365 outbound email servers have a SenderBase reputation of neutral and a score of 0. As a result any policy set to throttle or reject mail from a server rated neutral or with a score of less than or equal to 0 may impact delivery of the mail from Office 365 customers.  

Microsoft currently believes this is due to an instance where a large number of bulk mail messages were sent to a user via a server that contributes reputation information. This mail did not get classified as spam by us, the sender is reputable, but the volumes, combined with Cisco’s rating system, have temporarily reduced our email servers' reputation in their SenderBase service. According to Cisco, it will take time and additional mail flowing through their system to retrain it and restore our email servers’ reputation.
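To make the impact of that neutral rating concrete, here is a minimal Python sketch of the kind of threshold policy the notice describes. The cutoffs, names, and actions are my own assumptions for illustration, not Cisco's actual configuration; SenderBase-style scores roughly span -10 (worst) to +10 (best), with 0 treated as neutral.

    # A minimal sketch (my own illustration, not Cisco's implementation) of a
    # receiving gateway acting on a SenderBase-style sender reputation score.

    REJECT_BELOW = -3.0          # hypothetical cutoff for known-bad senders
    THROTTLE_AT_OR_BELOW = 0.0   # hypothetical cutoff for neutral/unknown senders

    def gateway_action(sender_score: float) -> str:
        """Return the action a gateway with these thresholds would take."""
        if sender_score < REJECT_BELOW:
            return "reject"
        if sender_score <= THROTTLE_AT_OR_BELOW:
            return "throttle"    # rate-limit or defer delivery
        return "accept"

    # With a neutral score of 0, Office 365's outbound servers land in the
    # throttle bucket at any site running a policy shaped like this one.
    print(gateway_action(0.0))   # -> "throttle"

Any recipient whose gateway treated "neutral" this way would silently defer or drop our mail, which is exactly the behavior we had been seeing.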

Tuesday, February 12, 2013

Cloud Computing Dangers: A Case of the Mondays

This posting is Part 8 of the Case Study in Cloud Computing Dangers.

We started the business day on May 14, 2012 finally able to send email to the primary contractor on our VA project, but not to the VA email accounts. This development was not an indication that Day 6 represented the end of our outgoing mail Denial-of-Service between our Office 365 cloud service and just about any mail gateway using Cisco devices or any other devices that relied on senderbase.org for SPAM reputation scoring. The organization had simply been shamed (either from within or without) into lowering its SPAM blocking threshold to allow through any email rated Neutral. Not only had the organization been unable to receive legitimate email from business partners and clients, it was now forced into making a business decision that would allow more malicious messages to pass through the gateway. It was not a good sign.
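A rough sketch of the trade-off the organization accepted follows; the scores and thresholds are invented for the example, and only the idea of relaxing a reputation cutoff comes from the events described above.

    # Hedged illustration: relaxing a reputation cutoff lets legitimate-but-
    # neutral mail through, and riskier mail along with it. All scores and
    # thresholds here are invented for the example.

    def accepted(sender_score: float, minimum_score: float) -> bool:
        """A gateway accepts mail only from senders scoring above its minimum."""
        return sender_score > minimum_score

    office365_outbound = 0.0     # the neutral rating described by Microsoft
    borderline_sender = -0.5     # a low-reputation sender that isn't blacklisted

    # Original policy: anything rated neutral or worse is blocked.
    print(accepted(office365_outbound, 0.0))    # False - our mail bounces
    print(accepted(borderline_sender, 0.0))     # False

    # Relaxed policy: neutral senders are allowed through.
    print(accepted(office365_outbound, -1.0))   # True - our mail is delivered
    print(accepted(borderline_sender, -1.0))    # True - so is the riskier mail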

Friday, February 8, 2013

Cloud Computing Dangers: Blame When Things Go Wrong

This posting is Part 7 of the Case Study in Cloud Computing Dangers.

When technology problems occur, IT folks will typically focus first on finding a technical solution. It's in our nature because solving technical problems is what we've been trained to do. Waking up on Sunday, May 13 to find ourselves still suffering from an outgoing mail Denial-of-Service on our Office 365 business platform, we were in disbelief that the technical problem still had not been solved. Our challenge was to move past our confidence in understanding the problem's technical nature and to recognize that we were falling victim to a broader issue of being unable to assign responsibility in a massively distributed communications system.

Friday, February 1, 2013

Cloud Computing Dangers: False Hope


This posting is Part 6 of the Case Study in Cloud Computing Dangers.

On Saturday, May 12, as my company continued to suffer from an Office 365 outgoing mail Denial-of-Service, I woke up to an email that a colleague had forwarded from the primary contractor we were unable to communicate with. A test message that I had sent at 3:33 PM on Thursday, May 10 had been received at 2:24 AM Saturday morning. Despite a transit time of just under 35 hours, I was elated to discover that a message had gotten through. Perhaps Microsoft was really true to its word and we could expect to have the problem resolved soon so that we could move on with our lives. Or perhaps it was just a fluke, since I hadn't seen any other messages get through.

Saturday, January 26, 2013

Cloud Computing Danger: Seeking Problem Clarity


This posting is Part 5 of the Case Study in Cloud Computing Dangers.

After establishing the legitimacy of our outgoing mail Denial-of-Service the morning of May 11, we expected Microsoft to resolve the issue by the end of the day. Since it was related to some SPAM condition associated with the Office 365 outgoing mail gateways, Microsoft should have been able to rally its resources to quickly address the technical problem and enable us to re-establish communications with our largest customers. We were overly optimistic.

Wednesday, January 16, 2013

Cloud Computing Dangers: Pointing the Finger

This posting is Part 4 of the Case Study in Cloud Computing Dangers.

All businesses face significant IT challenges, but they are far harder to surmount for small businesses with limited resources with which to tackle them. Cloud computing in any form, be it Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), Software-as-a-Service (SaaS), or WhateverYouImagine-as-a-Service (WYIaaS), promises to level the playing field by providing small businesses a level of enterprise support that they couldn't possibly retain individually, all at a "low" regular subscription fee (at least lower than the alternative CapEx/OpEx values). With the level of support that a small business receives from a large organization such as Microsoft, the business should reasonably expect to have a much more available and resilient resource than it could expect of itself. Most business executives can easily see the benefits and are generally eager to sign up.

As someone who has run an IT operations group, I can tell you that IT people immediately blame the user when the user reports a problem. Perhaps it's driven by pride in the environment that they maintain or by some sense of self-preservation. For whatever reason, the user is wrong until proven right. You can see the results of this in large business help desks that immediately try to pass you off to an online "knowledge base" or threaten you by offering to "take away your computer" to examine the problem more deeply. If the problem is an outlier, then it is more likely related to the user than to the system or application. That culture of denial is enhanced in a cloud environment where the service provider knows how to run the system much better than any individual user, so if it doesn't detect a problem, then there is no problem.

Sunday, January 13, 2013

Cloud Computing Dangers: Establishing Responsibility

This posting is Part 3 of the Case Study in Cloud Computing Dangers.

At around 4:30 PM on Wednesday, May 9, I was preparing to make the trek from my VA site location near DC's Union Station to my home in Fairfax City, VA. For anyone who isn't well versed in the journey, understand that it is something that you really need to psyche yourself up for. It wasn't uncommon for me to lose 90 minutes of my life making just the one way trip over the course of just 17 miles. Doing the math, I could travel at just a little over 11 miles per hour, covering a mile in perhaps 5 minutes. Knowing that you will never get that time back, that most of the time you'll be staring at dozens or hundreds of taillights, that you could probably cover the distance faster by bike if you didn't have to wear a suit, is an excruciating fall from innocence that I would promote as the contemporary definition of madness. You have to develop a dissonant optimism to keep from just barreling through a crowded street in a moment of temporary relief. "Maybe it won't be that bad today." "My kids will thank me some day for working so hard." "I'll be able to make soccer practice…no problem."

Jason and I both knew how critical our email communications were for maintaining business continuity. As a small business with less than a dozen revenue-producing employees, our position was tenuous and depended on the perception of always being present, available, and responsive. This problem had cut off our communications with our two largest revenue generators, representing over half of our active business, and with a contractor with which we were working on several proposals. We had to solve the problem, and fast. It seemed obvious to me that I should just break out my iPad and troubleshoot while navigating DC/Northern VA traffic. When Jason realized what I was doing, he simply cautioned, "Please don't kill yourself over this." At least I was able to justify not riding a bike to the office for another day.

Friday, October 26, 2012

Cloud Computing Dangers: Incident Detection

This posting is Part 2 of the Case Study in Cloud Computing Dangers.

At around Noon U.S. Eastern Daylight Time (EDT) on Wednesday, May 9, I forwarded a calendar invite from my corporate account to my VA address. The message included some important attachments that originated from a prime-contractor colleague. I also responded to several email messages from the same colleague, sending mail to both his corporate and his VA accounts. Everything that seemed to have worked fine a few minutes prior was about to blow up in my face.

A Case Study in Cloud Computing Dangers

"A cloud computing approach could save 50 to 67 percent of the lifecycle cost for a 1,000-server deployment." Kevin Jackson - Forbes.

It's not hard to understand why business executives are completely intoxicated by cloud computing. For the uninitiated, cloud computing essentially allows organizations to outsource just about any IT processing to a third party. If you need new servers, then you can just go to Amazon to quickly and cheaply procure new server capacity that's available immediately. Sick of managing your internal email system? Go to Microsoft to get Exchange email, calendaring, instant messaging, and SharePoint with the click of a button. Want access to enterprise-class back-office accounting and support systems? Check out Google Apps for Business and all of the add-ons that it makes available. An organization can get instant satisfaction by moving to the cloud while paying a small fraction of what it would cost to procure the equipment, software, and people to do it all internally.

Sounds great, right?  Look closer and you may not be so convinced.