Thursday, April 28, 2011


Best Practice
The details are too boring to recount.  Impossibly large number of records “exposed” due to human error.  Nothing new, same old.

The only reason to watch is to see how the impact plays out.  It is Texas Politics, after all, and the Lege is in session, and this could prove to be a mild distraction from birthers and budgeteers. 

The data loser in this instance is an elected official, with aspirations to higher office.  Ms. Combs was angling to grab one of the vacant seats when Lite Gov Dewhurst runs for US Senate.  So, there’s that.  I doubt many folks enter politics hedging against the risk of career flameout by batch job misconfiguration.  Time to update some campaign risk models.

The lawsuit loser in this instance has tapped into the type of outrage commonly expressed by commenters on newspaper websites - the "SOMEONEOTTAPAY tiny fist shaking, foot stamping" yadayada.  Sure, they wanna get to the bottom of this for the dignity of the victims.  With no damages, the victims will have a tough row to hoe.  Maybe they are doing discovery for attack ad quotes.

At about six minutes into her interview, we get the biggest loser.  Comptroller Combs says Gartner and Deloitte are on the case to advise on "best practices."  (It looks like Deloitte may be getting a small return on their campaign investment.)  This sort of reaction chafes me to no end, and is an assault on my dignity.  I might be wrong on this, but the evolving SOP for privacy incident response appears to be to spend money willy-nilly on whatever threat is foremost in the populace's mind, regardless of the proximal cause of the incident.  One company's reaction to some speed freaks carrying away a safe with a couple of DVDs of data was to air gap their production environment and embark on a FISMA compliance project.  This firehose approach appears to be designed to make the potential victims feel better, I guess, but it only enriches the best practitioners and "safe bet" consultants.  To me, it just seems a waste, and it decreases my confidence in the competence of the organization.

And, to quote the Comptroller, "oh my gosh, think of Sony... and think of your grocery store loyalty card."

Well, at least country music is alive and kicking every night south of Round Rock, Texas. (The sight of a youthful Dale Watson and the State Capitol restores a measure of my Texan dignity.  That, and Chicken Shit Bingo.)

Best Practices in Risk Management Image courtesy of KoryeLogan.

Monday, April 25, 2011

Up Yours

Nice metric courtesy of Grits - the costs of false alarms.  And the casualties found at the intersection of reliable metrics and public policy. To quote Grits:

But as [Former Dallas Police Chief] Kunkle says, this is an instance where tuff-on-crime politics interferes with good public policy and common sense. The small minority being subsidized by police responses to alarms are extremely vocal and well-organized by alarm companies, who have lists with contact info of concerned customers that would be the envy of any political consultant. Plus, those with alarms almost by definition are relatively wealthier - after all, they got an alarm because they have stuff to steal - and therefore also more politically influential. By contrast, the 86% of Dallasites without burglar alarms who're footing most of the bill are unorganized, unaware of the subsidy, and may not even perceive they have a dog in the fight.
The balance of this conflict is similar to those duked out in meeting rooms, with varied stakes and different arguments.
Maybe a similar "verified response" fee should be assessed on consultants or auditors who elevate low impact / low frequency risks up to the Board.

Or for the one who turned the risk management dashboard day glo.

Or fought the crisis you can't see.

(So RIP Poly Styrene, unless this is a false alarm.)

Tuesday, April 19, 2011

Audit Drips

I was catching up on the podcast backlog today. I listened for the first time to the Risk Hose, which had a meaty midsection on the internal auditing profession, and whether and how internal auditors assess, analyze and otherwise manage and misconstrue risk.
(A couple of caveats. I speak as an internal auditor, with a background in food service and deckhanding. I'm ISACA Platinum, which is more like Centrum Silver than American Express Gold, i.e., it is bestowed with age. I'm an autodidact when it comes to information risk analysis, but I'm trying to learn.)

Firstly, the standards. The Red Book, or more correctly, the International Professional Practices Framework, includes the following standard (2010.A1):

The internal audit activity’s plan of engagements must be based on a documented risk assessment, undertaken at least annually. The input of senior management and the board must be considered in this process.
So, every internal audit shop has to perform a risk assessment annually, and use it to plan which audits will be performed in the next year.
This type of risk assessment evaluates "audit risk," defined in Sawyer's Internal Auditing (from my raggedy 4th edition, Part 3 Scientific Methods* Chapter 8 "Risk Assessment") as the following:

Audit Risk = Inherent Risk x Control Risk x Detection Risk
A heavy dose of "professional judgment" (also known as "the gut") is used in this method.  The output of this assessment prioritizes the auditable units (chunks of business functions which make up the audit universe), which are then cranked through the cycle to maintain "coverage."  Purchasing on even years, Accounts Payable on odd, et cetera.  Areas with weak controls and lots of potential loss should probably float to the top.  This method is old fashioned even for the conservative internal audit profession, but it has the backing of some of the AICPA's more ancient Statements on Auditing Standards.  The resulting assessment is used internally for audit's planning purposes, and, from talking to my peers in industries without a regulatory mandate to perform risk assessment, it may be the only organization-wide assessment that gets performed.  The methods vary, as do the results.
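For the spreadsheet-averse, here is a minimal sketch of how that gut-scored model works in practice. Every unit name and every 1-to-5 score below is invented for illustration; real shops argue about these numbers for weeks.

```python
# Toy audit-risk model: score each auditable unit on a 1-5 "gut" scale
# for inherent, control, and detection risk, then rank the audit universe.
# All units and scores below are hypothetical.

auditable_units = {
    # unit: (inherent_risk, control_risk, detection_risk)
    "Purchasing":       (4, 3, 2),
    "Accounts Payable": (3, 4, 3),
    "Payroll":          (2, 2, 2),
    "IT Operations":    (5, 4, 4),
}

def audit_risk(inherent, control, detection):
    """Audit Risk = Inherent Risk x Control Risk x Detection Risk."""
    return inherent * control * detection

# Highest score floats to the top of next year's audit plan.
ranked = sorted(
    auditable_units.items(),
    key=lambda item: audit_risk(*item[1]),
    reverse=True,
)

for unit, scores in ranked:
    print(f"{unit}: {audit_risk(*scores)}")
```

The multiplication is the easy part; the fight is over who assigns the 4s and the 2s, which is where the "professional judgment" (and the politics) comes in.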

The recent revisions to the Red Book standards state that internal auditors "must evaluate the effectiveness and contribute to the improvement of risk management processes."  So a shop that follows the standards will be in the business of evaluating whoever is performing the "risk management" function, including "information systems."  Internal auditors can't manage risk, but they can help assess it.

 From my perspective, a lot of internal auditors have a lot of experience in an old fashioned style of risk assessment, and end up with a gut quantification exercise.  There may be some bet hedging, vindictiveness and four tons of politics involved in the process (see above as to who must have input into it), and, in the end, the board will get what it wants.  Quality and sophistication of boards will vary widely, and if they want red, yellow, and green heat maps, by gum they are going to get it.  If they want quant analysis, they'll get that too, especially if there is overlap between the Audit Committee and the Risk Committee.

Personally, risk assessment season is approaching for my shop, and, with Hubbard and FAIR in hand, I'm working with our CAE to put together at least some quantitative analysis.  Gotta start somewhere.  I'll get the blame regardless.

*I think I hear a head exploding somewhere.

Wednesday, October 6, 2010

The Professional

An interesting narrative, trapped unfortunately behind a pay wall, comes from the Chronicle of Higher Education - "Chapel Hill Researcher Fights Demotion After Security Breach"

A cancer researcher's database gets potentially pwnd (two years from incident to discovery), spurring the usual breach notification process.  Her bosses cut the researcher's pay and reduced her status from full professor to associate.  The justification was that she, as principal investigator, was responsible for the security of the personal data entrusted to her by the subjects of the study.

The meat from the article (emphasis added):

The provost also accused her of assigning server-security duties to an inexperienced staff member, who failed to install important patches and upgrades, and of not providing the staff member with the training needed. Ms. Yankaskas countered that the staff member, who has since left, had worked for the university's technology office and that the employee never submitted a formal request for additional training.
"I had an employee who I trusted who told me things were OK," she added. "I would have no way to get on the computer and tell if it was secure. Unless I assumed my employee was lying to me, I don't know what I could have done."
Working in the Public Interest
I believe that there is another option.  Some folks in charge of security are not liars, but are incompetent.  And, yes, it is hard to tell them apart.

If it was money that was stolen, and someone said "I have no way of telling if the books were correct.  I trusted the accountant.  He was an experienced bank teller," what would be the response?  Why didn't you hire a forkin' CPA?  CPAs have professional knowledge and ethical obligations, and if they fail to meet them, you can have their license pulled.

Not so with security folks.  Why is it acceptable for others to manhandle your personal, private data more cavalierly than your accounting records?

I'm tempted to start my rant on certification, pseudo-science and "computer forensic professionals," but I'll save it for the next post.

Wednesday, September 22, 2010

Risk a Harm?

Interesting post and comments on privacy risk from Solove at Concurring Opinions.  Despite being raised by a pack of feral solicitors, I can't claim to understand all the legal theories involved.  I'm attracted to the liquidated damages idea for a number of reasons, including the ability to build a reserve or get underwriting to mitigate potential incidents.  

Harms at Risk

On the other hand, this is where the disclosure rules suck.  For example, an organization loses track of a hunk of physical media containing a couple hundred thousand records of personally identifiable information (but not financial information - no bank or credit card account numbers).  In this example, there is a very high probability that the media was subsequently destroyed.  Are the individuals identified on the media well served by being notified?

Imagine there was a method to calculate the likelihood of financial damage to the individual due to the loss of the media.  Let's imagine that there is less than a 1% chance that the information will be used in a crime in the next 2 years, and that the chance decreases by half every year that follows.  However, if it is used in a crime, it is likely that the crime will be of significant impact - a genuine fraud involving false credentials that would take more than $100,000 for the victim to unravel.  Is notifying the victim of the risk, and making him feel uneasy (since humans perceive risk differently than equations do), responsible?
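A back-of-the-envelope sketch of what those made-up numbers imply, using 1% over the first two years, halving annually thereafter, and a flat $100,000 impact (all figures are the hypothetical ones above, not real actuarial data):

```python
# Toy expected-loss calculation for the lost-media scenario.
# All figures are the hypothetical ones from the text.

impact = 100_000          # cost to the victim if the fraud occurs
p_first_two_years = 0.01  # chance the data is used in a crime in years 1-2

# Probability halves each subsequent year; sum the geometric tail.
p_total = p_first_two_years
p_year = p_first_two_years / 2
for year in range(3, 13):  # ten more years, after which the tail is negligible
    p_total += p_year
    p_year /= 2

expected_loss = p_total * impact
print(f"lifetime probability ~ {p_total:.4f}, expected loss ~ ${expected_loss:,.0f}")
```

The halving tail converges toward another 1%, so the lifetime probability tops out near 2% and the expected loss near $2,000 per individual - which is exactly the kind of number that sits uncomfortably next to the certain anxiety a notification letter produces.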

Or is this just an excuse for me to illustrate a post with a picture of Harms at risk?  

Friday, August 6, 2010

DBR600RR - The Verizoning

I admit I genuinely enjoyed the latest Data Breach Report courtesy the stalwart boffins at Verizon Business.   My personal benchmark of genuineness is derived from my ability to almost immediately put it to use in my job.    Nonetheless, I'd like to see the data hashed up one more way. 

The following quotes from page 14 -

"Though we do not assert that the full impact of a breach is limited to the number of records compromised, it is a measurable indicator of it."

“There is not a linear relationship between frequency and impact; harm done by external agents far outweighs that done by insiders and partners. This is true for Verizon and for the USSS and true for this year and in years past  … We could provide commentary to Figure 9, but what could it possibly add? If a chart in this report speaks with more clarity and finality we aren’t sure what it is.”
I’ll tell you what you can add, 'cause I’m that way.  And the suggestion comes from the assumption that records = impact.  I'm groovy with the assumption that the number of records compromised is a measurable indicator for the top three categories of records listed in Fig. 31 on page 41 (regulated data that requires breach disclosure).  However, it seems that an incident involving the theft of proprietary source code, non-public financial statements, trade secrets, or whatever else comes under the umbrella of "data breach," is counted as a single record, just as one credit card transaction record counts as one record.

I'd like to see the PCI DSS and PII/PHI database breaches broken out from the other (information property, trade secret, national security) breaches.  Looking at the data where they are detailed (p 41), there are not a whole lot of them.  Based on the statement on page 18, viz:
”It is worth noting that while executives and upper management were not responsible for many breaches, IP and other sensitive corporate information was usually the intended target when they were.”  
NPI/PII/PHI mandatory disclosure type breaches may be characterized by a different set of threats, impacts, and frequencies, and may require a differing set of corresponding controls than the breaches associated with occupational fraud.  Yeah, I said "fraud," not "insider."  And I'd like to keep on saying "fraud" until I'm comfortable that the internal controls over non-regulated data are targeted at management override rather than external organized crime.  Is organized crime recruiting from the sysadmins and call centers?  Or is the insider a fraud (corruption/breach of fiduciary duty) issue?  A little help and we'll all be safer.

(I personally believe in Solove's assertion that management should have a fiduciary duty to the privacy of data, but from what I've seen, we ain't there yet, and it is still all about compliance.)

On a side note, the other category of data - authentication credentials - interests me.  Do bad guys just stop at root?  Or do they start at root?  Do the executives/upper management types rely on their organizational credentials, or do they use their authority to con an underling to hand them over?  I've got the anecdotes, but I'd like the data.

Some other comments:
Figure 27 (p38) – People?  A person is a compromised asset and contains records?  I’m not sure I follow the taxonomy (or is it  taxidermy?) here.
P 40 and 41 – Thanks!  These charts help quite a bit in understanding the data.
Fig. 35 (p46) is not only hard on my eyes, but on my brain.  Why is the scale broken into non-proportional time units?  Does the data naturally break down this way?  A continuous timeline would give me more confidence in how stuff happens.  It tapers off dramatically since each “timespan” is considerably bigger than the previous one.  My brain could handle a logarithmic scale, but 60 / 12 / 7 / 4 / 12 / (sideways eight) is kinda hard.  I’m a simple country auditor, dadgummit.  The accompanying text
“In over 60% of breaches investigated in 2009, it took days or longer for the attacker to successfully compromise data.”
is not fully illustrated in the graph (to my humble eyes).  Also, it could be more informative.  (e. to the extreme g., my kitchen remodel is taking "days or longer" and yet, three months later, the fridge is in the living room.  But my bourbon is appropriately iced!  (This is a footnote, really, rather than a parenthetical, so there you go.))

Good thing the follow-up on page 50 struck me like a diamond, a diamond bullet right through my forehead:
Internal audit methods—both financial and technical—are the bright spot in all of this.
Yeah! Give the auditor some!  

(Image of Roger Lee Hayden's Moto2 Moriwaki Amerigasm courtesy Motorcycle News, American Honda and USA! USA! USA! because a) it is not wholly unlike a CBR600RR and CBR sounds like DBR, b) all information security can be seen as a metaphor for motorcycle roadracing (technology, engineering, empiricism, piloted by moody irrational egomaniacs who are only in it for the birds & booze) and c) it looks totally awesome!  Porkchop better clean the clock of some euro trash come Indy, what with big ol' #34 plastered on the fairing)

Wednesday, February 24, 2010

Live Twice

Chandler at the New School made me collect, collate and sort my thoughts on the whole recall issue.  Although what follows is more like bend, fold and mutilate.

The greatest risk Toyotas pose to me is that I get drowsy rolling down the highway with nothing more interesting to divert me than a continual rivulet of pale metallic four-door boredom.
Not incongruent with their exterior aesthetics, my personal reaction to the Toyotathon of Death falls into two barrels.
  1. Risk of a correctly engineered and manufactured product v. risk of an incorrectly engineered and faulty product.  A base assumption in driving a recently produced auto is that, not only will it advance the spark automatically and not require a crank to start, but also that the accelerator will not get stuck open.  If Toyota had labeled one of their transportation appliances “May very rarely yet randomly accelerate,” prudent drivers would familiarize themselves with the emergency stopping procedures.  However, Toyota did not disclose this information until much later, so the information was not available for calculation into a driving risk scenario.  Drivers were operating under a “Toyota quality” assumption.  Would the driver of a Trabant exercise the same risk equation as a Prius or Highlander driver?
  2. The Mediation of the Road.  The current Toyota passenger car philosophy appears to be a closer cousin to KitchenAid than the TF109.  This transportation appliance paradigm isolates the user (no longer a driver) from the grit, grime and smells of the road, substituting an ego-coddling display of eco-righteousness and pretty maps.  How could the impolite fangs of risk-driven adrenaline ever intrude into the quiet gentle rocking motions of hybrid power in a sarcophagus of LED-illuminated soft plastics?  The white-knuckling pilot of the beater Pinto or the hyper-vigilant motorcyclist knows no such peace.  They know the road is a dangerous place, and that they are engaged in high risk behavior.  Unintended acceleration is one of myriad annihilation scenarios coursing ten thousand times a second through their oxygen-deprived neurons.  Driving for them is like conducting transactions on the internet.
Tangentially, yet incongruously, I once had a notion (but with a bit of backing...) that the ultimate design for a website used to conduct high dollar Internet transactions would be modeled after a mid-90s "adult" entertainment website – HTTP Auth pop-up, sloppy HotDog generated HTML, broken icon indicating missing plug-ins, probably registered at .biz, .info, .ru or .cx.  The customers would perceive the risk and exercise due caution, such as verifying the SSL certificate, maybe an out-of-band telephone call to the institution, and routine changes of password for every session.  The site could be state-of-the-art secure (y’know, SSL + firewall), but the appearance of danger and perception of risk would make it Yet Still Even More So.  Of course, the crappiness would have to have a periodic refresh just to keep the users’ adrenaline up.

Toyota photo courtesy Wikimedia Commons.