Earlier this month, Microsoft disclosed details of Exchange Server 2010 Service Pack 1 (SP1), slated to ship later this year. Among the various fixes and improvements outlined in the announcement, Microsoft’s plans for archiving and eDiscovery enhancements caught my attention. Earlier this year, I wrote about Microsoft “dipping its toe” into these waters with the initial release of Exchange 2010, and I am encouraged that the vendor is taking the market’s message archiving needs seriously with some promising, incremental steps.
Some of the key message archiving advances planned for SP1 are:
Storage flexibility for a user’s Personal Archive. With the initial release of Exchange 2010, Personal Archives can only be stored in the same mailbox database as the original mailbox. SP1 will introduce the ability to provision a user's Personal Archive to a different mailbox database from their primary mailbox, supporting tiered storage options for archived mail.
Support for access to a user's Personal Archive with Outlook 2007. In the currently shipping version of Exchange 2010, organizations need either Outlook 2010 or OWA to view archived content. The added client flexibility will be a plus for many, but keep an eye on timing: the announcement says only that this will arrive in the “SP1 timeframe” (allowing for some vendor wiggle room). Note also that Personal Archive functionality in Exchange 2010 currently requires Enterprise CALs.
Last week I published two research reports on the hottest topic in PCI: Tokenization and Transaction Encryption. Part 1 was an introduction to the topic, and Part 2 provided action items for companies to consider as they evaluate these technologies. Respected security blogger Martin McKeay commented on Part 1. Serendipitously, Martin was also in Dallas (where I live) last week, and we got an opportunity to chat in person about the report and other security topics.
Martin’s post highlighted several issues that deserve a response. He felt that I “glossed over several important points people who are considering either technology need to be aware of.” Let me review those items:
Comment: “This is one form of tokenization, but it completely ignores another form of tokenization that’s been on the rise for several years; internal tokenization by the merchant with a (hopefully) highly secure database that acts as a central repository for the merchant’s cardholder data, while the remainder of the card flow stays the same as it is now.”
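Concretely, the merchant-side model Martin describes boils down to swapping the PAN for a surrogate value at the point of capture and keeping the real number in a single hardened store. A minimal Python sketch of the idea follows; the in-memory dict stands in for the "(hopefully) highly secure database," and every name here is my own illustration, not any vendor's API:

```python
import secrets

class TokenVault:
    """Toy merchant-side token vault: maps random surrogate tokens to PANs.
    A real vault would be a hardened, access-controlled database."""

    def __init__(self):
        self._store = {}  # token -> PAN; the only place the real PAN lives

    def tokenize(self, pan: str) -> str:
        # Random digits of the same length, preserving the last 4 so
        # downstream systems (receipts, lookups) keep working unchanged.
        token = "".join(str(secrets.randbelow(10))
                        for _ in range(len(pan) - 4)) + pan[-4:]
        self._store[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        # Only privileged back-office systems should ever call this.
        return self._store[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")
print(token)  # random digits ending in 1111; the rest of the card flow sees only this
```

The point of the design is exactly what Martin notes: the remainder of the card flow stays the same, because the token is shaped like a PAN, while only the vault (and whoever can query it) ever touches real cardholder data.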
This is my first post as the new Research Director for the Security and Risk team here at Forrester. During my first quarter as RD, I spent a lot of time listening to our clients and working with the analysts and researchers on my team to create a research agenda for the rest of the year that will help our clients tackle their toughest challenges. It was a busy Q1 for the team. We hosted our Security Forum in London, fielded more than 443 end client inquiries, completed more than 18 research reports, and delivered numerous custom consulting engagements.
In the first quarter of 2010, clients were still struggling with the security ramifications of increased outsourcing, cloud computing, consumer devices, and social networking. These trends have created a shift in data and device ownership that is usurping traditional IT control and eroding traditional security controls and protections.
We’re still dealing with this shift in 2010 — there’s no easy fix. This year there is a realization that the only way that the Security Organization can stay one step ahead of whatever business or technology shift happens next is to transform itself from a silo of technical expertise that is reactive and operationally focused to one that is focused on proactive information risk management. This requires a reexamination of the security program itself (strategy, policy, roles, skills, success metrics, etc.), its security processes, and its security architecture. In short, it means taking a step back and looking at the big picture before evaluating and deploying the next point protection product. Not surprisingly, our five most-read docs from January 1, 2010, to today have less to do with specific security technologies:
I was able to catch pieces of live testimony in front of the House Financial Services Committee yesterday on the Lehman Brothers collapse (covered via live blog by the Wall Street Journal). It was interesting to watch former Lehman head Richard Fuld reluctantly attempt to explain to an understandably skeptical audience, “We were risk averse,” in the period leading up to the company’s collapse.
Meanwhile, Goldman Sachs is back in the spotlight after the SEC leveled charges of fraud against the company last week related to alleged misstatements and omissions in the marketing of specific financial products. While this seems like a relatively small initial shot at the large financial firms, the SEC appears to be reasserting its authority after a series of embarrassing stories about failures of oversight, including Madoff, Stanford, and now Lehman.
So what does all this mean for governance, risk, and compliance professionals?
It’s hard to tell what might come of the fraud charges against Goldman Sachs, but if anything, this appears to build a case for more rigorous compliance policies and manual oversight. It’s hard to see how automated controls could have helped here, but the case could involve substantial e-discovery to determine how certain marketing decisions were made.
John Markoff’s article yesterday in The New York Times reveals that Google’s authentication system, code-named “Gaia,” was one of the targets of the attack.
The target wasn’t Google users’ passwords, but the authentication system itself (Markoff refers to it as a “single sign-on” system; I’m reluctant to do that, since my own experience shows it to be a rather confusing mesh of both interconnected and disconnected authenticators… seems like Google could do a lot more to help users link and manage their IDs under one master account of their choosing). Why not the passwords? It’s far more valuable to gain access to the code and learn the intricacies – and weaknesses – of the system itself, rather than gain access to a few (or even a few thousand) accounts. My own theory is that this is why Adobe and various antimalware companies were targeted by the same network of attacks: the former, to find more weaknesses in Flash and Acrobat to exploit, and the latter, to learn how to bypass security mechanisms designed to defeat such attacks.
Markoff has several other excellent articles on the cyber attacks made public by Google in January, most notably this one.
One of my favorite jokes about security people is that you can divide them into two types: Builders and Breakers. Builders like to make things, like web applications or identity management infrastructures. Breakers like to find holes in things. They tinker and hack. Usually, you gravitate towards one skillset or the other; it is extremely rare to find someone who does both well. It’s like running: you either sprint, or run marathons.
So it was with great curiosity that I read about the announcement of the Qubes OS by Invisible Things’ Joanna Rutkowska. Joanna is best known as the bête noire of the virtualization world; her “Blue Pill” hypervisor-breaking software was widely noted, even by us. Her Black Hat speeches are legend. She is clearly in the Breaker camp, and one of the best ones too.
Qubes is a new operating system based on Linux and Xen that divides up the operating system into multiple isolated VMs that work together. It allows arbitrary portions of the operating system, such as the web browser, to run in one VM while other portions run in other VMs. Certain functions, like networking and storage, run in their own VMs. The VMs share a GUI (again, compartmentalized from the other VMs) and can exchange files. I won’t attempt to describe it in detail — the architecture document does that well enough:
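The compartmentalization idea — untrusted components each in their own isolated domain, with only a narrow, explicit channel back to the trusted side — can be illustrated very loosely at the process level in Python. To be clear, this is a conceptual sketch of the principle, not how Qubes works: Qubes isolates with Xen VMs, not OS processes, and all names below are mine:

```python
from multiprocessing import Process, Pipe

def render(html: str) -> str:
    # Stand-in for real work done inside the untrusted "browser" domain.
    return html.upper()

def untrusted_render(conn, html):
    # The untrusted domain: if this worker is compromised or crashes,
    # the trusted parent only ever sees what crosses the pipe.
    try:
        conn.send(("ok", render(html)))
    except Exception as exc:
        conn.send(("error", str(exc)))
    finally:
        conn.close()

if __name__ == "__main__":
    parent_end, child_end = Pipe()
    worker = Process(target=untrusted_render, args=(child_end, "<b>hello</b>"))
    worker.start()
    status, payload = parent_end.recv()  # the narrow, explicit channel
    worker.join()
    print(status, payload)
```

The design point Qubes pushes much further is that the channel between domains is the entire attack surface: the less that can cross it, and the simpler its format, the less a compromised domain can do to the rest of the system.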
Earlier this week SC Magazine published my comments on mobile malware: why I believe there will not be a mobile malware pandemic any time soon, and probably not ever. My reply exceeded their length limit, so some of the context was lost. Here are my comments in their entirety.
Security software vendors like to bleat about how mobile phones will be the next big target for malware writers. There’s a sense of inevitability about this, and the story goes like this: Mobile operating systems are becoming a lot like PCs. PCs have lots of malware. Therefore smartphones will have lots of malware — any day now. Security vendors are hoping this will become true so they can sell mobile security software. This idea has at least three problems:
Even though the iPad is barely birthed, there is already a push to provide payment applications for the device. It's time to pull the emergency brake on this trend. Are these applications PA-DSS certified? Do they have swipe devices with crypto hardware built in? Has the PIN Entry Device been rigorously tested, and does it meet all the PIN Transaction Security requirements? There are so many things consumers should know about the security of these new payment methods *before* they allow their credit card to be captured by an iPad or iPhone. Is the card's Primary Account Number (PAN) encrypted at the moment it is swiped by the device? Does the device establish an encrypted tunnel to transport the transaction to the payment gateway? Does the iPad store the PAN? Is that storage encrypted or unencrypted? Does the processor support a tokenization scheme to keep the iPad out of PCI scope? Is the payment app the only thing running on the iPad?
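On at least one of these questions the PCI DSS is explicit: wherever the full PAN would be displayed, it must be masked so that at most the first six and last four digits are visible. A minimal sketch of that masking rule in Python (the function name and length check are my own illustration):

```python
def mask_pan(pan: str) -> str:
    """Mask a PAN for display per the PCI DSS masking rule:
    at most the first 6 and last 4 digits visible."""
    digits = pan.replace(" ", "")
    if len(digits) < 13:  # payment card PANs are 13-19 digits
        raise ValueError("not a plausible PAN")
    return digits[:6] + "*" * (len(digits) - 10) + digits[-4:]

print(mask_pan("4111 1111 1111 1111"))  # → 411111******1111
```

Masking is only a display control, of course; it says nothing about whether the full PAN is being stored or transmitted in the clear elsewhere on the device, which is exactly why the rest of the questions above still need answers.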