On May 12th, 2008, VMware announced that nine storage replication vendors have tested and certified their technology with VMware's long-awaited Site Recovery Manager (SRM) offering. SRM is an important step forward in disaster recovery (DR) preparedness because it automates the process of restarting virtual machines (VMs) at an alternate data center. Of course, your data and your VM configuration files must be present at the alternate site, hence the necessary integration with replication vendors. SRM not only automates the restart of VMs at an alternate data center, it can automate other aspects of DR as well. For example, it can shut down some VMs before it recovers others, which is useful if you are using the redundant infrastructure at the alternate data center for other workloads such as application development and testing (a very common scenario). You can also integrate scripts for other tasks and insert checkpoints where a manual procedure is required. When you recover an application to an alternate site, especially if your redundant infrastructure supports other workloads, you have to think about how you will repurpose capacity from secondary to production workloads. You also have to think about the entire ecosystem, such as network and storage settings, not just the recovery of individual VMs.
Essentially, VMware wants you to replace your manual DR runbook with the automated recovery plans in SRM. It might not completely replace your runbook, but it can automate enough of it that DR service providers such as SunGard are productizing new service offerings based on SRM.
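Conceptually, a recovery plan turns the runbook into an ordered, partially automated sequence. Here's a minimal sketch of the idea; the step names, VM names, and script path below are hypothetical illustrations, not SRM's actual API or plan format:

```python
# Illustrative model of a DR recovery plan, in the spirit of VMware SRM.
# All names here are hypothetical -- this is not SRM's real API.

def shutdown_vm(name):
    print(f"shutting down {name}")

def power_on_vm(name):
    print(f"powering on {name}")

def run_script(path):
    print(f"running {path}")

def checkpoint(message):
    # In a real tool this pauses for operator confirmation.
    print(f"CHECKPOINT (manual step): {message}")

recovery_plan = [
    ("shutdown", "devtest-build01"),       # free capacity used by dev/test
    ("shutdown", "devtest-build02"),
    ("script",   "/dr/remap_network.sh"),  # fix up IPs/VLANs at the DR site
    ("power_on", "prod-db01"),             # highest-priority tier first
    ("checkpoint", "verify database integrity before app tier"),
    ("power_on", "prod-app01"),
    ("power_on", "prod-web01"),
]

def execute(plan):
    actions = {"shutdown": shutdown_vm, "power_on": power_on_vm,
               "script": run_script, "checkpoint": checkpoint}
    for action, arg in plan:
        actions[action](arg)

execute(recovery_plan)
```

The ordering is the point: lower-priority workloads release resources before the production tiers come up, and manual checkpoints are first-class steps in the plan rather than notes in a binder.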
Everyone wants to make their data centers more efficient and gain recognition for their efforts, but we're lacking the benchmarks to shoot for. Well, here's your chance to help change that. On March 20th, the U.S. Environmental Protection Agency (EPA) kicked off a data collection process to help create Energy Star™ ratings for data centers. Energy Star, the best-known energy efficiency identifier, is respected as a mark of credibility for products and services that deliver superior energy efficiency. While Energy Star is mainly a consumer mark, the EPA recently published a draft standard for servers, its first serious foray into providing enterprise product and service guidance. Extending Energy Star to a corporate data center consumed only by your own company may not have customer impact, but it has corporate brand value that matters to C-level executives. It will also have differentiating value when choosing outsourced service providers.
Last week, I delivered a presentation about the recent report Web3D: The Next Major Internet Wave at the vBusiness Expo in Second Life. I'll share some of my experiences and observations, as I'm sure that during the coming year many of you will be invited to present at or attend virtual conferences and meetings, if you haven't already. These tips may prove helpful.
Picture this. You, the application developer, are in a big conference room. On your left is your boss. On your right are enterprise architects. Across from you are the business analysts and project managers. In the hallway is the businessperson on his "crackberry". Why is everyone gathered here? To discuss the next important application development initiative that the business needs to drive revenue, stay competitive, and be more efficient.
The number of pure-play vendors in user account provisioning decreased on April 7, 2008, when Hitachi announced that it had acquired M-Tech Information Technology and renamed it Hitachi ID. Although Hitachi lacks an identity and access management (IAM) pedigree, this move could prove important for several reasons: 1) using IAM for provisioning of physical and hardware resources; 2) extending enterprise role definitions to previously uncharted verticals and cultures; 3) evangelizing user account provisioning and IAM in Japan and other APAC regions; and 4) Hitachi becoming a major player in Japanese SOX (JSOX) implementations.
Needless to say, all of the above will hinge on Hitachi's ability to retain and grow M-Tech IT's existing customer base in North America and Europe, and on its ability to compete against EMC's sales of Courion and RSA products. How Hitachi will build an access and adaptive access management (Web and desktop) portfolio to complement its identity management and provisioning portfolio also remains to be seen.
Overarching causes described in the report are not surprising: control failures, an overly aggressive focus on short-term growth, and excessive risk taking are among the high-level issues addressed. The report also contains scores of more detailed explanations of control failures in more than 20 different categories. Specific problems on the list include:
On April 18th, IBM announced its intent to acquire virtual tape library (VTL) and deduplication vendor Diligent Technologies. For IBM, Diligent is a good fit: the company offers both mainframe and open systems virtual tape libraries, and it is a pioneer of deduplication. However, IBM already offers a market-leading mainframe VTL based on its own intellectual property and an open systems VTL based on FalconStor technology (although the open systems VTL has very limited adoption), so there is also a lot of overlap. Because Diligent is a software solution, IBM can integrate it with any of its storage systems and bring new VTLs to market relatively quickly. It's very likely that IBM will in fact pursue this route so it can bring an inline deduplicating VTL to market as quickly as possible.
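The core idea behind deduplication is simple: split the backup stream into chunks, fingerprint each chunk, and store a chunk only the first time it is seen. A minimal sketch of that idea follows; fixed-size chunks and SHA-256 fingerprints are my illustrative choices here, not Diligent's actual algorithm:

```python
import hashlib

def deduplicate(stream: bytes, chunk_size: int = 4):
    """Split `stream` into fixed-size chunks, store each unique chunk
    once, and return the store plus the recipe to rebuild the stream."""
    store = {}    # fingerprint -> chunk bytes (each unique chunk kept once)
    recipe = []   # ordered fingerprints needed to reconstruct the stream
    for i in range(0, len(stream), chunk_size):
        chunk = stream[i:i + chunk_size]
        fp = hashlib.sha256(chunk).hexdigest()
        store.setdefault(fp, chunk)   # only new chunks consume space
        recipe.append(fp)
    return store, recipe

def rebuild(store, recipe):
    return b"".join(store[fp] for fp in recipe)

data = b"ABCDABCDABCDEFGH"            # heavy repetition, as in backup data
store, recipe = deduplicate(data)
print(len(recipe), "chunks referenced,", len(store), "stored")
assert rebuild(store, recipe) == data
```

Backup streams are full of repeated data (the same files backed up night after night), which is why an inline deduplicating VTL can cut stored capacity so dramatically; "inline" means the fingerprint lookup happens as data arrives, before anything is written.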
Ever since I was an investment banker at JPMorgan supporting its Software M&A team, I have been predicting that the future of products and services in enterprise applications is inseparable. A significant portion of our team's M&A advice to product vendors was to beef up their services portfolios, and vice versa. These were my thoughts then, and they remain valid today:
CXO engagement. It's much easier to approach a C-level executive during a strategy initiative, which traditionally is the realm of strategic advisory and management consulting firms. The earlier you get your foot in the door with a CXO, the higher the chances he or she will also consider your products; conversely, the ability to influence downstream decisions for procuring products and services decreases in the later phases of any initiative.
Successful execution. Strong PMO (Project Management Office) capabilities such as methodology, certifications, and track record, and ultimately successful product/project delivery, are key to application vendor success.
Service-oriented architecture (SOA). Large enterprise IT shops, convinced that no single off-the-shelf solution suite is ever good enough for them, are seriously considering component-based (services) architectures, which is pushing vendors into dynamic (also known as composite) application middleware and services to avoid marginalization.
Today Google and Salesforce.com announced another step in their ongoing flirtatious relationship: Salesforce.com will now bundle Google business applications into its online CRM offering, and it will also begin to distribute Google applications backed by Salesforce support. It's always interesting when these two make an announcement, for two reasons. First, they are both 100% committed to cloud computing, and they think about the future of the industry in very similar terms. Second, it is fundamentally interesting to conjecture about the potential of a Salesforce acquisition. Note the rumor mill cranking up on this topic a few weeks ago when Oracle arranged a $2B line of credit.
Now, Marc Benioff has stated early, often, and loudly that Salesforce.com is not an acquisition target and has every intention of becoming the next major software infrastructure vendor. Fair enough. Salesforce.com has done all the right things to do just that: it has invested heavily in infrastructure and built a reputation that represents a significant barrier to entry for anyone who wants to horn in on its territory. Salesforce.com has a significant history of securely and reliably delivering mission-critical enterprise applications in the cloud. Raise your hand if you can make that claim. Not a lot of hands.
On April 10, 2008, IBM announced its intent to acquire FilesX, a small startup that offers server-based replication and continuous data protection (CDP) technology. The acquired technology will become part of the Tivoli Storage Manager (TSM) family of products.
This acquisition will help IBM Tivoli fill a gap in its current portfolio of data protection offerings. The vendor currently offers Tivoli Storage Manager (TSM), one of the leading enterprise-class backup applications, and Tivoli Continuous Data Protection for Files, a product mostly used to protect PCs. In addition to traditional backup to tape or disk, TSM can also manage Microsoft Volume Shadow Copy Service (VSS) snapshots and IBM's own storage-based snapshot technology in support of instant restore or snapshot-assisted backup. But the company didn't really have an offering for customers who wanted something better than backup but not as expensive as storage-based replication; this is where FilesX comes in. With FilesX, IBM can now address the recovery requirements of small enterprises that can't afford storage-based replication. It can also meet the recovery requirements of large enterprises that want to protect more servers within the company, including servers at remote offices, with a more affordable replication offering.
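What makes continuous data protection "better than backup" is that it journals every write with a timestamp, so a volume can be rolled back to an arbitrary point in time rather than only to the last scheduled backup. A toy sketch of that idea, purely my illustration of the general CDP concept and not how FilesX implements its journal:

```python
# Toy continuous-data-protection journal: every write is recorded with a
# timestamp, so the volume can be reconstructed as of any moment in time.
# Illustrative of CDP in general -- not FilesX's actual design.

class CDPVolume:
    def __init__(self):
        self.journal = []  # list of (timestamp, block_number, data)

    def write(self, ts, block, data):
        self.journal.append((ts, block, data))

    def as_of(self, ts):
        """Replay journaled writes up to `ts` to rebuild the volume state."""
        state = {}
        for when, block, data in self.journal:
            if when <= ts:
                state[block] = data
        return state

vol = CDPVolume()
vol.write(ts=100, block=0, data="v1")
vol.write(ts=200, block=0, data="v2")    # overwrite -- perhaps a corruption
vol.write(ts=200, block=1, data="logs")

print(vol.as_of(150))  # state before the overwrite: block 0 still holds "v1"
print(vol.as_of(250))  # current state: both writes at ts=200 applied
```

Contrast this with nightly backup, where everything written since the last job is lost, and with storage-based replication, which needs matching arrays at both sites; a server-based journal like this sits in between on both cost and recovery granularity.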