This week, Amazon announced a new storage offering within Amazon Web Services (AWS) called Glacier. Aptly named, it’s intended to be vast and slow moving, with a cheap price tag to match. At a fraction of the cost of Amazon’s online storage offerings, EBS and S3, Glacier will cost you $0.01 per GB per month, compared with roughly $0.05 to $0.13 per GB per month for the higher-performance offerings, depending on capacity stored. Restores from Glacier are costly by design; the service is intended for data that you’re not likely to access frequently. Used for the right types of data, it is a low-cost way to park stale data for long periods of time.
Analyzing the cost implications: it would cost you all of $120 to store a TB of data for a year, provided you don’t need to access it during that time. Ten years would cost you $1,200, and 100 years would cost you $12,000. Sure, there would be upcharges if and when you access the data, but the value of being able to get back the data you need within a few hours, years after you archived it, is tremendous. The data durability guarantee is 11 nines: for each piece of data, Amazon commits to 99.999999999% durability, included in the base cost to archive it, which is about as close to certainty as you can get in any contract.
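The arithmetic above is simple enough to sketch. Here is a minimal back-of-the-envelope calculation, assuming the announced $0.01 per GB per month rate holds flat, treating 1 TB as 1,000 GB, and ignoring retrieval and transfer fees entirely:

```python
# Back-of-the-envelope Glacier storage cost. The flat rate and the
# 1 TB = 1,000 GB convention are assumptions for illustration only;
# retrieval/transfer upcharges are deliberately excluded.
GLACIER_RATE_PER_GB_MONTH = 0.01  # dollars, from the announcement

def storage_cost(gb: float, months: int,
                 rate: float = GLACIER_RATE_PER_GB_MONTH) -> float:
    """Dollars to keep `gb` gigabytes archived for `months` months."""
    return gb * months * rate

for years in (1, 10, 100):
    print(f"1 TB for {years:>3} years: ${storage_cost(1000, years * 12):,.2f}")
```

With 1 TB taken as 1,000 GB, the loop reproduces the $120, $1,200, and $12,000 figures cited above.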
What data do you trust? Increasingly, business stakeholders and data scientists trust the information hidden in the bowels of big data. Yet the way that data is mined mostly circumvents existing data governance and data architecture, because of the speed of insight required and because it supports data discovery over repeatable reporting.
The key to this challenge is a data quality reboot: rethink what matters, and rethink data governance.
Part 1 of our Data Quality Reboot Series is to rethink master data management (MDM) in a big data world.
Current thinking: Master data as a single data entity. A common theme I hear from clients is that master data is about the linked data elements for a single record: duplication and variation are eliminated to drive consistency and uniqueness. Master data in the current thinking represents a defined, named entity (customer, supplier, product, etc.). This is a very static view of master data, and it does not account for the various dimensions required by what matters within a particular use case. We typically see this approach tied tightly to an application (customer relationship management, enterprise resource planning) for a particular business unit (marketing, finance, product management, etc.). It may have been the entry point for MDM initiatives, and it allowed for smaller-scope tangible wins. But it is difficult to expand that master data to other processes, analyses, and distribution points. Master data as a static entity only takes you so far, whether or not big data is part of the discussion.
In May I wrote about Infosys’ visa woes. Yesterday, an Alabama judge ruled in favor of Infosys in the first of the visa-related whistleblower lawsuits. It is important to note that this lawsuit was not about whether Infosys violated any visa laws; it was about whether Infosys retaliated against the plaintiff, Jay Palmer, for reporting visa misuse to executives at Infosys. The judge, Myron H. Thompson, found that although Palmer claimed he was mistreated and abused after he filed the internal whistleblower claim, Palmer was an at-will employee and, under Alabama law, had very few employee rights. In his decision, the judge referred to an Alabama Supreme Court decision that found, "Absent a contract providing otherwise, an employee may be demoted, denied a promotion, or otherwise adversely treated for any reason, good or bad, or even for no reason at all."
He went on to say, "Without question, the alleged electronic and telephonic threats are deeply troubling. Indeed, an argument could be made that such threats against whistleblowers, in particular, should be illegal. The issue before the court, however, is not whether Alabama should make these alleged wrongs actionable, but whether they are, in fact, illegal under state law. This court cannot rewrite state law."
Many of the marquee open data programs are in the big cities. Think New York City and its NYC Big Apps contests, or Chicago, London, Barcelona, or Rio de Janeiro. But smaller cities are sitting on public data too. They receive requests for information from their constituents, and their constituents expect new applications and services. Not to mention that cities of all sizes are responding to pressure for greater openness and transparency. These are a few of the reasons why the City of Palo Alto recently launched its open data community site. According to Jonathan Reichental, the CIO of Palo Alto, “It is more common that information is public than not.” And, therefore, why not make it easier for citizens to access?
Palo Alto – with a population of about 65,000, located between San Francisco and San Jose, California, and known as the home of Stanford University and the “birthplace of the Silicon Valley” – is a prosperous, tech-savvy city. But from an IT perspective, the city administration had been working in the past, until about eight months ago, when a new CIO came on board. Jonathan Reichental is the “first cabinet-level CIO” of Palo Alto. IT had historically been an administrative division housed with legal, HR, and finance. When the previous head of IT retired, the city manager decided to elevate the status of IT and drive more strategic use of technology within the organization. One of the first initiatives Reichental launched was open data.
Unfortunately, I don’t often hear “strategy” and “IT service management (ITSM)” in the same sentence, unless of course someone is maligning the ITIL 2011 Service Strategy book or an organization is justifying a significant investment in a new ITSM tool (to me, too often the breeding ground for failed aspirations). Alternatively, we often talk about, and are consumed by, tactical ITSM issues and our tactical responses. So where, and what, is your ITSM strategy? And where is your ITSM strategic plan?
If you have answers to these questions, you probably don’t need to read this blog, so feel free to choose another. If you don’t, don’t you think you should? I’ve stolen some words from my colleague Jean-Pierre Garbani to get you thinking.
What’s your strategy for ITSM strategy?
I’m not going to answer this – I just thought it a funny question. Better starter questions are probably: “What do I mean by strategy?” and “What is strategic planning?”
I can’t help but use the ever-useful Wikipedia for the first:
Are you or someone you know a Mac lover but brutally forced to use a PC at work? Don't fret or give up yet. Many firms such as Genentech are saying "no" to PCs and "yes" to Macs. And other firms are instituting BYOC (bring your own computer) programs that allow Mac followers to worship at work. Is this a trend that has legs, or have we entered the post-PC era where it doesn't really matter what hunk of hardware employees use?
Macs have less than a 10% share in enterprises. But Forrester Senior Analyst and resident Mac-whisperer Dave Johnson says that is changing, and changing fast, as a result of increasing BYOC programs and smaller firms that standardize on Macs.
Listen to Dave's authoritative, balanced analysis in this episode of TechnoPolitics to find out if Macs can make it in the enterprise.
In our recent Forrester Wave™ covering North American managed security service providers, I reported that the managed security services market is growing. Trustwave just issued a press release announcing 148% sales growth, which is a significant number in anyone’s book. It points to the increased growth we are seeing as more and more firms consider and adopt managed services to handle some or all of their security requirements.
I receive a lot of inquiries from clients about an EA maturity/assessment model. It’s proven to be a common and excellent way to track EA’s progress and influence plans — so common that we dedicated an entire report to it in our EA Practice Playbook, and we have an upcoming webinar for enterprise architects who want to build/customize their own model. The usual backstory is that an EA leader wants (or has been asked) to create a model from scratch or customize an external model to fit the organization. It’s usually about a 50/50 split between those options.
And what starts as a simple over-the-weekend project quickly becomes a frustrating struggle. The criteria pile up quickly; after all, EA does a lot of things. The granularity is inconsistent: you can measure a piece of a process or the larger process it belongs to. The scoring scale causes frustration: it has to score many different aspects of your criteria and ends up either too vague or too specific. And when compared with other models, yours inevitably looks vastly different from each one. It isn’t long before other day-to-day priorities put the effort on the back burner.
As one who has gone through the exercise a few times, I’ve got five tips that can help you move along faster and complete your model before other priorities swallow it up:
I recently went for coffee with a very interesting gentleman who had previously been responsible for threat and vulnerability management at a global bank. Our conversation roamed far and wide but kept circling back to one or two core messages: the real, fundamental principles of information security. One of these principles was “know your assets.”
Asset management is something that many CISOs tend to skip over, often in the belief that information assets are managed by the business owners and hardware assets are closely managed by IT. Unfortunately, I’m not convinced that either belief is true to any great extent.
Take, for example, Anonymous’ recent hack of a forgotten VM server within AAPT’s outsourced infrastructure. VM "sprawl" is one of the key risks that Forrester discusses, and this appears to be a classic example: a virtual server created in haste and soon forgotten. Commonly, as these devices fall off asset lists, they are neglected: malware protection and patching updates are skipped and backups are overlooked, yet they still exist on the network. That makes them a perfect place for an attacker to sit unnoticed, and if the device lives in a hosted environment, it also keeps racking up monthly costs and license fees. One anecdote I heard was of a system administrator who, very cautiously and very successfully, disabled around 200 orphaned virtual servers in his organisation, with no negative business impact whatsoever.
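The first step in tackling sprawl like this is simply spotting which running machines the asset register no longer accounts for. As a hypothetical sketch (the CMDB export and network-scan lists below are invented placeholders, not any specific tool's output), that check is a straightforward set difference:

```python
# Hypothetical sketch: flag hosts seen on the network that are absent
# from the asset register -- candidates for the "forgotten VM" problem.
def find_orphans(asset_register, discovered_hosts):
    """Return discovered hosts that no asset record accounts for."""
    registered = {h.lower() for h in asset_register}  # case-insensitive match
    return sorted(h for h in discovered_hosts if h.lower() not in registered)

cmdb = ["web01", "db01", "app01"]              # e.g. exported from the CMDB
scan = ["web01", "db01", "app01", "vm-test7"]  # e.g. from a network scan
print(find_orphans(cmdb, scan))  # ['vm-test7']
```

The real work, of course, is in getting a trustworthy discovery feed and then investigating each orphan before touching it, as the cautious administrator in the anecdote did.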
The recent news of a golfing glove that uses sensors and apps to let golfers measure the trajectory and speed of their swing reflects the increasing range of smart products on the market. It also illustrates the new challenges facing product development teams, where rapid technology change and ever more demanding consumers are transforming even traditional product categories.