Too many wearables today have screens that look like miniaturized smartphones.
Just as smartphone interfaces shouldn’t be PC screens shrunk down to 4-5”, smartwatches shouldn’t look like smartphones shrunk to 1”. Nor is this a matter of responsive web design (RWD), which merely resizes web content to fit the screen.
Samsung's Gear 2 looks like a tiny smartphone screen.
Instead, it’s a different type of design philosophy – one with its DNA in the mobile revolution, extending mobile thinking even further.
Let’s start with the concept of mobile moments. As my colleagues write in The Mobile Mind Shift, mobile moments are those points in time and space when someone pulls out a mobile device to get what he or she wants immediately, in context. In the case of wearables, the wearer often won’t need to pull out a device – it’s affixed to her wrist, clothing, or eyeglasses. But she might need to lift her wrist, as a visitor to Disney World must do with MagicBand.
Now we’re getting closer to what wearables should be. But there are additional dimensions to wearables that obviate the need for pixel-dense screens:
Wearables are opening up exciting new scenarios for consumers and enterprise users alike, but the wider conversation on wearables has taken a privacy-oriented turn. The New York Times and WIRED, among others, have covered the emerging privacy concerns associated with wearable devices.
Particular ire has developed against Google Glass. An online activist group, Stop the Cyborgs, opposes Google Glass and related wearables, which the organization says will "normalize ubiquitous surveillance." Stop the Cyborgs offers downloads of anti-Glass graphics for posting in public places and online to spread the message that wearables are inherent privacy violators.
In a major new Forrester report, we present data and insights to help Infrastructure & Operations professionals who are piloting or planning to trial wearables navigate the privacy waters. As a teaser, here are some of our findings:
Today the European Commission fined Microsoft €561 million ($732 million) for failing to live up to a previous legal agreement. As the New York Times reported it, “the penalty Wednesday stemmed from an antitrust settlement in 2009 that called on Microsoft to give Windows users in Europe a choice of Web browsers, instead of pushing them to Microsoft’s Internet Explorer.” The original agreement stipulated that Microsoft would provide PC users a Browser Choice Screen (BCS) that would easily allow them to choose from a multitude of browsers.
Without commenting on the legalities involved (I’m not a lawyer), I think there are at least two interesting dimensions to this case. First, the transgression itself could have been avoided. Microsoft admitted this itself in a statement issued on July 17, 2012: “Due to a technical error, we missed delivering the BCS software to PCs that came with the service pack 1 update to Windows 7.” The company’s statement went on to say that “while we believed when we filed our most recent compliance report in December 2011 that we were distributing the BCS software to all relevant PCs as required, we learned recently that we’ve missed serving the BCS software to the roughly 28 million PCs running Windows 7 SP1.” Today, Microsoft again took responsibility for the error. Clearly some execution issues around SP1 created a needless violation.
A key role of IT operations is to keep a complex portfolio of applications running and performing. "Traditional monitoring dashboards generate lots of pretty charts and graphs but don't really tell IT operations professionals a whole lot," says Forrester Principal Analyst Glenn O'Donnell. Big data analytics will change that because sophisticated algorithms can "look for the little tremors that tell us something big is about to happen."
High Availability And Performance Are Top Goals For IT Ops
Asked what 5 nines (99.999%) of availability means, Glenn replies immediately, "5 nines of availability is 26 seconds of downtime per month." He adds, "If you want to capture just one 26-second event, you have to be polling every 13 seconds." Glenn knows his stuff. Listen to find out from Glenn how big data has a big place in the future of IT operations.
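Glenn's numbers check out with simple arithmetic. A minimal sketch (assuming a 30-day month; the function name is my own):

```python
# Back-of-the-envelope availability math, assuming a 30-day month.
SECONDS_PER_MONTH = 30 * 24 * 60 * 60  # 2,592,000 s

def downtime_budget(nines: int, period_seconds: int = SECONDS_PER_MONTH) -> float:
    """Allowed downtime, in seconds per period, for a given number of nines."""
    unavailability = 10 ** (-nines)  # e.g. 5 nines -> 0.00001
    return period_seconds * unavailability

budget = downtime_budget(5)   # ~25.9 s/month -- Glenn's "26 seconds"
poll_interval = budget / 2    # sample twice per event so one poll is sure to land inside it

print(f"5 nines allows {budget:.1f} s of downtime per month")
print(f"poll at least every {poll_interval:.0f} s to catch a single such event")
```

The halved polling interval is the sampling-theorem intuition behind Glenn's "every 13 seconds": polling at exactly the event's duration can straddle it, while polling at half the duration guarantees at least one sample falls inside.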
The Dell brand is one of the most recognizable in technology. It was born a hardware company in 1984 and deservedly rocketed to fame, but it has always been about the hardware. In 2009, its big Perot Systems acquisition marked the first real departure from this hardware heritage. While it made numerous software acquisitions, including some good ones like Scalent, Boomi, and KACE, it remains a marginal player in software. That is about to change.
About five months ago, I “broke up” with T-Mobile in favor of AT&T. I was a T-Mobile customer for six years on a very competitive service plan. But none of that mattered; I wanted an iPhone, and T-Mobile couldn’t give it to me. It was a clean but cruel breakup: AT&T cancelled my T-Mobile contract on my behalf, the equivalent of getting dumped by your girlfriend’s new boyfriend.
I bring this up because it reminds me of the saying: “If we don’t take care of our customers, someone else will.” This is particularly important to remember in “The Age Of The Customer” where technology-led disruption is eroding traditional competitive barriers across all industries. Empowered buyers have information at their fingertips to check a price, read a product review, or ask for advice from a friend right from the screen of their smartphone.
This is affecting your IT just as much as your business: As an indicator, Forrester finds that 48% of information workers already buy whatever smartphone they want and use it for work purposes. In the new era, it is easier than ever for empowered employees and App Developers to circumvent traditional IT procurement and provisioning to take advantage of new desktop, mobile, and tablet devices as well as cloud-based software and infrastructure you don’t support. They’re “cheating” on you to get their jobs done better, faster, and cheaper.
To become more desirable to your customer – be it your Application Developers, workforce, or end buyers – IT Infrastructure and Operations leaders must become more customer-obsessed, which I talk about in this video:
After considerable speculation and anticipation, VMware has finally announced vSphere 5 as part of a major cloud infrastructure launch, including vCloud Director 1.5, SRM 5 and vShield 5. From our first impressions, it is both well worth the wait and merits immediate serious consideration as an enterprise virtualization platform, particularly for existing VMware customers.
The list of features is voluminous, with at least 100 improvements, large and small. Among them, several stand out as particularly significant as I&O professionals continue their efforts to virtualize the data center: support for larger VMs and physical host systems, improved manageability of storage, and improvements to Site Recovery Manager (SRM), the remote-site HA component:
Replication improvements for Site Recovery Manager, allowing replication without SANs
Distributed Resource Scheduling (DRS) for Storage
Support for up to 1 TB of memory per VM
Support for 32 vCPUs per VM
Support for up to 160 logical CPUs and 2 TB of RAM
New GUI to configure multicore vCPUs
Profile-driven storage delivery based on the VMware-Aware Storage APIs
Improved version of the Cluster File System, VMFS5
Storage APIs – Array Integration: Thin Provisioning enabling reclaiming blocks of a thin provisioned LUN on the array when a virtual disk is deleted
Swap to SSD
2TB+ LUN support
Storage vMotion snapshot support
vNetwork Distributed Switch improvements providing improved visibility into VM traffic
vCenter Server Appliance
vCenter Solutions Manager, providing a consistent interface to configure and monitor vCenter-integrated solutions developed by VMware and third parties
Revamped VMware High Availability (HA) with Fault Domain Manager
Entering a new competitive segment, especially one dominated by major players with well-staked-out turf, requires a level of hyperbole, dramatic positioning, and a differentiated product. Cisco has certainly achieved all this and more in the first two years of shipment of its UCS product, and shows no signs of fatigue to date.
However, Cisco’s announcement this week that it is now part of Microsoft’s Fast Track Data Warehouse and Fast Track OLTP program is a sign that UCS is also entering the mainstream of enterprise technology. The Microsoft Fast Track program, offering a set of reference architectures, system specifications, and sizing guides for common Microsoft SQL Server usage scenarios, is not new, nor is it in any way unique to Cisco. Fast Track includes Dell, HP, IBM, and Bull. The fact that Cisco will now get equal billing from Microsoft in this program is significant – it is the beginning of the transition from emerging fringe to mainstream, and an endorsement to anyone in the infrastructure business that Cisco is now appearing on the same stage as the major incumbents.
Will this represent a breakthrough revenue opportunity for Cisco? Probably not, since Microsoft will be careful not to play favorites and will certainly not risk alienating its major systems partners, but Cisco’s inclusion on this list is another incremental step in becoming a mainstream server supplier. Like the chicken soup that my grandmother used to offer, it can’t hurt.
I met recently with Cisco’s UCS group in San Jose to get a quick update on sales and maybe some hints about future development. The overall picture is one of rapid growth decoupled from whatever pressures Cisco management has cautioned about in other areas of the business.
Overall, according to recent disclosure by Cisco CEO John Chambers, Cisco’s UCS revenue is growing at a 550% Y/Y growth rate, with the most recent quarterly revenues indicating a $500M run rate (we make that out as about $125M quarterly revenue). This figure does not seem to include the over 4,000 blades used by Cisco IT, nor does it include units being consumed internally by Cisco and subsequently shipped to customers as part of appliances or other Cisco products. Also of note is the fact that it is fiscal Q1 for Cisco, traditionally its weakest quarter, although with an annual growth rate in excess of 500% we would expect that UCS sequential quarters will be marching to a totally different drummer than the overall company numbers.
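The run-rate arithmetic is easy to verify. A trivial sketch using the figures disclosed above (an annualized run rate is simply quarterly revenue times four):

```python
# Sanity-check the disclosed UCS figures (all values in $M, from the text above).
annual_run_rate = 500                    # disclosed ~$500M annualized run rate
implied_quarterly = annual_run_rate / 4  # run rate / 4 = one quarter's revenue
print(f"implied quarterly revenue: ${implied_quarterly:.0f}M")
```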
In a recent discussion with a group of infrastructure architects, power architecture, especially UPS engineering, was on the table as a topic. There was general agreement that UPS systems are a necessary evil – cumbersome, expensive beasts to put into a DC – and a lot of speculation about alternatives. The consensus goal was a solution that would be more granular to install and deploy, allowing easier, ad-hoc decisions about which resources to protect; there was also agreement that battery technologies and current UPS architectures are not optimal for this kind of solution.
So what if someone were to suddenly expand battery technology R&D investment by a factor of maybe 100x, expand high-capacity battery production by a giant factor, and drive prices down precipitously? That’s a tall order for today’s UPS industry, but it’s happening now courtesy of the auto industry and the anticipated wave of plug-in hybrid cars. While batteries for cars and batteries for computers certainly have their differences in terms of depth and frequency of charge/discharge cycles, packaging, lifespan, etc., there is little doubt that investments in dense and powerful automotive batteries and power management technology will bleed through into the data center. Throw in recent developments in high-charge capacitors (referred to in the media as “supercapacitors”), which provide the impedance match between spike demands and a chemical battery’s dislike of sudden state changes, and you have all the foundational ingredients for a major transformation in the way we think about supplying backup power to our data center components.