Elephants, Pigs, Rhinos and Giraphs; Oh My! – It's Time To Get A Handle On Hadoop

Brian Hopkins

By now you have at least seen the cute little elephant logo or you may have spent serious time with the basic components of Hadoop like HDFS, MapReduce, Hive, Pig and most recently YARN. But do you have a handle on Kafka, Rhino, Sentry, Impala, Oozie, Spark, Storm, Tez… Giraph? Do you need a Zookeeper? Apache has one of those too! For example, the latest version of Hortonworks Data Platform has over 20 Apache packages and reflects the chaos of the open source ecosystem. Cloudera, MapR, Pivotal, Microsoft and IBM all have their own products and open source additions while supporting various combinations of the Apache projects.

After hearing the confusion between Spark and Hadoop one too many times, I was inspired to write a report, The Hadoop Ecosystem Overview, Q4 2014. For those whose day jobs don’t include constantly tracking Hadoop’s evolution, I dove in and worked with Hadoop vendors and trusted consultants to create a framework. We divided the complex Hadoop ecosystem into a core set of tools that all work closely with data stored in the Hadoop Distributed File System (HDFS) and an extended group of components that leverage but do not require it.

In the past, enterprise architects could afford to think big picture and that meant treating Hadoop as a single package of tools. Not any more – you need to understand the details to keep up in the age of the customer. Use our framework to help, but please read the report if you can as I include a lot more there.

Read more

Lost In Data Translation? Forrester's Data Taxonomy To The Rescue

Boris Evelson
When it comes to data technology, are you lost in translation? What's the difference between data federation, data virtualization, and data- or information-as-a-service? Are columnar databases also relational? Does one use the same or different tools for BAM (Business Activity Monitoring) and for CEP (Complex Event Processing)? These questions are just the tip of the iceberg in the rich and complex world of enterprise data and information terms and definitions. Enterprise application developers and data and information architects already manage multiple challenges on a daily basis, and the last thing they need to deal with is misunderstandings of the various data technology component definitions.
Read more

The Data Digest: The Evolution Of Consumer Attitudes On Privacy

Anjali Lai

With Fatemeh Khatibloo

The tide is turning on privacy. Since the earliest days of the World Wide Web, there has been an increasing sense that the Internet would effectively kill privacy – and in the wake of the NSA PRISM program revelations, that sentiment was stronger than ever. However, using Forrester’s Technographics 360 methodology, which blends multiple qualitative and quantitative data sources, we found that attitudes on privacy are evolving: Consumers are beginning to shift from a state of apathy and resignation to one of caution and empowerment.

In our recently published report, we integrate Forrester's Consumer Technographics® survey data, ConsumerVoices Market Research Online Community qualitative insight, and social listening data to provide a holistic view of the changes in consumer perceptions and expectations of data privacy. In the past year, individuals have 1) become much more aware of the ways in which organizations collect, use, and share personal data and 2) started to change their online behavior in response:

Read more

How Will The Data Economy Impact Enterprise Architects?

Gene Leganza
No self-respecting EA professional would enter into planning discussions with business or tech management execs without a solid grasp of the technologies available to the enterprise, right? But what about the data available to the enterprise? Given the shift toward data-driven decision-making and the clear advantages of advanced analytics capabilities, architecture professionals should be coming to the planning table with not only an understanding of enterprise data but also a working knowledge of the available third-party data that could have a significant impact on their approach to customer engagement or their B2B partner strategy.
 
 
Data discussions can't simply be about internal information flow, master data, and business glossaries anymore. Enterprise architects, business architects, and information architects working with business execs on tech-enabled strategies need to bring third-party data know-how to their brainstorming and planning discussions. As the data economy is still in its relatively early stages and, more to the point, as organizational responsibilities for sourcing, managing, and governing third-party data are still in their formative states, it behooves architects to take the lead in understanding the data economy in some detail. By doing so, architects can help their organizations find innovative approaches to data and analytics that have direct business impact by improving the customer experience, making the partner ecosystem more effective, or finding new revenue from data-driven products.
 
Read more

Welcome To The Future Of Data Management

Michele Goetz

 The demand for data has never been greater.  The expectations are even grander.  On the other hand, what the business wants has never been more ambiguous.  

Welcome to the future of data management.  

According to recent Forrester research, most of us are ill prepared.

  • The business is placing ownership of data needs on data professionals who don't have the full knowledge to enable them: security, quality, business intelligence, and data strategy.
  • Pressure to contain cost causes data professionals to focus on bottom-line efficiency goals and de-emphasize top-line business growth goals.
  • Investment in data is still grounded in bespoke systems that lack scale, flexibility, and agility.
Read more

Data Governance: Did We Make The Right Choices?

Michele Goetz

Coming back from the SAS Industry Analyst Event left me with one big question - are we taking into account the recommendations or insights provided through analysis and checking whether they actually produced positive or negative results?

It's a big question for data governance that I'm not hearing discussed around the table. We often emphasize how data is supplied, but how it performs in its consumed state is forgotten.

When leading business intelligence and analytics teams, I always pushed to create reports and analysis that ultimately incented action. What you know should influence behavior and decisions, even if the influence was to say, "Don't change, keep up the good work!" This should be a fundamental function of data governance. We need to care not only that the data is in the right form factor but also to review what the data tells us - or how we interpret it - and whether it made us better.

I've talked about the closed loop from a master data management perspective - what you learn about customers will alter and enrich the customer master. The connection to data governance is pretty clear in this case. However, we shouldn't stop at raw data and master definitions. Our attention needs to include the data business users receive and whether it is trusted and accurate. This goes back to the fact that how the business defines data is more than what exists in a database or application. Data is a total, a percentage, an index. This derived data is what the business expects to govern - and if derived data isn't supporting business objectives, that has to be incorporated into the data governance discussion.

Read more

The Seven Deadly Sins of Data Management Investment and Planning

Michele Goetz

When it comes to data investment, data management is still asking the wrong questions and positioning the wrong value. The mantra of "It's About the Business" is still a hard lesson to learn. It translates into what I see as the Seven Deadly Sins of Data Management. Here they are - in no particular order - each with an example:

  1. Hubris: "Business value? Yeah, I know.  Tell me something I don't know."  
  2. Blindness: "We do align to business needs.  See, we are building a customer master for a 360 degree view of the customer." 
  3. Vanity: "How can I optimize cost and efficiency to manage and develop data solutions?"
  4. Gluttony: "If I build this cool solution, the business is gonna love it!"
  5. Alien: "We need to develop an in-memory system to virtualize data and insight that materializes through business services with our application systems...[blah, blah, blah]"
  6. Beggar: "If only we were able to implement a business glossary, all our consistency issues would be solved!"
  7. Educator: "If only the business understood! I need to better educate them!"
Read more

5 Reasons Hadoop Is Kicking Can And Taking Names

Mike Gualtieri

Hadoop’s momentum is unstoppable as its open source roots grow wildly into enterprises. Its refreshingly unique approach to data management is transforming how companies store, process, analyze, and share big data. Forrester believes that Hadoop will become must-have infrastructure for large enterprises. If you have lots of data, there is a sweet spot for Hadoop in your organization.  Here are five reasons firms should adopt Hadoop today:

  1. Build a data lake with the Hadoop file system (HDFS). Firms leave potentially valuable data on the cutting-room floor. A core component of Hadoop is its distributed file system, which can store huge files and many files to scale linearly across three, 10, or 1,000 commodity nodes. Firms can use Hadoop data lakes to break down data silos across the enterprise and commingle data from CRM, ERP, clickstreams, system logs, mobile GPS, and just about any other structured or unstructured data that might contain previously undiscovered insights. Why limit yourself to wading in multiple kiddie pools when you can dive for treasure chests at the bottom of the data lake?
  2. Enjoy cheap, quick processing with MapReduce. You’ve poured all of your data into the lake — now you have to process it. Hadoop MapReduce is a distributed processing framework that brings the computation to the data in a highly parallel fashion. Instead of serially reading data from files, MapReduce pushes the processing out to the individual Hadoop nodes where the data resides. The result: Large amounts of data can be processed in parallel in minutes or hours rather than in days. Now you know why Hadoop’s origins stem from monstrous data processing use cases at Google and Yahoo. A minimal word-count sketch follows this list.
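
For readers who want to see what that looks like in practice, here is a minimal word-count sketch using Hadoop Streaming, which lets you write the map and reduce steps as ordinary Python scripts that read standard input. The file paths, jar name, and invocation shown are illustrative assumptions for the example, not instructions for any particular distribution.

```python
#!/usr/bin/env python
"""Word count via Hadoop Streaming: run as the mapper or the reducer over stdin.

Illustrative submission (paths and jar name are assumptions; adjust for your cluster):
  hadoop jar hadoop-streaming.jar \
    -input /data/lake/clickstream -output /data/out/wordcount \
    -mapper "python wordcount.py map" -reducer "python wordcount.py reduce" \
    -file wordcount.py
"""
import sys


def do_map():
    # Emit "<word>\t1" for every word. Hadoop runs one copy of this step
    # per input split, on the node where that block of data already lives.
    for line in sys.stdin:
        for word in line.strip().split():
            print(f"{word}\t1")


def do_reduce():
    # The framework sorts map output by key, so identical words arrive on
    # consecutive lines and can be summed in a single pass.
    current, count = None, 0
    for line in sys.stdin:
        word, value = line.rstrip("\n").split("\t", 1)
        if word != current:
            if current is not None:
                print(f"{current}\t{count}")
            current, count = word, 0
        count += int(value)
    if current is not None:
        print(f"{current}\t{count}")


if __name__ == "__main__":
    do_map() if sys.argv[1:] == ["map"] else do_reduce()
```

You can dry-run the same pipeline locally with `cat somefile | python wordcount.py map | sort | python wordcount.py reduce`, which is a handy way to see the shuffle-and-sort step Hadoop performs between the two phases.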
Read more

Information Fabric 3.0 Delivers The Next Generation Of Data Virtualization

Noel Yuhanna

For decades, firms have deployed applications and BI on independent databases and warehouses, supporting custom data models, scalability, and performance while speeding delivery. It’s become a nightmare to try to integrate the proliferation of data across these sources in order to deliver the unified view of business data required to support new business applications, analytics, and real-time insights. The explosion of new sources, driven by the triple-threat trends of mobile, social, and the cloud, amplified by partner data, market feeds, and machine-generated data, further aggravates the problem. Poorly integrated business data often leads to poor business decisions, reduces customer satisfaction and competitive advantage, and slows product innovation — ultimately limiting revenue.

Forrester’s latest research reveals how leading firms are coping with this explosion using data virtualization, leading us to release a major new version of our reference architecture, Information Fabric 3.0. Since Forrester invented the category of data virtualization eight years ago with the first version of information fabric, these solutions have continued to evolve. In this update, we reflect new business requirements and new technology options including big data, cloud, mobile, distributed in-memory caching, and dynamic services. Use information fabric 3.0 to inform and guide your data virtualization and integration strategy, especially where you require real-time data sharing, complex business transactions, more self-service access to data, integration of all types of data, and increased support for analytics and predictive analytics.
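
As a back-of-the-envelope illustration of what "virtualization" means here, the sketch below builds a single logical customer view over two independent sources at query time, without copying anything into a central warehouse. The sources, schema, and join are hypothetical assumptions for the example; a real information fabric layers on caching, security, governance, and much more.

```python
import sqlite3

# Two independent "systems of record", simulated in-process for the sketch:
# an operational orders database and a CRM-style contact list (e.g., from a SaaS API).
orders_db = sqlite3.connect(":memory:")
orders_db.executescript("""
    CREATE TABLE orders (customer_id INTEGER, amount REAL);
    INSERT INTO orders VALUES (1, 120.0), (1, 75.5), (2, 300.0);
""")
crm_contacts = {1: "Acme Corp", 2: "Globex", 3: "Initech"}


def unified_customer_view():
    """Resolve the join across both sources when a consumer asks for it.

    The point of the sketch: the unified view is computed on demand from
    the sources of record rather than persisted in yet another silo.
    """
    rows = orders_db.execute(
        "SELECT customer_id, SUM(amount) FROM orders GROUP BY customer_id"
    )
    return [
        {"customer": crm_contacts.get(customer_id, "unknown"), "total_spend": total}
        for customer_id, total in rows
    ]


if __name__ == "__main__":
    for record in unified_customer_view():
        print(record)
```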

Information fabric 3.0 reflects significant innovation in data virtualization solutions, including:

Read more

Data Science And "Closed-Loop" Analytics Changes Master Data Strategy

Michele Goetz
I had a conversation recently with Brian Lent, founder, chairman, and CTO of Medio. If you don’t know Brian, he has worked with companies such as Google and Amazon to build and hone their algorithms and is currently taking predictive analytics to mobile engagement. The perspective he brings as a data scientist not only has ramifications for big data analytics, but drastically shifts the paradigm for how we architect our master data and ensure quality.
 
We discussed big data analytics in the context of behavior and engagement. Think shopping carts and search. At the core, analytics is about the “closed loop.” It is, as Brian says, a rinse and repeat cycle. You gain insight for relevant engagement with a customer, you engage, then you take the results of that engagement and put them back into the analysis.
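
As a rough sketch of that rinse-and-repeat loop (the offers, response rates, and smoothing below are hypothetical placeholders for illustration, not Medio's or anyone else's actual pipeline):

```python
import random
from collections import defaultdict

# Feedback store: per-offer counts of engagements and positive responses.
# In the sketch it is just a dict; in practice these results would flow back
# into the analytical (and master) data so the next round of analysis sees them.
history = defaultdict(lambda: {"sent": 0, "responded": 0})


def score(offer):
    """Estimate an offer's response rate from what the loop has learned so far."""
    h = history[offer]
    return (h["responded"] + 1) / (h["sent"] + 2)  # smoothed response rate


def engage(offer):
    """Stand-in for a real engagement; returns whether the customer responded."""
    true_rates = {"discount": 0.30, "newsletter": 0.10}  # hidden from the model
    return random.random() < true_rates[offer]


offers = ["discount", "newsletter"]
for _ in range(1000):
    best = max(offers, key=score)             # 1. gain insight: pick the best offer
    responded = engage(best)                  # 2. engage the customer
    history[best]["sent"] += 1                # 3. put the result back into the
    history[best]["responded"] += responded   #    analysis for the next pass

for offer in offers:
    print(offer, dict(history[offer]), round(score(offer), 3))
```

Over enough rounds the loop usually homes in on the offer that actually performs better, precisely because every engagement result is fed back into the next pass rather than left behind in a reporting silo.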
 
Sounds simple, but think about what that means for data management. Brian provided two principles:
  • Context is more important than source.
  • You need to know the customer.
Read more