Third-party Database Tools Still Matter

Noel Yuhanna

Over the past year, I have received numerous inquiries asking whether third-party database tools that focus on performance and tuning, backup and recovery, replication, upgrade, troubleshooting, and migration capabilities still matter, now that leading DBMS providers such as Oracle, IBM, and Microsoft are offering improved automation and broader coverage.

I find that third-party tools complement native database tools well, assisting DBAs, developers, and operational staff in their day-to-day activities. Last year, I had the opportunity to speak to dozens of enterprises that support hundreds or thousands of databases across various DBMSes. Most enterprises reported at least a 20 percent gain in IT staff productivity when using a third-party database tool.

Third-party vendor tools remain equally important because they support:

Read more

Future App Servers -- Radically Different

John R. Rymer

I was lucky enough last week [22 March 2010] to moderate a panel at EclipseCon on the future of application servers. The panelists did a great job, but I thought they were far too conservative in their views. I agree with them that many customers want evolutionary change from today's app servers to future ones, but I see requirements driving app servers toward radical change. Inevitably.

The changes I see:

 

Requirement: Get more value from servers; get responsive; get agile and flexible
Response: Virtualized everything, dynamic provisioning, automated change management

Requirement: Govern rising application stack complexity
Response: Lean, fit-to-purpose app servers; profiles and other standard configurations; modeling and metadata-based development and deployment

Requirement: Provide “Internet scale”
Response: Scale-out app servers, data tiers, and network capacity; modular/layered designs; stateless architectures

Read more

Natural user interfaces - notes from the field

Jeffrey Hammond

Last week I was once again hustling through a brutal travel week (10,000 miles in the air and two packed red-eyes) when I came across something really interesting. It was around 9 AM and I'd just gotten off AA flight 4389 from Toronto. I was a bit bleary-eyed from a 4 AM call with a Finnish customer and was just trying to schlep my way to the Admirals Club for a cup of coffee when I stumbled across Accenture's Interactive Network display at the juncture of terminals H and K.

 

This is a picture of a screen from the Accenture Interactive Network at American's terminal at O'Hare.

 

So what? you might ask; it's just a big screen, and we already know our future is Minority Report, right? Yes, those of us in the echo chamber might know that, but what really struck me was watching my fellow travelers and how they interacted with the display. I sat for about 10 minutes (while forgetting about the sorely needed cuppa joe) and just watched people as they started to walk past, then pause, then go up to the screen and start playing with it. On average, folks would stay for a few minutes and read some of the latest news feeds, then hurry on to their next stop. But what I really found intriguing was how they interacted with the system:

 

Read more

I forget: what's in-memory?

Boris Evelson


In-memory analytics are all abuzz for multiple reasons. Speed of querying, reporting, and analysis is just one; flexibility, agility, and rapid prototyping are others. While there are many more reasons, not all in-memory approaches are created equal. Let's look at the 5 options buyers have today:
 

1. In-memory OLAP. Classic MOLAP cube loaded entirely in memory.

Vendors: IBM Cognos TM1, Actuate BIRT
Pros

  • Fast reporting, querying, and analysis, since the entire model and data are all in memory.
  • Ability to write back.
  • Accessible by third-party MDX tools (IBM Cognos TM1 specifically).

Cons

  • Requires traditional multidimensional data modeling.
  • Limited to a single physical memory space (theoretical limit of 3 TB, but we haven't seen production implementations of more than 300 GB; this applies to the other in-memory solutions as well).
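To make option 1 a bit more concrete, here is a minimal, hypothetical sketch of the underlying idea: every cell of the cube lives in process memory, so loads, write-backs, and queries are just map operations. The class and method names are invented for illustration; this is not how TM1 or BIRT is actually implemented.

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration (not any vendor's implementation): a "cube" keyed by
// dimension members, held entirely in memory so queries never touch disk.
public class InMemoryCube {
    private final Map<String, Double> cells = new HashMap<>();

    private static String key(String product, String region, String month) {
        return product + "|" + region + "|" + month;
    }

    // Load a fact (or write back an adjustment); repeated values for a cell are summed.
    public void put(String product, String region, String month, double value) {
        cells.merge(key(product, region, month), value, Double::sum);
    }

    // A query is just a scan of in-memory cells, e.g. total sales for a region.
    public double totalForRegion(String region) {
        return cells.entrySet().stream()
                .filter(e -> e.getKey().split("\\|")[1].equals(region))
                .mapToDouble(Map.Entry::getValue)
                .sum();
    }

    public static void main(String[] args) {
        InMemoryCube cube = new InMemoryCube();
        cube.put("Widget", "EMEA", "2010-03", 1200.0);
        cube.put("Widget", "APAC", "2010-03", 800.0);
        System.out.println(cube.totalForRegion("EMEA")); // 1200.0
    }
}
```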

 

2. In-memory ROLAP. ROLAP metadata loaded entirely in memory.

Vendors: MicroStrategy
Pros

  • Speeds up reporting, querying, and analysis, since the metadata is all in memory.
  • Not limited by physical memory.

Cons

  • Only the metadata, not the entire data model, is in memory, although MicroStrategy can build complete cubes from a subset of data held entirely in memory.
  • Requires traditional multidimensional data modeling.
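As a rough, hypothetical sketch of what "metadata in memory" means in practice: the logical-to-physical mappings are cached in RAM and used to generate SQL, while the data itself stays in the warehouse. The class, mappings, and generated SQL below are invented for the example and do not reflect MicroStrategy's actual engine.

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration of the ROLAP idea: only the metadata (logical-to-physical
// mappings) is held in memory; the data stays in the relational warehouse and
// is reached through generated SQL.
public class RolapMetadataCache {
    // Logical measure/dimension names mapped to physical columns, cached in memory.
    private final Map<String, String> columnMap = new HashMap<>();

    public RolapMetadataCache() {
        columnMap.put("Revenue", "fact_sales.revenue_amt");
        columnMap.put("Region", "dim_geo.region_name");
    }

    // Metadata lookups are in-memory, so building the query is fast;
    // executing it still hits the warehouse, which is why option 2 is not
    // limited by physical memory.
    public String buildSql(String measure, String dimension) {
        return "SELECT " + columnMap.get(dimension) + ", SUM(" + columnMap.get(measure) + ") "
             + "FROM fact_sales JOIN dim_geo USING (geo_id) "
             + "GROUP BY " + columnMap.get(dimension);
    }

    public static void main(String[] args) {
        System.out.println(new RolapMetadataCache().buildSql("Revenue", "Region"));
    }
}
```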

 

3. In-memory inverted index. Index (with data) loaded into memory.

Vendors: SAP BusinessObjects (BI Accelerator), Endeca

Pros

  • Fast reporting, querying, and analysis, since the entire index is in memory.
  • Less modeling required than for an OLAP-based solution.
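To illustrate the idea behind option 3, here is a small, hypothetical sketch of an in-memory inverted index: each value points to the set of record IDs containing it, so filtering is just set intersection in RAM. It is a teaching example, not how BI Accelerator or Endeca is built.

```java
import java.util.*;

// Toy illustration of an in-memory inverted index: each term maps to the set
// of record IDs that contain it, so filters and counts are set operations in RAM.
public class InMemoryInvertedIndex {
    private final Map<String, Set<Integer>> postings = new HashMap<>();
    private final Map<Integer, String> records = new HashMap<>();

    public void add(int recordId, String record) {
        records.put(recordId, record);
        for (String term : record.toLowerCase().split("\\W+")) {
            postings.computeIfAbsent(term, t -> new HashSet<>()).add(recordId);
        }
    }

    // A query intersects posting sets entirely in memory.
    public Set<Integer> query(String... terms) {
        Set<Integer> result = null;
        for (String term : terms) {
            Set<Integer> ids = postings.getOrDefault(term.toLowerCase(), Set.of());
            if (result == null) result = new HashSet<>(ids); else result.retainAll(ids);
        }
        return result == null ? Set.of() : result;
    }

    public static void main(String[] args) {
        InMemoryInvertedIndex idx = new InMemoryInvertedIndex();
        idx.add(1, "EMEA widget sales March");
        idx.add(2, "APAC widget sales March");
        System.out.println(idx.query("widget", "emea")); // [1]
    }
}
```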
Read more

Asset Virtualization – When Avatars Are Field Engineers

Holger Kisker

Smoke and fire are all around you, the sound of the alarm makes you dizzy, and people are running in panic to escape the inferno while you have to find your way to safety. This is not a scene from the latest video game but actual training for, say, field engineers in an exact virtual copy of a real-world environment such as an oil platform or a manufacturing plant.

In a recent discussion with VRcontext, a Brussels-based company that has specialized in asset virtualization for 10 years, I was fascinated by the possibility of creating virtual copies of large, extremely complex real-world assets simply by scanning existing CAD plans or taking on-site laser scans. It's not just the 3D virtualization but the integration of the virtual world with Enterprise Asset Management (EAM), ERP, LIMS, P&ID, and other systems that allows users to track, identify, and locate every single piece of equipment in both the real and the virtual world.

These solutions are used today for safety training simulations as well as to increase operational efficiency, e.g. in asset maintenance processes. There are still areas for further improvement, such as the integration of RFID tags or sensor readings. However, as the technology matures further, I can see future use cases all over the place: from the virtualization of any kind of location that is difficult or dangerous to enter, to simple office buildings for a 'company campus tour' or a 'virtual meeting'. And it doesn't require supercomputing power; it all runs on low-spec, 'standard' PCs, and the models take only a few gigabytes of storage.

So if you are bored with running around in Second Life or World of Warcraft, and you ever have the chance, exchange your virtual sword for a wrench and visit the 'real' virtual world of a fascinating oil rig or refinery.

Please leave a comment or contact me directly.

Kind regards,

Holger Kisker

The Battle Of Partner Eco-Systems

Holger Kisker

On the need to analyze, compare and rate partner eco-systems – please vote.

The world is becoming more and more complex, and so are the business challenges and their related IT solutions. Today no single vendor can provide complete end-to-end solutions from physical assets to business process optimization. Some large vendors, like IBM, Oracle, or HP, have extended their solution footprint to cover more and more of the four core IT markets (hardware, middleware software, business applications, and services) but still require complementary partner solutions to cover end-to-end processes. Two examples of emerging complex IT solutions are:

  • Smart Computing integrates the physical world with business process optimization via four steps: Awareness (sensors, tags, etc.), Analysis (analytic solutions), Alternatives (business applications with decision support), and Action (feedback loop into the physical world). A few specialized vendors such as Savi Technology can cover the whole portfolio from sensors to business applications for selected scenarios. In general, however, a complete solution requires many partners working closely together to enable an end-to-end process.
  • Cloud Computing includes different IT resources (typically infrastructure, middleware, and applications) offered in pay-per-use, self-service models via the internet. The seamless consumption of these resources by the end user, anytime and anywhere, however requires multiple technologies, processes, and a challenging governance model, often with many different stakeholders involved behind the scenes.
Read more

Three Top Questions To Ask a BI Vendor

Boris Evelson


 

An editor from a leading IT magazine asked me this question just now, so I thought I'd also blog about it. Here it goes:

 

Q1: What are the capabilities of your services organization to help clients not just with implementing your BI tool, but with their overall BI strategy?

 

The reason I ask this as a top question is that most BI vendors these days have modern, scalable, function-rich, robust BI tools. So the real challenge today is not with the tools but with governance, integration, support, organizational structures, processes, etc. – something that only experienced consultants can help with.
 
Q2: Do you provide all components necessary for an end-to-end BI environment (data integration, data cleansing, data warehousing, performance management, portals, etc., in addition to reports, queries, OLAP, and dashboards)?
 
If a vendor does not, you'll have to integrate these components from multiple vendors yourself.
 
Read more

Number of people using BI

Boris Evelson


 

A number of clients ask me, "How many people do you think use BI?" It's not an easy question to answer, it will not be an exact science, and it comes with many caveats. But here we go:

 

  1. First, let's assume that we are only talking about what we all consider "traditional BI" apps. Let's exclude home-grown apps built using spreadsheets and desktop databases. Let's also exclude operational reporting apps that are embedded in ERP, CRM, and other applications.
  2. Then, let's cut out everyone who only gets the results of a BI report/analysis in a static form, such as a hardcopy or a non-interactive PDF file. So if you're not creating, modifying, viewing via a portal, sorting, filtering, ranking, drilling, etc., you probably do not require a BI product license, and I am not counting you.
  3. I'll just attempt to do this for the US for now. If the approach works, we'll try it for other major regions and countries.
  4. The number of businesses with over 100 employees (a reasonable cutoff for a business size that would consider using what we define as traditional BI) in the US in 2004 was 107,119.
  5. The US Dept. of Labor provides ranges, as in "firms with 500-749 employees". For each range I take the middle number. For the last range, "firms with over 10,000", I use an average of 15,000 employees.
  6. This gives us 66 million (66,595,553) workers employed by US firms who could potentially use BI.
  7. Next, we take the data from our latest BDS numbers on BI, which tell us that 54% of firms are using BI. This gives us 35 million (35,961,598) workers employed by US firms that use BI.
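The arithmetic behind those last two steps is simple enough to sketch; the only inputs are the figures quoted in the list above, and the class name is invented for the example.

```java
// A quick back-of-the-envelope check of the worker estimate above.
public class BiUserEstimate {
    public static void main(String[] args) {
        long workersAtFirmsOver100Employees = 66_595_553L; // workers at US firms with >100 employees
        double shareOfFirmsUsingBi = 0.54;                 // from the BDS survey data cited above

        long workersAtBiFirms = Math.round(workersAtFirmsOver100Employees * shareOfFirmsUsingBi);
        System.out.println(workersAtBiFirms); // ~35,961,599, i.e., roughly 35 million
    }
}
```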
Read more

Elastic Caching Platforms Balance Performance, Scalability, And Fault Tolerance

Mike Gualtieri

Fast Access To Data Is The Primary Purpose Of Caching

Developers have always used data caching to improve application performance. (CPU registers are data caches!) The closer the data is to the application code, the faster the application will run, because you avoid the access latency caused by disk and/or network. Local caching is fastest because you cache the data in the same memory space as the code itself. Need to render a drop-down list faster? Read the list from the database once, and then cache it in a Java HashMap. Need to avoid the performance-sapping disk thrashing of repeated SQL calls to render a personalized user's Web page? Cache the user profile and the rendered page fragments in the user session.
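A minimal sketch of that local-caching pattern might look like the following; the class, cache key, and loader method are invented for illustration rather than taken from any particular framework.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal local-cache sketch: the first request pays the database round trip,
// every later request is served straight from the application server's memory.
public class DropDownCache {
    private final Map<String, List<String>> cache = new ConcurrentHashMap<>();

    public List<String> countries() {
        // computeIfAbsent runs the loader only on a cache miss.
        return cache.computeIfAbsent("countries", key -> loadCountriesFromDatabase());
    }

    // Stand-in for the real JDBC/ORM call; invented for the example.
    private List<String> loadCountriesFromDatabase() {
        return List.of("Canada", "Finland", "United States");
    }
}
```

The catch, as the next paragraph and list spell out, is that this map lives and dies with a single application server's JVM.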

Although local caching is fine for Web applications that run on one or two application servers, it is insufficient if any or all of the following conditions apply:

  • The data is too big to fit in the application server memory space.
  • Cached data is updated and shared by users across multiple application servers.
  • User requests, and therefore user sessions, are not bound to a particular application server.
  • Failover is required without data loss.

To overcome these scaling challenges, application architects often give up on caching and instead turn to the clustering features provided by relational database management systems (RDBMSes). The problem: this often comes at the expense of performance and can be very costly to scale up. So how can firms get improved performance along with scale and fault tolerance?

Elastic Caching Platforms Balance Performance With Scalability And Availability

Read more

SAP Middleware Directions: More Open Source, In-Memory Stuff

John R. Rymer

At the 15 March press and analyst Q&A by SAP co-CEOs Jim Hagemann Snabe and Bill McDermott, new middleware boss Vishal Sikka shed more light on the company's intentions for NetWeaver. Many of SAP's business applications customers use NetWeaver, both as a foundation for SAP's applications and to extend those applications using integration, portals, and custom developed apps. For about a year, the question has been how much additional investment SAP will put into NetWeaver.

Sikka made two comments that indicate how he's thinking about the NetWeaver portfolio.

1. In response to my question about whether SAP is concerned that Oracle's ownership of Java will put it at a disadvantage, Sikka started by highlighting SAP's work on Java performance, but then noted the availability of good open-source Java software to support the requirements of SAP customers.

Read more