The following question comes from many of our clients: what are some of the advantages and risks of implementing a vendor-provided analytical logical data model at the start of a Business Intelligence, Data Warehousing, or other Information Management initiative? Some quick thoughts on pros and cons:
Leverage vendor knowledge from prior experience and other customers
May fill in the gaps in enterprise domain knowledge
Best if your IT dept does not have experienced data modelers
May sometimes serve as a project, initiative, solution accelerator
May sometimes break through a stalemate between stakeholders who cannot agree on metrics and definitions
May sometimes require more customization effort than building a model from scratch
May create differences of opinion and potential roadblocks with your own experienced data modelers
May reduce competitive advantage of business intelligence and analytics (since competitors may be using the same model)
Goes against “agile” BI principles that call for small, quick, tangible deliverables
Goes against top-down performance management design and modeling best practices, where one does not start with a logical data model but rather:
Defines departmental, line of business strategies
Links goals and objectives needed to fulfill these strategies
Defines metrics needed to measure the progress against goals and objectives
Defines strategic, tactical and operational decisions that need to be made based on metrics
Slowly but surely, with lots of criticism and skepticism, the business intelligence (BI) software-as-a-service (SaaS) market is gaining ground. It's a road full of peril — at least two BI SaaS startups have failed this year — but what software market segment has not seen its share of failures? Although I do not see a stampede to replace traditional BI applications with SaaS alternatives in the near future, BI SaaS does have a few legitimate use cases even today, such as complementary BI in coexistence with traditional BI, BI workspaces, and BI for small and some midsize businesses.
In our latest BI SaaS research report we recommend the following structured approach to see if BI SaaS is right for you and if you are ready for BI SaaS:
Map your BI requirements and IT culture to one of five BI SaaS use cases
Evaluate and consider scenarios where BI SaaS may be a right or wrong fit for you
Select the BI SaaS vendor that fits your business, technical, and operational requirements, including your tolerance for risk
First, we identified the following five BI SaaS use cases:
Coexistence case: on-premises BI complemented with SaaS BI in enterprises
SaaS-centric case in enterprises: main BI application in enterprises committed to SaaS
SaaS-centric case in midmarket: main BI application in midsized businesses
Elasticity case: BI for companies with strong variations in activity from season to season
Power user flexibility case: BI workspaces, which power analysts often consider necessary
Or: why “advanced persistent threat” is the wrong phrase
Google's revelation that it was hacked by (likely) Chinese actors has helped propel another round of stories, blog posts, and analyses about What It Means. I have participated in some of these discussions, and my colleague Chenxi Wang has written several illuminating posts about the nature of the attacks.
The specific means of compromise, a zero-day Internet Explorer exploit, has raised awareness of a phenomenon referred to as the “Advanced Persistent Threat,” concisely described by Lockheed Martin’s Mike Cloppert as “any sophisticated adversary engaged in information warfare in support of long-term strategic goals.” In his posts, Mike also nearly always uses APT in conjunction with the word “actor” (as in: APT actor) because he means a particular adversary. Mike's definitions are important because they help clarify what APT is, and what it is not. Expanding on his definition a bit, here is what I believe APT is:
On Tuesday of this week, NIST published release 1.0 of the smart grid interoperability standards. Most notably, this is the first attempt to address cyber security in smart grid deployments. The release points to various standards that can be used for implementing interoperability and security controls, and it's fair to say that it plants the seeds for what should become comprehensive, control-driven guidelines for implementing various aspects of the smart grid.
The timing of this report is perfect, as current smart grid rollouts are often criticized for lacking proper security controls. Our utility customers have shown similar concern about the lack of planning for information security before the rollout phase. This lack of security and risk management perspective in the smart grid ecosystem can jeopardize the overall objective of these smart energy initiatives, and it's about time that we devise a game plan going forward.
The NIST publication will be an important piece of work, as it brings various standards bodies and regulators like IEEE, NERC, and FERC to the table. Note that this is not a control-based standard like others published by NIST, but a guideline pointing to other frameworks that should be referenced when working in the smart grid ecosystem. A more control-based work on cyber security in the smart grid is in development, and a draft of these standards is available for public review.
A few important highlights to pay close attention to in the cyber security sections are:
According to my friend Pete Lindstrom, the Information Systems Security Association (ISSA) is surveying its members for suggestions on three 2009 stories that, in retrospect, were the "most" of something. I'm not a member of the ISSA, but awards are fun, right? Here are my nominations:
Most significant breach of 2009: Heartland Payment Systems
Yes, this breach happened in 2008. But the story broke in 2009, so I'm counting it. The significance of the breach wasn't just its size (130 million credit card numbers). The story that surrounded the breach provoked some interesting debates about the role of PCI, the effectiveness of auditors, and the willingness of clients to QSA-shop, ignore advice, and blame third parties for their own failures.
Most overhyped story: "The cloud is insecure, m'kay?"
It is easy and appropriate -- today -- to discuss the risks associated with putting applications and data on semi-public devices you don't own. But criticizing is easy; fixing is more interesting. I predict that in time "the cloud" will be the best thing that has ever happened to information security, because it focuses attention on the data, not the infrastructure. Or to put it differently, it puts the "information" back into Information Security. This is exactly the discussion we need to have.