Posted by Rob Koplowitz on March 23, 2010
"Well, as of this moment, they're on double-secret probation!"
Dean Wormer, Faber College
Recently I have had a number of conversations regarding the role of pre-moderation in internal social networks. By way of explanation, pre-moderation means approving all content (posts and comments) before it is published. Over the past several years and hundreds of conversations with enterprise clients, this has rarely come up.
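To make the concept concrete, here is a minimal sketch of what a pre-moderation gate looks like in code. The class and method names are my own, purely illustrative; the point is that every submission is held in a pending state and becomes visible only after a human approves it.

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    PENDING = "pending"     # held until a moderator reviews it
    APPROVED = "approved"   # visible to the network
    REJECTED = "rejected"   # never published

@dataclass
class Post:
    author: str
    body: str
    status: Status = Status.PENDING

class PreModeratedFeed:
    """Every post and comment waits in a queue until a human approves it."""

    def __init__(self):
        self._posts: list[Post] = []

    def submit(self, author: str, body: str) -> Post:
        post = Post(author, body)   # enters as PENDING, not yet visible
        self._posts.append(post)
        return post

    def moderate(self, post: Post, approve: bool) -> None:
        # A human decision sits between every submission and the feed.
        post.status = Status.APPROVED if approve else Status.REJECTED

    def visible_posts(self) -> list[Post]:
        return [p for p in self._posts if p.status is Status.APPROVED]
```

The sketch makes the cost visible: a human review is the bottleneck on every single piece of content, which matters for the economics discussed below.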
Just to be clear, there is risk associated with enterprise social networking. Nothing about social technologies exempts them from requirements for privacy, security, protection of intellectual capital, regulatory compliance, and so on. However, given the right degree of attention, all of these are manageable. In fact, over time, social technologies will reduce the risk associated with all of these (more on that later).
OK, so if anyone can say anything at any time, that's risky, right? Well, in theory, but in reality, not really. Remember, we're talking about internal social networks. Presumably, these are IT-sanctioned, authenticated solutions. In other words, everyone knows who you are. And we can assume that, with some degree of planning and education, your users will be aware of the policies that govern the environment. And if you post something outside of policy, well, you get put on probation (or maybe double-secret probation). Animal House references aside, many a fine internal social networking policy begins with "don't do anything that will get you fired."
There are three key points here:
- Internal networks are authenticated, so every post is attributable to a known individual.
- With some planning and education, users know the policies that govern the environment.
- Posting outside of policy carries real consequences, up to and including getting fired.
All of this points to an environment that, with appropriate planning, is largely self-managing. Trying to reduce risk further through pre-moderation can be very costly. There is a large human capital cost in having people review and approve all content, and the cost is perhaps even higher in terms of the friction it creates in the social network, which can frustrate users and gate adoption.
Now, how is it that social media will ultimately lower the risk associated with content? Well, if these systems are transparent and programmable, which they are, then we can envision a world where content is monitored in real time as it is posted. Smart vendors are already pairing policy and text analytics engines with rules engines, and solutions will emerge that can monitor content and communications based on user roles and access controls. These solutions will identify the rare instances where a breach has occurred, quarantine the content, and notify an administrator to handle the exception. Assuming exceptions are rare, the cost in human time to intervene will be minimal.
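As a sketch of that exception-handling flow, here is a toy version in which a simple keyword rules engine stands in for the policy and text analytics engines described above. The roles, rules, and function names are all hypothetical; the point is that content posts immediately, and humans are pulled in only for the rare flagged exception.

```python
import re

# Hypothetical role-based policy rules: each role has patterns that flag a
# potential breach. A real system would use text analytics, not regexes.
POLICY_RULES = {
    "contractor": [re.compile(r"\bclassified\b", re.I)],
    "employee":   [re.compile(r"\bpatient\s+record\b", re.I)],
}

quarantine = []  # flagged posts held for administrator review

def notify_admin(author: str, role: str, rule: re.Pattern) -> None:
    print(f"ADMIN ALERT: post by {author} ({role}) matched {rule.pattern!r}")

def post_content(author: str, role: str, body: str) -> bool:
    """Publish immediately; quarantine and alert only on the rare breach."""
    for rule in POLICY_RULES.get(role, []):
        if rule.search(body):
            quarantine.append((author, body))   # content held back
            notify_admin(author, role, rule)    # human handles the exception
            return False
    return True  # the common case: posted with no human in the loop

# The common case sails through; the rare breach is quarantined.
assert post_content("alice", "employee", "Lunch and learn at noon!")
assert not post_content("bob", "contractor", "Draft of the classified spec")
```

Contrast this with the pre-moderation sketch earlier: here the human cost scales with the number of exceptions rather than with the total volume of content.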
However, even with today's enterprise social solutions, I have rarely run across the need for pre-moderation to reduce risk. Even in working with organizations with very stringent requirements for information management, like defense contractors, government agencies, pharmas, and healthcare providers, social networks have been managed through a combination of policy, self-policing, and post-moderation.
All that said, I recently came across a compelling case for pre-moderation.
Are you aware of internal social networks that are using pre-moderation? If so, does the cost/risk/benefit analysis justify the direction?