EDGEWATER EXPERT’S CORNER: The Pros and Cons of Exposing Data Warehouse Content via Object Models

So you’re the one that’s responsible for your company’s enterprise reporting environment. Over the years, you have succeeded in building out a very stable and yet constantly expanding and diversifying data warehouse, a solid end-user reporting platform, great analytics and flashy corporate dashboards. You’ve done all the “heavy lifting” associated with integrating data from literally dozens of source systems into a single cohesive environment that has become the go-to source for any reporting needs.

Within your EDW, there are mashup entities that exist nowhere else in the corporate domain, and now you are informed that some of the warehouse content you have created will be needed as source data for a new customer service site your company is building.

So what options do you have to accommodate this? The two most common approaches that come to mind are: a) generating extracts to feed to the subscribing application on a scheduled basis; or b) giving the application development team direct access to the EDW tables and views. Both methods have no shortage of pros and cons.

  • Extract Generation – Have the application development team identify the data they want up front and, as a post-process to your nightly ETL run cycles, dump the data to the OS, leaving consumption up to the subscribing apps.
  Pros:
    • A dedicated extract is a single daily/nightly operation that will not impact other subscribers to the warehouse.
    • Application developers will not be generating ad hoc queries that could negatively impact performance for other subscribing users’ reporting operations and analytics activity.

  Cons:
    • You’re uncomfortable publishing secure content to a downstream application environment that may not have the same stringent user-level security measures in place as the EDW.
    • Generating extracts containing large amounts of content may not be the most efficient method for delivering needed information to subscribing applications.
    • Nightly dumps or extracts will only contain EDW data that was available at the time the extracts were generated and will not include the near-real-time content that is constantly being fed to the EDW – and that users will likely expect.
  • Direct Access – Give the subscribing application developers access to exposed EDW content directly so they can query tables and views for the content they want as they need it.

 

  Pros:
    • It’s up to the application development team to get what they need, how they need it and when they need it.
    • More efficient than nightly extracts, as the downstream applications will only pull data as needed.
    • Near-real-time warehouse content will be available for timely consumption by the applications.

  Cons:
    • You’re uncomfortable exposing secure content to application developers that may not have the same stringent user-level security measures in place as the EDW.
    • Application developers will be generating ad hoc queries that could negatively impact performance for other subscribing users’ reporting operations and analytics activity.

 

While both of the above options have merits, they also have a number of inherent limitations – with data security being at the top of the list. Neither of these approaches enforces the database-level security that is already implemented explicitly in the EDW – side-stepping this existing capability will force application developers to either reinvent that wheel or implement some broader, but generally less stringent, application-level security model.

There is another option, though, one we seldom consider as warehouse developers. How about exposing an object model that represents specific EDW content consistently and explicitly to any subscribing applications? You may need to put on your OLTP hat for this one, but hear me out.

The subscribing application development team would be responsible for identifying the specific objects (collections) they wish to consume and would access those objects through a secured procedural interface. On the surface, this approach may sound like you and your team will get stuck writing a bunch of very specific custom procedures, but if you take a step back and think it through, the reality is that your team can create an exposed catalog of fairly generic procedures, all requiring input parameters, including user tokens – so the EDW security model remains in charge of exactly which data is returned to which users on each retrieval.
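
To make that concrete, here is a minimal sketch of what a subscribing application’s call into such a catalog might look like. It assumes a hypothetical procedure named get_customer_profile that takes a user token, a customer key and paging parameters; the procedure name, parameters, column names and connection details are illustrative only, and the exact call conventions will vary by RDBMS and JDBC driver.

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class EdwGetExample {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; in practice the session is established
        // for an EDW account tied to the calling user (see the pooling note in the cons list below).
        try (Connection conn = DriverManager.getConnection(
                "jdbc:sqlserver://edw-host;databaseName=EDW", "edw_app_user", "secret")) {

            // Hypothetical "Get" method from the exposed procedure catalog.
            // The user token lets the EDW security model decide what comes back;
            // the paging parameters keep network traffic in check.
            try (CallableStatement get = conn.prepareCall(
                    "{ call get_customer_profile(?, ?, ?, ?) }")) {
                get.setString(1, "USER-TOKEN-123"); // token identifying the end user
                get.setLong(2, 42L);                // surrogate key of the requested customer
                get.setInt(3, 1);                   // page number
                get.setInt(4, 100);                 // rows per page

                try (ResultSet rs = get.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("customer_name"));
                    }
                }
            }
        }
    }
}
```

The point is that the application never issues its own SQL; it simply asks a published procedure for an object, and the EDW decides, based on the token, what that user is allowed to see.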

The benefits of this approach are numerous, including:

  • Data Security – All requests leverage the existing EDW security model via a user token parameter for every “Get” method.
  • Data Latency – Data delivered by this interface is as current as the EDW itself, so there are none of the latency issues that come with extracted data sets.
  • Predefined Get Methods – No ad hoc or application-based SQL being sent to the EDW. Only procedures generated and/or approved by the EDW team will be hitting the database.
  • Content Control – Only the content that is requested is delivered. All Get methods returning non-static data will require input parameter values for any required filtering criteria – all requests can be validated.
  • Data Page Control – Subscribing applications will be responsible not only for identifying which rows they want via input parameters, but also for how many rows per page, keeping network traffic in check.
  • EDW Transaction Logging – An EDW transaction log can be implemented with autonomous logging that records every incoming request, the accompanying input parameters, the number of rows returned and the duration the transaction took to run. This can aid performance tuning based on the actual request behavior of subscribing applications.
  • Object Reuse – Creation of a generic exposed object catalog will allow other applications to leverage the same consistent set of objects providing continuity of data and interface across all subscribing applications.
  • Nested and N Object Retrieval – Creation of single Get methods that can return multiple and/or nested objects in a single database call (see the sketch after this list).
  • Physical Database Objects – All consumable objects are physically instantiated in the database as user-defined types based on native database data types or other user-defined types.
  • Backend Compatibility – It makes no difference whether you are an Oracle, Microsoft, IBM, PostgreSQL or other mainstream RDBMS shop; conceptually, the approach is the same.
  • Application Compatibility – This approach is compatible with both Java and .NET IDEs, as well as other application development platforms.
  • Reduced Data Duplication – Because data is directly published to subscribing applications, there is no need for subscribers to store that detail content in their transactional database, just key value references.
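
As a follow-on to the nested-object bullet above, here is a hedged sketch of how a single Get method might hand back a parent object and its nested collection in one database call, using JDBC’s multiple-result-set support. The procedure name get_customer_with_orders, its parameters and the column names are hypothetical, and returning several result sets from one call depends on your RDBMS and driver (SQL Server handles this natively; other platforms typically use ref cursors or similar constructs).

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class NestedGetExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:sqlserver://edw-host;databaseName=EDW", "edw_app_user", "secret")) {

            // Hypothetical catalog procedure that returns a customer object plus its
            // nested order collection as two result sets from a single database call.
            try (CallableStatement get = conn.prepareCall(
                    "{ call get_customer_with_orders(?, ?) }")) {
                get.setString(1, "USER-TOKEN-123"); // EDW security token for the caller
                get.setLong(2, 42L);                // persistent surrogate key of the customer

                boolean hasResultSet = get.execute();

                // First result set: the customer object.
                if (hasResultSet) {
                    try (ResultSet customer = get.getResultSet()) {
                        while (customer.next()) {
                            System.out.println("Customer: " + customer.getString("customer_name"));
                        }
                    }
                }

                // Second result set: the nested order collection.
                if (get.getMoreResults()) {
                    try (ResultSet orders = get.getResultSet()) {
                        while (orders.next()) {
                            System.out.println("  Order: " + orders.getString("order_number"));
                        }
                    }
                }
            }
        }
    }
}
```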

There are also a few cons that need to be weighed when considering this path:

  • EDW Table Locks – The warehouse ETL needs to be constructed so that tables publishing to the object model are not exclusively locked during load operations; this avoids brown-out situations for subscribing applications.
  • Persistent Surrogate Keys – EDW tables that are publishing data to subscribing applications via the object model will need to have persistent surrogate primary keys so that subscribing applications can locally store key values obtained from the publisher and leverage the same key values in future operations.
  • Application Connection/Session Pooling – Each application connection (session) to the EDW will need to be established as an EDW user so that security persists through to the object model, which rules out pooling of open connections.
  • Reduced Data Duplication – This one is a double-edged sword in this context: because subscribing applications will not be storing all EDW content locally, their own reporting operations may be limited. However, subscribing applications can also be downstream publishers of data to the same EDW and can report from there. At the risk of convoluting this particular point, “Set” methods can also be created that allow subscribing applications to publish relevant content directly back to the EDW, eliminating the need for batch loading from those applications back to the EDW. Probably a topic for another day, but I wanted to put it out there.

 

So, does that sound like something that you may just want to explore? For more information on this or any of our offerings, please do not hesitate to reach out to us at makewaves@edgewater.com. Thanks!

Empowering digital transformation together at IASA 2017

Our Edgewater Insurance team is packing their bags and is excited to participate in IASA 2017, June 4-7 in Orlando, Florida. We’re proud to, once again, participate in a forum that brings together so many varied professionals in the insurance industry who are passionate about being prepared for the unprecedented change that is sweeping the industry. We look forward to meeting you there to show how our deep expertise, delivering solutions built on trusted technologies, can help you transform your business to become more competitive in a digital world.

Come and see how our experienced team of consultants can help your organization drive change

In industry, technology has often served as a catalyst for modernization, but within insurance we need to do more to understand the consumer to drive change. More than any other opportunity today, CEOs are focused on how to leverage digital technologies within their companies. But there’s still a lot of noise about what digital transformation even means. At Edgewater, we have a unique perspective based on our 25 years of working with insurance carriers. Our consulting team has spent many years working in the industry, from producers to adjusters to vendor management. We have a deep understanding of the business and technology trends impacting the industry as well as the all-important consumer trends. We know that transformation in any company can start big or, of course, it can start small: from creating entirely new business models to remaking just one small business process in a way that delights your customer, changes their engagement model, or improves your speed to market.

We work with executives every day to create and implement their digital transformation strategies. At this event, we will be discussing how digital transformation needs to be at the top of each insurance carrier’s business strategy as the enabler that will bring together consumer, producer, and carrier. Through our interactive solution showcase, attendees can experience first-hand how technology innovations are sweeping the industry and how insurance carriers are progressing in their efforts to digitize. You will be able to explore solutions across many functional areas, including creating a unified experience for the consumer, enabling the producer to engage and add value, and learning and acting on new insights by analyzing transaction and behavior data to create more personalized products and services.

But wait, you don’t have to wait

Get a sneak peek at the strategies we’ll be sharing at the event by downloading our Digital Transformation Quick Start Guide for Insurance Carriers at http://info.edgewater-consulting.com/insuranceguide. The guide is a starting point for how leaders should help their companies create and execute a customer engagement strategy. The Quick Start Guide will help you understand:

  • What Digital Transformation is and what it is not
  • How producers should be using technology to connect with customers
  • How updating your web presence can improve how you engage with customers

See you there!

If you are planning to be at the event, visit our booth #1110 to meet our team and learn more about Edgewater’s solutions and consulting services for the Insurance industry. We’re excited to help you get started on your digital transformation journey.

Digital Transformation Starts with… Exploring the Possibilities. Here’s how

You can learn a lot about what digital transformation is, by first understanding what it is not. Digital transformation is not about taking an existing business process and simply making it digital – going paperless, as an example. Remaking manual processes reduces cost and increases productivity – no question – but the impact of these changes is not exactly transformative. At some point, you’ve squeezed as much efficiency as you can out of your current methods to the point where additional change has limited incremental value.

Digital transformation starts with the idea that you are going to fundamentally change an existing business model. This concept can seem large and ill-defined. Many executives struggle with where to even start. Half of the top six major barriers to digital transformation, according to CIO Insight, are directly related to a hazy vision for success: 1) no sense of urgency, 2) no vision for future uses, and 3) fuzzy business case.

 

It isn’t a big leap to imagine how Disney might be using the geolocation and transaction data from its MagicBand wristbands to learn more about our preferences and activities in the park so it can better personalize our experience.

The MagicBand, as an example, immediately generates new expectations from customers that laggards in the industry have a hard time matching quickly.

 

 

At Edgewater, we worked with Spartan Chemical to create an innovative mobile application to drive customer loyalty. Spartan manufactures chemicals for cleaning and custodial services. They set themselves apart by working with us to build a mobile app that allows their customers to inspect, report on, and take pictures of the offices and warehouses they clean, so that Spartan can easily identify the correct cleaning products and help the customer order them.

Once you’ve defined your vision and decided where you will start, you should assess your landscape and determine the personas you will target with this new capability, product, or service.

At Edgewater, we help you create a digital transformation roadmap to define and implement strategy based on best practices in your industry.

To learn more:

You can rescue a failing IT project

If you work in the IT world, you’ve probably seen projects that have come off the rails and require a major course correction to get back on track. In this blog post, I will highlight the warning signs of a failing project from a recent client, along with the process we follow to get critical initiatives back on track.

Danger ahead!

This client was replacing an important legacy system as part of a long-term modernization program. The project had been in danger from the start:

  • High IT team turnover rate led to new hires that didn’t know the business
  • No strong project management on the team
  • This project was selected to initiate an Agile development approach
  • No Product Owner to represent the needs of the business

After two years, only one major module had been delivered and the updated project timeline was three times longer than the original schedule. The alarming and unexpected extension of the timeline was the motivation our client needed to contact Edgewater for help.

Project Assessment

Our first step was to conduct an assessment of the project to better understand:

  • Major risks
  • Staffing and capabilities
  • The estimation approach
  • User involvement
  • Agile adoption

In this case, the findings clearly indicated a project at a high risk of failure.

Recommendations

Given the determination of “high risk”, Edgewater recommended some bold changes:

  • Establishing a realistic project schedule with achievable milestones
  • Hiring a full-time Product Owner to lead the requirements effort and build the backlog
  • Doubling the size of the IT development team to increase productivity and reduce the timeline
  • Using a blended team of full-time resources and consultants
  • Adding a full-time Project Manager/Scrum Master to lead the Agile development team, keep the project on schedule, and provide reporting to senior management

Initial results

After the first six months, the results are very promising:

  • The project timeline has been cut in half
  • The development team has increased productivity by over 50% and has delivered modules on schedule
  • The requirements backlog has doubled
  • The client IT team is learning best practices so they will be able to support and enhance the system on their own
  • The Project Manager is mentoring the team on Agile roles and responsibilities, and managing the development team

Our client is extremely happy with the productivity improvements, and the users are excited to work on this project.  There’s still a long way to go, but the project rescue has been a success.

To learn more, watch our video then contact kparks@edgewater.com.

Top 5 Warning Signs you are on the ERP Desert Highway

There are many wrong turns on the road to the Desert of ERP Disillusionment. Some teams go wrong right out of the gate. Here are the top five warning signs that your real destination is not the pinnacle of ERP success, but the dry parched sands of the desert.

1. Your steering committee is texting while driving. If your key decision makers are multi-tasking through every steering committee session, it’s easy for them to miss critical information they need to actually steer.

2. The distraction of backseat squabbling causes the PM to miss a turn.  Political infighting and lack of alignment among key stakeholders can be as difficult to manage as any carful of kids on a family roadtrip AFTER you have taken away their favorite electronic toys.

3. The driver is looking in the rearview mirror instead of at the road ahead. While there are some lessons to be learned from your last ERP implementation (how long ago was that?), modern state-of-the-art systems require significant behavior changes in the way users interact with information in the system. If they are used to greenbar reports laid on their desks every morning, the gap may be too big to jump.

4. You read a guidebook about the wilderness once… You can’t learn all your survival skills from a book. In life-threatening terrain, there is no substitute for having an experienced guide on the team. If you haven’t put experienced change leadership into place before you bid your consultants goodbye, you will have neither the insight to recognize the warning signs, nor the skill to lead your people out of the desert.

5. You ran out of gas! You didn’t fill up at the last station because the ATM was out of cash, your credit card was maxed out, and you used your last dollars on Slurpees and Twizzlers for the kids. If you fritter away your project budget on non-value-added customizations like moving fields on forms and cosmetic report changes, you won’t have money left to address any business-critical requirements that come up late in the game.

(Hat tip to Mark Farrell for #5!)

Project Triage During Rapid Business Change Cycles

A few years ago, we ran a series of blog posts on project triage, diagnosis and rescue:

How often do you perform project triage?

Preparing for Project Rescue: Diagnosis

Restoring Projects to Peak Performance

Much of our work since then has been with organizations that struggle to perform meaningful project interventions to align their project portfolios with sudden shifts in business strategy, or to support their underlying corporate culture as it shifts toward more rapid innovation, originality, adaptability, engagement, collaboration and efficacy.

In such fluid business environments, our original medical metaphor doesn’t fully apply; triage and diagnosis were performed from the perspective of project internals. In today’s world, the old project success indicators can be very much out of sync with the business. If IT projects, the project portfolio, and a PMO are not accountable in terms of their value to the business, it’s time to change the ways we think and talk about projects, and begin to define new KPIs for success.

  • First of all, let’s stop using the term scope creep.  To deliver business value, the project organization must be agile enough to rapidly address scope fluidity. Would it make more sense to measure how quickly a project team can replan/re-estimate a shift in scope?
  • Quality metrics may also need to fall by the wayside. Is the current release good enough to push into production with defects noted and expectations managed? Think of the release as a minimum viable product, like lean startups do.
  • In rapidly changing businesses, it’s very difficult to plan out a 12-month milestone plan for IT projects. It makes more sense to define a backlog of objectives at the beginning of the planning phase and perform rolling prioritization, with the complete expectation that the prioritization will change at multiple points during the coming year. In such an environment, how meaningful is it to judge project success against the old notion of “on time”?

In the context of all of this change, it is no longer reasonable to judge projects based on their internal conditions. The measures of project success in today’s world lie in the greater business context.

  • Has the project or project portfolio enabled the business to respond to threats and opportunities more rapidly?
  • Has it increased our innovation speed?
  • Even if the application is buggy, has it improved overall efficiency, enhanced the quality of goods and services, reduced operating costs, or improved the business’ relationship to its customers?

While these questions have answers that are less quantifiable, they are certainly more meaningful in today’s business context. How is your business evaluating project success these days?

Happy Holidays

With the holidays quickly approaching, we reflect on this time of appreciation. Edgewater would like to take this opportunity to thank you for reading our blog and following our thoughts. Whether you are a client, a partner, a team member, or a reader, we hope that you find peace and enjoyment during this holiday season.

May the holidays and the new year be healthy and happy for you and your family. We look forward to sharing with you all in the coming year. See you in 2013!

10 Best New Features of SharePoint 2013

The new SharePoint 2013 has just reached the “Release To Manufacturing” stage! It is available for download now to MSDN subscribers and slated to be officially released in Q1 2013.

To celebrate, we thought we’d share some of the highlights of this upcoming release. While SP13 builds nicely on the foundation of previous versions, it does offer plenty of cool new features and improvements for business users to get excited about.

So here are the top 10 in no specific order.

  1. Cloud First: while SharePoint has been part of Office 365 for some time now, it has been a limited experience. SP13 promises the full experience in the cloud, plus regular releases of improvements and enhancements.
  2. The Newsfeed: taking the best from Facebook and Twitter, the new Newsfeed is the centerpiece of SP13 social push. The foundation was there in SP10 but you needed an external component like NewsGator to make it work. Now you’ll be able to build your network, follow colleagues and post / search the newsfeed at different organizational levels. #hashtags for all! For more…
  3. Communities: the other new social feature is the ability to create communities. A community (as distinct from a project team) is for getting a group of people to collaborate more freely around a topic and share expertise. Built around Discussions, it expands them to show members and their contributions, and allows easy formation of expert communities. For more…
  4. Cross-site publishing allows you, for the first time, to share content across sites, site collections, applications and even farms. We built a custom solution for this for an insurance company that wanted to post new forms to the public site, Agent portal and Intranet in a single action. Now it is built in. For more…
  5. Search has received a major upgrade. The FAST acquisition has finally been integrated into the main SharePoint search, resulting in a long list of great improvements such as: search for conversations, videos and reports; visual results and in-page previews; context-sensitive sorting; advanced filters; and of course better performance, APIs, etc. For more…
  6. SharePoint Apps!: one of the major changes in SP13 is the concept of apps. Apps are just what they sound like: web applications that can be packaged so users can add them to pages or use them from within SharePoint. It is not that different from the earlier concept of solution packs (like the famous Fab 40 that were discontinued in SP10) packaging your web app in a web part, but the new model does have a few advantages. It gives users more control over which apps to use, and while IT can still approve apps, they do not need to install them for users. It can also make internal applications easier to find and reduce redundancy. For more on apps see the Microsoft SharePoint apps blog.
  7. Simple project / task management: for complex project management you still have Project Server, but it is overkill for most simple projects. The new team site template includes the ability to manage tasks, deadlines and a simple work breakdown structure for a project team. It generates a personal and a group view of tasks and timelines, perfect for keeping everyone on time. For more…
  8. Enterprise eDiscovery: one of the essential requirements for ECM in this age is a good eDiscovery mechanism to ensure that searches for content related to litigation or information requests can be executed efficiently across all information repositories. SP13 adds a new eDiscovery center that makes this a lot easier. For more…
  9. New Usage Analytics and useful views: Microsoft is replacing the old SharePoint analytics with two new tools: search analytics and usage analytics. Usage analytics provides a more detailed view of how SharePoint is used and, even better, allows up to 12 custom events to be added and tracked without custom tagging. You can also use the data collected from these tools for useful views such as Most Popular, Popular Searches, etc. For more…
  10. Better support for digital assets: there is no longer a need to create a special media library for digital assets. Once enabled, audio, video and other rich media can be added to any library. For more…

Processes (Workflows) Best Practices

Enabling new workflow options in Microsoft Dynamics CRM 2011 can affect the overall performance of the implementation. Keep the following best practices in mind when considering how to ensure that Microsoft Dynamics CRM workflow functionality performs optimally for a particular implementation.

  • Determine the business purposes for implementing workflow prior to enabling the functionality. During planning, analyze the business scenario and define the goals of workflow within the solution. Workflow functionality can provide business process automation, exception handling, and end-user alerts.

  • Decide on the appropriate security/permissions model for workflow. With established business goals in place, determine the scope of users that will be affected by the workflow implementation. Identify who will create and maintain workflows, apply and track workflows, and troubleshoot workflow issues.

  • Use the Scope property sensibly. The Scope property associated with workflow rules defines the extent of records affected by that rule. For example, rules configured with a User scope affect only the records owned by that user, while rules configured with an Organization scope will affect all records within the organization, regardless of which user owns each record. When creating workflows, make sure to identify the appropriate scope value for each workflow rule to minimize the number of related system events.
  • Take into consideration the overall load associated with workflow within a deployment. Think about the number of instances that each workflow definition triggers. Then consider these additional factors, which affect workflow load:
    • number of workflows
    • which entities
    • number of records
    • data size
    • data load

Considering the factors above for a typical day in the system provides a better understanding of the processes and load variance. Based on this analysis, the workflows can be optimized as required.

  • Review workflow logic wisely. Consider the following factors:
    • Workflows that include infinite loopbacks due to semantic or logic errors never terminate through normal means and therefore greatly affect overall workflow performance.
    • When implementing workflow functionality within a CRM 2011 deployment, be sure to review the logic in workflow rules and any associated plug-ins for potential loopback issues.
    • As part of ongoing maintenance efforts, periodically publish workflow rules and review them to ensure that duplicated workflow rules are not affecting the same records.
  • When defining workflows that are triggered on update events, be cautious. Taking into account the frequency at which ‘Update’ events occur, be very particular in specifying which attributes the system looks for to trigger updates. Also, avoid using ’Wait’ states in workflows that are triggered on Update events.
  • Scale out as necessary to improve performance in large deployments. Use dedicated computers to run the Async service for large-scale deployments. That being said, increasing the number of servers running the Async service creates additional stress on the server running Microsoft SQL Server. Therefore, make sure to follow appropriate optimization and tuning strategies on the data tier and investigate the possibility of increasing the number of computers running Microsoft SQL Server.
  • Test workflows. Make sure to test and monitor the performance of new workflow functionality before implementing it in a production environment.
  • Async plug-ins. Think through whether plug-ins should run synchronously or asynchronously. When the priority is user responsiveness, running a plug-in asynchronously will enable the user interface to respond more quickly. But asynchronous plug-ins introduce added load on the server to persist the asynchronous operation to the database and to process it via the Async service. When scalability is essential, running plug-ins synchronously typically places less load on the servers than running them asynchronously.
  • Balancing workflows and asynchronous plug-ins. Asynchronous plug-in and workflow records in the asyncoperationbase table are managed with the same priority. Hence, introducing a large number of asynchronous plug-ins into a system can reduce overall throughput or increase the time between triggering and processing of individual workflows. For that reason, make sure to consider the relative importance of the workflows in the system before adding numerous asynchronous plug-ins to the solution.
  • Child Workflows. Child workflows run as workflow instances independent of their parents. This can facilitate parallel processing on a system with spare capacity, which can be useful for workflows with multiple independent threads of high processing activity. However, child workflows can introduce additional overhead when the parallel processing is not critical, for example because other workflow logic threads are blocked waiting for external events to occur.

NOTE: If workflow functionality within a CRM 2011 implementation is not acting as expected, verify that the Async service is running properly. Often, restarting the Async service will unblock workflow processing without affecting the functionality of the workflows that were in the pipeline.

  • Monitor the Microsoft Dynamics CRM 2011 database for excess workflow log records. Workflow processing in Microsoft Dynamics CRM depends on the Asynchronous Service, which logs its activity in both the AsyncOperationBase and WorkflowLogBase tables. Performance may be affected as the number of workflow records in the CRM 2011 database grows over time (see the monitoring sketch at the end of this section).

Microsoft Dynamics CRM 2011 includes two specific settings, ‘AsyncRemoveCompletedJobs’ and ‘AsyncRemoveCompletedWorkflows’, which can be configured to ensure that the Asynchronous Service automatically removes log entries from the AsyncOperationBase and WorkflowLogBase tables. These settings are as follows:

    • The ‘AsyncRemoveCompletedWorkflows’ setting is visible to users in the interface for defining new workflows, and users can set the Removal flag independently on each of the workflows they define. NOTE: When registering an Async plug-in, users can also specify that successfully completed Async plug-in execution records be deleted from the system.
    • Users can change the ‘AsyncRemoveCompletedJobs’ setting by using the deployment Web service. Nonetheless, the setting is configured to True by default, which ensures automatic removal of entries for successfully completed jobs from the AsyncOperationBase table.
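
As a simple way to keep an eye on the table growth mentioned above, the sketch below counts rows in the AsyncOperationBase and WorkflowLogBase tables named earlier. The connection details are placeholders, the query is read-only, and in practice you would likely filter by the job state/status columns and trend the counts over time rather than take a single snapshot.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class WorkflowLogGrowthCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder connection string; point it at the organization database
        // (read-only monitoring only – never modify CRM tables directly).
        try (Connection conn = DriverManager.getConnection(
                "jdbc:sqlserver://crm-sql-host;databaseName=Contoso_MSCRM",
                "monitor_user", "secret");
             Statement stmt = conn.createStatement()) {

            // Row counts for the two tables the Async service writes to. Watching these
            // over time shows whether completed jobs are actually being cleaned up.
            try (ResultSet rs = stmt.executeQuery(
                    "SELECT (SELECT COUNT(*) FROM AsyncOperationBase) AS async_rows, " +
                    "       (SELECT COUNT(*) FROM WorkflowLogBase)   AS workflow_log_rows")) {
                if (rs.next()) {
                    System.out.println("AsyncOperationBase rows: " + rs.getLong("async_rows"));
                    System.out.println("WorkflowLogBase rows:    " + rs.getLong("workflow_log_rows"));
                }
            }
        }
    }
}
```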

Is the 1-9-90 rule for social participation dead?

It has long been an axiom that getting people to participate in online communities is hard, and the 1/9/90 rule helped explain why: 1% will be die-hard content creators, 9% will participate, and 90% will be passive consumers sitting on the sidelines.

A recent BBC study claims the old rules are dead and that a whopping 77% of adults should be considered participators in some capacity. Interestingly, GigaOm pounced and claimed the old rules still apply.

I think the BBC research is on to something and that online participation patterns have changed. A few things may have contributed:

  • Consolidation: social networks such as Facebook and Twitter consolidate updates and posts from multiple communities for us and allow us to respond directly from there. You no longer need to go and check on 7 different communities to see what is going on.
  • Ease of content creation and sharing, especially from mobile devices. Probably too easy, if you ask me. If you allow it, your phone will post your location, the pictures you take and more without even asking. The success of Instagram is just one example. Being connected 100% of the time allows us to interact 100% of the day.
  • We are not anonymous anymore. It has been a slow change but if the late 90’s were about virtual identities and avatars, now we interact as real people. It may look like a small change but the whole nature of online interaction shifted from an outlet to interactions we wanted to have outside of our normal (and sometimes restrictive) social circle to where now most of the online interaction is with our social circle. More and more the online communities and social networks augment and extend our real relationships with people and brands.
  • Social expectations have changed. Some people who came to the party felt a bit out of place and stayed close to the wall for a while, but after some time you realize that keeping to yourself in a social setting is not very nice and that people actually notice. If you are part of the community, participation is now expected.

So if the BBC is right and we should be expecting more participation, what does it mean for businesses?

Business social participation may still be closer to the old rules, because business communities do not reflect a close-knit social group, but as more people become comfortable with sharing, it will start to have an impact.

Internally, collaboration and social networking with colleagues will eventually follow the same pattern of heightened participation if you provide the same enablers: aggregate and consolidate activities and updates so they are easy to access, make it easy to respond to them, and embed interaction and sharing everywhere in internal web applications, sites, tools, etc. Making sharing a social norm may not be too far off.

Externally, in addition to the brand enthusiasts and deal seekers, there is now the potential to turn a lot more people into participants:

  • Think about creating content that people would want to share. Too many websites and social media sites focus on the marketing side (“what we have to sell”). Cool or useful things to do with the product, or content that is simply related to the category, will go viral much more easily.
  • Many websites have added sharing and likes to their pages, but few take it to the level of actually allowing specific questions or comments on content or products through social networks.
  • Think mobile sharing, from QR codes in trade show booths to special coupons for scanning or photographing items in the store. Even my dentist has a promotion offering a free whitening pen if you scan a code and like him on Facebook. Brilliant.