Assumptions Are a Necessary Evil

In over 27 years, I have never experienced a major problem on a systems implementation that did not begin with an assumption.

“Of course they can do it; they have a ton of experience.”
“Of course the development servers are being backed up.”
“Of course the new system can do that; it’s a tier 1 ERP – how can it not do it?”
“Of course there’s a compatible upgrade path; the vendor’s web site said so.”

Yeah, well, not always.

Fear the statement that begins with “Of course…”. A handy web dictionary defines an assumption as “a thing that is accepted as true or as certain to happen, without proof.”

So, assumptions are bad and should be eliminated. If you get rid of all assumptions, then you are good to go, right?

Yeah, well, not always.

Why? Because eliminating all assumptions takes time. It takes a lot of time and costs a ton of money.

Consider a project to select a new ERP system. A well-architected project that includes a good process and the right level of participation from the right people generally takes six months for an average mid-sized manufacturer. If you hit that schedule, you have made a lot of assumptions, whether you know it or not. Why? Because if you try to eliminate every possible assumption, that same selection project would take years, if it could be finished at all.

The pace of change within your technology environment, much less your business, as well as the tools you are considering, turns a nicely bounded selection project into a fruitless attempt to match your knowledge and certainty to things that are constantly evolving. There would be no end point in that scenario. By the time you have eliminated all assumptions, the people and technology have evolved from underneath all your hard-won knowledge.

So, we have a conundrum: if you make assumptions, you will screw up; yet if you don’t make assumptions, you cannot proceed. Your options appear to be limited. Certainly, there are situations that require eliminating all assumptions – I’m thinking here of building a space shuttle. But if you aren’t shooting for the moon with your project, what do you do?

You must make assumptions to move forward, while balancing them against overall risk. You may never get to the point where you make assumptions your ally, but you can at least reach a cautious neutrality with them.

EDGEWATER EXPERT’S CORNER: Diving into the Deeper End of SQL – Part 1

SQL is something of a funny language insofar as nearly every developer I have ever met seems to believe they are “fluent” in it, when the fact of the matter is that most developers just wade around in the shallows and never really dive into the deep end. Instead, from time to time we get pushed into the deep end, learning additional bits and pieces and expanding our vocabulary simply to keep from drowning.

The real challenge here is that there are several dialects of SQL and multiple SQL-based procedural languages (i.e., PL/SQL, T-SQL, Watcom-SQL, PL/pgSQL, NZPLSQL, etc.), and not everything you learn in one dialect is implemented the same way in the others. In 1986, the ANSI/ISO SQL standard was created with the objective of SQL interoperability across RDBMS products. Unfortunately, from the inception of that standard through every subsequent revision (eight in all), no database vendor has adhered strictly to it. Individual vendors instead choose to add their own extensions to the language to provide additional functionality. Some of these extensions go full circle and get folded into later versions of the standard; others remain product-specific.

Something of a long-winded introduction, but necessary for what I want to discuss. Over the coming months I will be posting some write-ups on the deeper end of SQL, discussing topics aimed at expanding our SQL vocabularies. Today, I want to talk about window functions. These were introduced as part of the 2003 revision to the ANSI/ISO SQL standard. Window functions are probably one of the most powerful extensions to the SQL language ever introduced, and most developers – yes, even the ones who consider themselves fluent in SQL – have never even heard of them. The short definition of a window function is a function that performs a calculation or aggregation across a set of rows within a partition of a dataset that have something in common. Something of a lackluster definition, you say? I agree, but before you click away, take a peek at the examples below and I am sure you’ll find something useful.

For starters, I would like to explain what a “window” of data is. Simply put, a window of data is a group of rows in a table or query that share common, partitionable attributes. In the table below, I have highlighted five distinct windows of data, partitioned by department. In general, data windows can be created from virtually any foreign key that repeats in a dataset, or from any other repeating value. [Image]
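To make this concrete in code as well, here is a minimal sketch assuming a hypothetical EMPLOYEES table with EMPLOYEE_NAME, DEPARTMENT and SALARY columns. The query makes each department’s window visible by computing per-window aggregates next to every detail row:

SELECT employee_name,
       department,
       salary,
       COUNT(*)    OVER (PARTITION BY department) AS dept_headcount,   -- size of this row's window
       AVG(salary) OVER (PARTITION BY department) AS dept_avg_salary   -- aggregate across the window
  FROM employees;

Unlike a GROUP BY, the detail rows are preserved; each row simply carries its window’s aggregates alongside it.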

Example 1: Ranked List Function – In this example, using the RANK function, I will create a ranked list of employees in each department by salary. Probably not the most exciting example, but think about alternate ways of doing the same thing without the RANK function: the simple query below gets really ugly…quick. [Image]
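For reference, a minimal sketch of the query behind this example, again assuming the hypothetical EMPLOYEES table:

SELECT department,
       employee_name,
       salary,
       RANK() OVER (PARTITION BY department      -- one independent ranking per department window
                    ORDER BY salary DESC) AS salary_rank
  FROM employees;

Try expressing the same thing with self-joins or correlated subqueries and you will see the “really ugly…quick” part for yourself.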

Example 2: Dense Ranked List Function – Similar to the RANK function, except the DENSE_RANK value is the same for members of the window having the same salary value, and no ranks are skipped after ties. [Image]
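Putting the two functions side by side (same hypothetical table) shows the difference: with two employees tied at the top salary, RANK yields 1, 1, 3 while DENSE_RANK yields 1, 1, 2.

SELECT department,
       employee_name,
       salary,
       RANK()       OVER (PARTITION BY department ORDER BY salary DESC) AS rnk,        -- gaps after ties
       DENSE_RANK() OVER (PARTITION BY department ORDER BY salary DESC) AS dense_rnk   -- no gaps
  FROM employees;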

Example 3: FIRST and LAST Functions – Using the first and last functions, we can easily get the MIN and MAX salary values for the department window and include them in our ranked list. Yup, you are sitting on one row in the window and looking back to the first row and forward to the last row of the same window, all at the same time! No cursors needed! [Image]
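One way to express this is with the FIRST_VALUE and LAST_VALUE analytic functions (a sketch on the same hypothetical table):

SELECT department,
       employee_name,
       salary,
       RANK() OVER (PARTITION BY department ORDER BY salary DESC) AS salary_rank,
       FIRST_VALUE(salary) OVER (PARTITION BY department
                                 ORDER BY salary DESC) AS max_dept_salary,
       LAST_VALUE(salary)  OVER (PARTITION BY department
                                 ORDER BY salary DESC
                                 ROWS BETWEEN UNBOUNDED PRECEDING
                                          AND UNBOUNDED FOLLOWING) AS min_dept_salary
  FROM employees;

Note the explicit ROWS BETWEEN clause on LAST_VALUE: without it, the default window frame stops at the current row and LAST_VALUE quietly returns the current row’s salary instead of the last row of the window.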

Example 4: LEAD and LAG Functions – These two are without a doubt a couple of the most amazing functions you will ever use. The LAG function allows us to be sitting on one row in a data window and then look back at any previous row in the window of data. Conversely, the LEAD function allows us to be sitting on one row in a data window and then look forward at any upcoming row in the window of data.

Syntax:

LAG (value_expression [,offset] [,default]) OVER ([query_partition_clause] order_by_clause)

LEAD (value_expression [,offset] [,default]) OVER ([query_partition_clause] order_by_clause)

In the illustration below, from within the context of the data window, I am looking up at the previous record and down at the next record, and presenting that data as part of the current record. To look further ahead or behind in the same data window, simply change the value of the offset parameter. Prior to the introduction of these functions, mimicking this behavior without a cursor was essentially impossible; now, with a single simple line of code, I can look up or down at other records from the current record. Just too darn cool! [Image]
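A minimal sketch against the hypothetical EMPLOYEES table (adding a hypothetical HIRE_DATE column for ordering); the optional third argument supplies a default for the first and last rows of each window, which would otherwise return NULL:

SELECT department,
       employee_name,
       hire_date,
       LAG(employee_name, 1, 'n/a')  OVER (PARTITION BY department
                                           ORDER BY hire_date) AS previous_hire,   -- look back one row
       LEAD(employee_name, 1, 'n/a') OVER (PARTITION BY department
                                           ORDER BY hire_date) AS next_hire        -- look ahead one row
  FROM employees;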

Example 5: LEAD and LAG Functions – Just another example of what you can do with the LEAD and LAG functions, to get you thinking. In this example, our billing system has a customer credit limit table in which, for each customer, a single record is active and historical data is preserved in inactive records. We want to add this table to our data warehouse as a type-2 dimension, which means end-dating and keying all the records as part of the process. We could write a cursor and loop through the records multiple times to calculate the end dates before posting them to the data warehouse… or, using the LEAD function, we can calculate each end date from the create date of the next record in the window (a sketch follows the illustrations below). The two illustrations depict the data in the source (billing system), then in the target data warehouse table. All of this with just a dozen lines of SQL using window functions – how many lines of code would this take without them?

Data in source billing system. [Image]

Transformed data for load to data warehouse as T-2 dimension. [Image]
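A minimal sketch of that transformation, assuming a hypothetical CUSTOMER_CREDIT_LIMIT source table with CUSTOMER_ID, CREDIT_LIMIT and CREATE_DATE columns:

SELECT customer_id,
       credit_limit,
       create_date AS effective_date,
       -- end-date each record from the create date of the NEXT record in the window;
       -- the newest record gets a far-future end date instead
       LEAD(create_date, 1, DATE '9999-12-31')
            OVER (PARTITION BY customer_id ORDER BY create_date) AS end_date,
       CASE WHEN LEAD(create_date)
                 OVER (PARTITION BY customer_id ORDER BY create_date) IS NULL
            THEN 'Y' ELSE 'N'
       END AS active_flag
  FROM customer_credit_limit;

One pass over the source, no cursor, no looping through the records multiple times.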

Example 6: LISTAGG Function – The LISTAGG function allows us to return the values of a column from multiple rows as a single delimited column, aka a “multi-valued field” – remember PICK or Revelation? [Image]
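A minimal sketch on the hypothetical EMPLOYEES table, rolling each department’s employees up into one delimited value:

SELECT department,
       LISTAGG(employee_name, ', ')
           WITHIN GROUP (ORDER BY employee_name) AS department_roster   -- one delimited value per department
  FROM employees
 GROUP BY department;

(Used with GROUP BY as shown, LISTAGG acts as an aggregate; Oracle also allows an OVER clause to repeat the aggregated list on every detail row.)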

One closing note: all of the examples shown are in Oracle, but equivalent functionality also exists in MS SQL Server, IBM DB2, Netezza and PostgreSQL.

So what do you think? Ready to dive into the deep end and try some of this? At Edgewater Consulting, we have over 25 years of successful database and data warehouse implementations behind us, so if you’re still wading in the kiddie pool – or, worse yet, swimming with the sharks – give us a call and we can provide you with a complimentary consultation with one of our database experts. To learn more about our consulting services, download our new Digital Transformation Guide.

EDGEWATER EXPERT’S CORNER: The Pros and Cons of Exposing Data Warehouse Content via Object Models

So you’re the one that’s responsible for your company’s enterprise reporting environment. Over the years, you have succeeded in building out a very stable and yet constantly expanding and diversifying data warehouse, a solid end-user reporting platform, great analytics and flashy corporate dashboards. You’ve done all the “heavy lifting” associated with integrating data from literally dozens of source systems into a single cohesive environment that has become the go-to source for any reporting needs.

Within your EDW, there are mashup entities that exist nowhere else in the corporate domain and now you are informed that some of the warehouse content you have created will be needed as source data for a new customer service site your company is creating.

So what options do you have to accommodate this? The two most common approaches that come to mind are: a) generating extracts to feed to the subscribing application on a scheduled basis; or b) just give the application development team direct access to the EDW tables and views. Both methods have no shortage of pros and cons.

  • Extract Generation – Have the application development team identify the data they want up front and as a post-process to your nightly ETL run cycles, dump the data to the OS and leave consuming it up to the subscribing apps.
Pros:
  • A dedicated extract is a single daily/nightly operation that will not impact other subscribers to the warehouse.
  • Application developers will not be generating ad hoc queries that could negatively impact performance for other subscribing users’ reporting operations and analytics activity.
Cons:
  • You’re uncomfortable publishing secure content to a downstream application environment that may not have the same stringent user-level security measures in place as the EDW has.
  • Generating extracts containing large amounts of content may not be the most efficient method for delivering needed information to subscribing applications.
  • Nightly dumps or extracts will only contain EDW data that was available at the time the extracts were generated and will not contain the near-real-time content that is constantly being fed to the EDW – and that users will likely expect.
  • Direct Access – Give the subscribing application developers access to exposed EDW content directly so they can query tables and views for the content they want as they need it.

Pros:
  • It’s up to the application development team to get what they need, how they need it and when they need it.
  • More efficient than nightly extracts, as the downstream applications will only pull data as needed.
  • Near-real-time warehouse content will be available for timely consumption by the applications.
Cons:
  • You’re uncomfortable exposing secure content to application developers that may not have the same stringent user-level security measures in place as the EDW has.
  • Application developers will be generating ad hoc queries that could negatively impact performance for other subscribing users’ reporting operations and analytics activity.

 

While both of the above options have merits, they also have a number of inherent limitations – with data security being at the top of the list. Neither of these approaches enforces the database-level security that is already implemented explicitly in the EDW – side-stepping this existing capability will force application developers to either reinvent that wheel or implement some broader, but generally less stringent, application-level security model.

There is another option, though, one we seldom consider as warehouse developers. How about exposing an object model that represents specific EDW content consistently and explicitly to any subscribing applications? You may need to put on your OLTP hat for this one, but hear me out.

The subscribing application development team would be responsible for identifying the specific objects (collections) they wish to consume, and would access those objects through a secured procedural interface. On the surface, this approach may sound like you and your team will get stuck writing a bunch of very specific custom procedures, but if you take a step back and think it through, the reality is that your team can create an exposed catalog of rather generic procedures, all requiring input parameters, including user tokens – so the EDW security model remains in charge of exactly which data is returned to which users on each retrieval.
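To make that concrete, here is a minimal PL/SQL sketch of one such generic “Get” method. The procedure, table and security-package names (EDW_ORDERS, EDW_SECURITY.USER_CAN_SEE) are hypothetical stand-ins for illustration, not a prescribed API:

CREATE OR REPLACE PROCEDURE get_customer_orders (
    p_user_token  IN  VARCHAR2,        -- identifies the end user, so EDW security stays in charge
    p_customer_id IN  NUMBER,          -- required filtering criteria
    p_page_size   IN  NUMBER,          -- rows per page, to keep network traffic in check
    p_page_number IN  NUMBER,
    p_results     OUT SYS_REFCURSOR)   -- the returned collection
AS
BEGIN
    OPEN p_results FOR
        SELECT *
          FROM (SELECT ord.*,
                       ROW_NUMBER() OVER (ORDER BY ord.order_date DESC) AS rn
                  FROM edw_orders ord
                 WHERE ord.customer_id = p_customer_id
                   -- hypothetical row-level security check driven by the user token
                   AND edw_security.user_can_see(p_user_token, ord.row_id) = 'Y')
         WHERE rn BETWEEN (p_page_number - 1) * p_page_size + 1
                      AND  p_page_number * p_page_size;
END get_customer_orders;
/

Each subscribing application calls the same catalog of procedures; what changes per call is only the token and the filter parameters, so the warehouse, not the application, decides what comes back.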

The benefits of this approach are numerous, including:

  • Data Security – All requests leverage the existing EDW security model via a user token parameter for every “Get” method.
  • Data Latency – Data being delivered by this interface is as current as it is in the EDW so there are no latency issues as would be expected with extracted data sets.
  • Predefined Get Methods – No ad hoc or application-based SQL being sent to the EDW. Only procedures generated and/or approved by the EDW team will be hitting the database.
  • Content Control – Only the content that is requested is delivered. All Get methods returning non-static data will require input parameter values for any required filtering criteria – all requests can be validated.
  • Data Page Control – Subscribing applications will be responsible not only for identifying which rows they want via input parameters, but also for specifying how many rows per page, keeping network traffic in check.
  • EDW Transaction Logging – An EDW transaction log can be implemented with autonomous logging that records every incoming request, the accompanying input parameters, the number of rows returned and the duration the transaction took to run. This can aid performance tuning for the actual request behaviors of subscribing applications (a minimal sketch of such a logging procedure follows this list).
  • Object Reuse – Creation of a generic exposed object catalog will allow other applications to leverage the same consistent set of objects providing continuity of data and interface across all subscribing applications.
  • Nested and N Object Retrieval – Creation of single Get methods that can return multiple and/or nested objects in a single database call.
  • Physical Database Objects – All consumable objects are physically instantiated in the database as user-defined types based on native database data types or other user-defined types.
  • Backend Compatibility – It makes no difference what type of shop you are (Oracle, Microsoft, IBM, PostgreSQL or some other mainstream RDBMS); conceptually, the approach is the same.
  • Application Compatibility – This approach is compatible with both Java and .NET IDE’s, as well as other application development platforms.
  • Reduced Data Duplication – Because data is directly published to subscribing applications, there is no need for subscribers to store that detail content in their transactional database, just key value references.
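As noted in the EDW Transaction Logging bullet above, here is a minimal sketch of the autonomous logging idea; the table and procedure names are hypothetical. The PRAGMA AUTONOMOUS_TRANSACTION directive lets each log row commit independently of whatever the calling Get method is doing:

CREATE OR REPLACE PROCEDURE log_edw_request (
    p_user_token IN VARCHAR2,
    p_method     IN VARCHAR2,
    p_parameters IN VARCHAR2,   -- the accompanying input parameter values
    p_rows       IN NUMBER,     -- number of rows returned
    p_elapsed_ms IN NUMBER)     -- how long the request took to run
AS
    PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
    INSERT INTO edw_transaction_log
        (log_time, user_token, method_name, parameters, rows_returned, elapsed_ms)
    VALUES
        (SYSTIMESTAMP, p_user_token, p_method, p_parameters, p_rows, p_elapsed_ms);
    COMMIT;   -- commits only this log entry, never the caller's work
END log_edw_request;
/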

There are also a few cons that need to be weighed when considering this path:

  • EDW Table Locks – The warehouse ETL needs to be constructed so that tables publishing to the object model are not exclusively locked during load operations; this eliminates brown-out situations for subscribing applications.
  • Persistent Surrogate Keys – EDW tables that are publishing data to subscribing applications via the object model will need to have persistent surrogate primary keys so that subscribing applications can locally store key values obtained from the publisher and leverage the same key values in future operations.
  • Application Connection/Session Pooling – Each application connection (session) to the EDW will need to be established based on an EDW user for security to persist to the object model, so no pooling of open connections.
  • Reduced Data Duplication – This is a double-edged sword in this context, because subscribing applications will not be storing all EDW content locally; as a result, there may be limitations on their reporting operations. However, the subscribing applications can also be downstream publishers of data to the same EDW and can report from there. Additionally, at the risk of convoluting this particular point, “Set” methods can also be created that allow the subscribing application(s) to publish relevant content directly back to the EDW, eliminating the need for batch loading back to the EDW. Probably a topic for another day, but I wanted to put it out there.

 

So, does that sound like something that you may just want to explore? For more information on this or any of our offerings, please do not hesitate to reach out to us at makewaves@edgewater.com. Thanks!

Empowering digital transformation together at IASA 2017

Our Edgewater Insurance team is packing their bags and is excited to participate in IASA 2017, June 4-7 in Orlando, Florida. We’re proud to, once again, participate in a forum that brings together so many varied professionals in the insurance industry who are passionate about being prepared for the unprecedented change that is sweeping the industry. We look forward to meeting you there to show how our deep expertise, delivering solutions built on trusted technologies, can help you transform your business to become more competitive in a digital world.

Come and see how our experienced team of consultants can help your organization drive change

In industry, technology has often served as a catalyst for modernization, but within insurance we need to do more to understand the consumer to drive change. More than any other opportunity today, CEOs are focused on how to leverage digital technologies within their companies. But there’s still a lot of noise about what digital transformation even means. At Edgewater, we have a unique perspective based on our 25 years of working with insurance carriers. Our consulting team has spent many years working in the industry, all the way from producers to adjusters, and in vendor management. We have a deep understanding of the business and technology trends impacting the industry, as well as the all-important consumer trends. We know that transformation in any company can start big or, of course, it can start small: from creating entirely new business models to remaking just one small business process in a way that delights your customers, changes their engagement model, or improves your speed to market.

We work with executives every day to create and implement their digital transformation strategies. At this event, we will be discussing why digital transformation needs to be at the top of each insurance carrier’s business strategy as the enabler that brings together consumer, producer, and carrier. Attendees can experience first-hand, through our interactive solution showcase, how technology innovations are sweeping the industry and how insurance carriers are progressing in their efforts to digitize. You will be able to explore solutions across many functional areas: creating a unified experience for the consumer, enabling the producer to engage and add value, and learning and acting on new insights by analyzing transaction and behavior data to create more personalized products and services.

But wait, you don’t have to wait

Get a sneak peek at the strategies we’ll be sharing at the event by downloading our Digital Transformation Quick Start Guide for Insurance Carriers at http://info.edgewater-consulting.com/insuranceguide. The guide is a starting point for how leaders should help their companies create and execute a customer engagement strategy. The Quick Start Guide will help you understand:

  • What Digital Transformation is and what it is not
  • How producers should be using technology to connect with customers
  • How updating your web presence can improve how you engage with customers

See you there!

If you are planning to be at the event, visit our booth #1110 to meet our team and learn more about Edgewater’s solutions and consulting services for the Insurance industry. We’re excited to help you get started on your digital transformation journey.

Digital Transformation Starts with… Exploring the Possibilities. Here’s how

You can learn a lot about what digital transformation is, by first understanding what it is not. Digital transformation is not about taking an existing business process and simply making it digital – going paperless, as an example. Remaking manual processes reduces cost and increases productivity – no question – but the impact of these changes is not exactly transformative. At some point, you’ve squeezed as much efficiency as you can out of your current methods to the point where additional change has limited incremental value.

Digital transformation starts with the idea that you are going to fundamentally change an existing business model. This concept can seem large and ill-defined. Many executives struggle with where to even start. Half of the top six major barriers to digital transformation, according to CIO Insight, are directly related to a hazy vision for success: 1) no sense of urgency, 2) no vision for future uses, and 3) fuzzy business case.

 

Take Disney’s MagicBand wristbands, for example. It isn’t a big leap to imagine how Disney might be using the geolocation and transaction data from these bracelets to learn more about our preferences and activities in the park so they can better personalize our experience.

This MagicBand immediately generates new expectations from customers that laggards in the industry have a hard time matching quickly.
At Edgewater, we worked with Spartan Chemical to create an innovative mobile application to drive customer loyalty. Spartan manufactures chemicals for cleaning and custodial services. They set themselves apart by working with us to build a mobile app that allows their customers to inspect, report on, and take pictures of the offices and warehouses they cleaned so that Spartan could easily identify and help the customer order the correct cleaning products.

Once you’ve defined your vision and decided where you will start, you should assess your landscape and determine the personas you will target with this new capability, product, or service.

At Edgewater, we help you create a digital transformation roadmap to define and implement strategy based on best practices in your industry.

To learn more:

You can rescue a failing IT project

If you work in the IT world, you’ve probably seen projects that have come off the rails and require a major course correction to get back on track. In this blog post, I will highlight the warning signs of a failing project from a recent client, along with the process we follow to get critical initiatives back on track.

Danger ahead!

This client was replacing an important legacy system as part of a long-term modernization program. The project had been in danger from the start:

  • High IT team turnover rate led to new hires that didn’t know the business
  • No strong project management on the team
  • Selected this project to initiate an Agile development approach
  • No Product Owner to represent the needs of the business

After two years, only one major module had been delivered, and the updated project timeline was three times longer than the original schedule. The alarming and unexpected extension of the timeline was the motivation our client needed to contact Edgewater for help.

Project Assessment

Our first step was to conduct an assessment of the project to better understand:

  • Major risks
  • Staffing and capabilities
  • The estimation approach
  • User involvement
  • Agile adoption

In this case, the findings clearly indicated a project at a high risk of failure.

Recommendations

Given the determination of “high risk”, Edgewater recommended some bold changes:

  • Establishing a realistic project schedule with achievable milestones
  • Hiring a full-time Product Owner to lead the requirements effort and build the backlog
  • Doubling the size of the IT development team to increase productivity and reduce the timeline
  • Using a blended team of full-time resources and consultants
  • Adding a full-time Project Manager/Scrum Master to lead the Agile development team, keep the project on schedule, and provide reporting to senior management

Initial results

After the first six months, the results are very promising:

  • The project timeline has been cut in half
  • The development team has increased productivity by over 50% and has delivered modules on schedule
  • The requirements backlog has doubled
  • The client IT team is learning best practices so they will be able to support and enhance the system on their own
  • The Project Manager is mentoring the team on Agile roles and responsibilities, and managing the development team

Our client is extremely happy with the productivity improvements, and the users are excited to work on this project.  There’s still a long way to go, but the project rescue has been a success.

To learn more, watch our video then contact kparks@edgewater.com.

Top 5 Warning Signs you are on the ERP Desert Highway

There are many wrong turns on the road to the Desert of ERP Disillusionment. Some teams go wrong right out of the gate. Here are the top five warning signs that your real destination is not the pinnacle of ERP success, but the dry, parched sands of the desert.

1. Your steering committee is texting while driving. If your key decision makers are multi-tasking through every steering committee session, it’s easy for them to miss critical information they need to actually steer.

2. The distraction of backseat squabbling causes the PM to miss a turn. Political infighting and lack of alignment among key stakeholders can be as difficult to manage as any carful of kids on a family road trip AFTER you have taken away their favorite electronic toys.

3. The driver is looking in the rearview mirror instead of at the road ahead. While there are some lessons to be learned from your last ERP implementation (how long ago was that?), modern, state-of-the-art systems require significant behavior changes in the way users interact with information. If your users are accustomed to greenbar reports laid on their desks every morning, the gap may be too big to jump.

4. You read a guidebook about the wilderness once… You can’t learn all your survival skills from a book. In life-threatening terrain, there is no substitute for having an experienced guide on the team. If you haven’t put experienced change leadership in place before you bid your consultants goodbye, you will have neither the insight to recognize the warning signs nor the skill to lead your people out of the desert.

5. You ran out of gas! You didn’t fill up at the last station because the ATM was out of cash, your credit card was maxed out, and you spent your last dollars on Slurpees and Twizzlers for the kids. If you fritter away your project budget on non-value-added customizations like moving fields on forms and cosmetic report changes, you won’t have money left to address the business-critical requirements that come up late in the game.

(Hat tip to Mark Farrell for #5!)

Project Triage During Rapid Business Change Cycles

A few years ago, we ran a series of blog posts on project triage, diagnosis and rescue:

How often do you perform project triage?

Preparing for Project Rescue: Diagnosis

Restoring Projects to Peak Performance

In much of our work since then, we have been working with organizations that struggle with performing meaningful project interventions to align their project portfolio with sudden shifts in business strategy, or to support their underlying corporate culture as it shifts toward more rapid innovation, originality, adaptability, engagement, collaboration and efficacy.

In such fluid business environments, our original medical metaphor doesn’t fully apply; triage and diagnosis were performed from the perspective of project internals. In today’s world, the old project success indicators can be very much out of sync with the business. If IT projects, the project portfolio, and the PMO are not accountable in terms of their value to the business, it’s time to change the way we think and talk about projects, and begin to define new KPIs for success.

  • First of all, let’s stop using the term scope creep.  To deliver business value, the project organization must be agile enough to rapidly address scope fluidity. Would it make more sense to measure how quickly a project team can replan/re-estimate a shift in scope?
  • Quality metrics may also need to fall by the wayside. Is the current release good enough to push into production with defects noted and expectations managed – thinking of the release as a minimum viable product, the way lean startups do?
  • In rapidly changing businesses, it’s very difficult to plan out a 12-month milestone plan for IT projects. It makes more sense to define a backlog of objectives at the beginning of the planning phase and perform rolling prioritization, with the full expectation that priorities will change at multiple points during the coming year. In such an environment, how meaningful is it to judge project success against the old notion of “on time”?

In the context of all of this change, it is no longer reasonable to judge projects based on their internal conditions. The measures of project success in today’s world lie in the greater business context.

  • Has the project or project portfolio enabled the business to respond to threats and opportunities more rapidly?
  • Has it increased our innovation speed?
  • Even if the application is buggy, has it improved overall efficiency, enhanced the quality of goods and services, reduced operating costs, or improved the business’ relationship to its customers?

While these questions have answers that are less quantifiable, they are certainly more meaningful in today’s business context. How is your business evaluating project success these days?

Happy Holidays

With the holidays quickly approaching, we reflect on this time of appreciation. Edgewater would like to take this opportunity to thank you for reading our blog and following our thoughts. Whether you are a client, a partner, a team member, or a reader, we hope that you find peace and enjoyment during this holiday season.

May the holidays and the new year be healthy and happy for you and your family. We look forward to sharing with you all in the coming year. See you in 2013!

10 Best New Features of SharePoint 2013

The new SharePoint 2013 has just reached the “Release To Manufacturing” stage! It is available for download now to MSDN subscribers and is slated to be officially released in Q1 2013.

To celebrate, we thought we would share some of the highlights of this upcoming release. While SP13 builds nicely on the foundation of previous versions, it offers plenty of cool new features and improvements for business users to get excited about.

So here are the top 10 in no specific order.

  1. Cloud First: while SharePoint has been part of Office 365 for some time now, it was a limited experience. SP13 promises the full experience in the cloud, plus regular releases of improvements and enhancements.
  2. The Newsfeed: taking the best from Facebook and Twitter, the new Newsfeed is the centerpiece of the SP13 social push. The foundation was there in SP10, but you needed an external component like NewsGator to make it work. Now you’ll be able to build your network, follow colleagues, and post to / search the newsfeed at different organizational levels. #hashtags for all! For more…
  3. Communities: the other new social feature is the ability to create communities. A community (as distinct from a project team) is for getting a group of people to collaborate more freely around a topic and share expertise. Built around Discussions, it expands them with visibility into members and their contributions, and allows easy formation of expert communities. For more…
  4. Cross-site publishing allows you, for the first time, to share content across sites, site collections, applications and even farms. We once built a custom solution for an insurance company that wanted to post new forms to its public site, Agent portal and Intranet in a single action. Now it is built in. For more…
  5. Search has received a major upgrade. The FAST acquisition has finally been integrated into the main SharePoint search, resulting in a long list of great improvements: search for conversations, videos and reports; visual results and in-page previews; context-sensitive sorting; advanced filters; and, of course, better performance, APIs, etc. For more…
  6. SharePoint Apps!: one of the major changes in SP13 is the concept of apps. Apps are just what they sound like: web applications that can be packaged so users can add them to pages or use them from within SharePoint. Not that different from the earlier concept of solution packs (like the famous Fab 40 that were discontinued in SP10) packaging your web app in a web part. The new model does have a few advantages: it gives users more control over which apps to use, and while IT can still approve apps, it no longer needs to install them for users. It can also make internal applications easier to find and reduce redundancy. For more on apps, see the Microsoft SharePoint apps blog.
  7. Simple project / task management: for complex project management you still have Project Server, but it is overkill for most simple projects. The new team site template includes the ability to manage tasks, deadlines and a simple work breakdown structure for a project team. It generates personal and group views of tasks and timelines, perfect for keeping everyone on time. For more…
  8. Enterprise eDiscovery: one of the essential requirements for ECM in this age is a good eDiscovery mechanism, so that requests for content related to litigation or information inquiries can be executed efficiently and across all information repositories. SP13 adds a new eDiscovery center that makes this a lot easier. For more…
  9. New Usage Analytics and useful views: Microsoft is replacing the old SharePoint analytics with two new tools: search analytics and usage analytics. Usage analytics provides a more detailed view of how SharePoint is used and, even better, allows up to 12 custom events to be added and tracked without custom tagging. You can also use the data collected by these tools for useful views such as Most Popular, Popular Searches, etc. For more…
  10. Better support for digital assets: there is no longer a need to create a special media library for digital assets. Once enabled, audio, video and other rich media can be added to any library. For more…