When data is being moved from one business application to another using automated processes, there are a host of issues that can crop up and inhibit your organization’s ability to conduct business. The best way to control the flow of data and ensure that both source and target databases do not become corrupt is by applying data management rules.
Data management rules are a set of steps that govern when, where, why and under what conditions the data should flow between the systems and how the data must conform so that it can be properly placed into the target system(s). Without data management rules, data would flow from one system to another like water over a road, continually washing out historically accurate and validated data with new unvalidated data.
The five considerations described in this eBook are those that distinguish companies that apply best practices in data management. An organization that applies these best practices will have a competitive edge over its peers.
The most important consideration when managing data between two or more systems is how to control for duplicates. Duplicates wreak havoc on reporting and make sound business decisions impossible. Without proper data management rules, automated systems can quickly flood a once carefully curated database with a mess of incomprehensible data. The best way to reduce the volume of duplicates when integrating two or more systems is to have a set of merge rules that check new, incoming data against the established database and determine whether that data is genuinely new or an update to an existing record.
The structure of your database determines how many levels of merge rules your organization should consider. For example, merge rules could be applied to two different field sets: companies and individuals. At the individual level, merge rules would look at potentially unique identifiers such as email address, first and last name, or IP address. Merge rules could also be a composite of several different criteria, with fuzzy logic applied to fields where typos and mistakes often occur. At the company level, merge rules funnel multiple individuals from the same organization into a single organization record rather than creating several duplicate organizations.
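As an illustration, an individual-level merge rule of this kind might be sketched as follows. This is a minimal sketch, not a Vertify implementation: the field names, the use of email as the exact-match identifier, and the 0.85 similarity threshold are all assumptions.

```python
from difflib import SequenceMatcher


def is_duplicate(incoming: dict, existing: dict, threshold: float = 0.85) -> bool:
    """Merge-rule check: exact match on email wins outright,
    otherwise fall back to a fuzzy comparison of the full name
    to catch typos and small spelling variations."""
    # Exact match on a unique identifier (case-insensitive email).
    if incoming.get("email") and incoming["email"].lower() == existing.get("email", "").lower():
        return True
    # Fuzzy comparison on first + last name.
    name_in = f"{incoming.get('first', '')} {incoming.get('last', '')}".lower()
    name_ex = f"{existing.get('first', '')} {existing.get('last', '')}".lower()
    return SequenceMatcher(None, name_in, name_ex).ratio() >= threshold
```

In practice a composite rule like this would be tuned per field: a lower threshold on free-typed fields where mistakes are common, and exact matching on system-generated identifiers.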
Data moving from one system to another is the second largest source of potential errors, overwritten data, and duplicate creation. Being able to control when, why, and under what conditions data should flow between systems through the use of filters will enhance a company's ability to limit the chaos of data overload.
An example of why data flow matters is the service level agreement (SLA) between sales and marketing. Marketing leads should flow into the CRM only once sales reps should take action on them, that is, once they have become marketing qualified leads (MQLs). If leads flow to the CRM before being marketing qualified, sales reps typically waste time on protracted sales cycles with leads that are not ready to move forward, which creates distrust between these two critical revenue departments. Conversely, when a sales rep has run their course with a lead that won't convert, they should be able to push the lead back to marketing so the prospect can be enrolled in the proper campaigns and returned to sales when they are ready.
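A filter of this sort reduces to a simple predicate applied to each record before it moves. The `lifecycle_stage` field and `"MQL"` value below are hypothetical and would map to whatever your marketing automation platform actually uses:

```python
def ready_for_crm(lead: dict) -> bool:
    """Filter rule: sync a lead only once marketing has qualified it."""
    return lead.get("lifecycle_stage") == "MQL"


def filter_for_sync(leads: list) -> list:
    """Apply the filter to a batch of leads before they flow to the CRM."""
    return [lead for lead in leads if ready_for_crm(lead)]
```

The reverse flow described above (pushing a dead lead back to marketing) would be a second filter running in the opposite direction, keyed on a different condition.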
Unless an organization has a robust business application package that manages every aspect of the business, business-critical data will need to pass between two or more systems to provide a 360-degree view of the customer journey. Because all business application platforms are engineered differently, they have different data requirements for specific and required fields. Trying to force data into a field when it is not formatted properly can at the very least result in errors, and in more severe situations, stop the data integration completely. Phone numbers, dates, number of decimal places, and field size limits are all examples of how one system may store the same data differently from another business application.
Even when fields do not have hard rules about required formatting, reformatting can help the business. This is particularly the case when someone submits a form with their first and last name all in lowercase, or when a phone number is entered without dots, dashes, or parentheses. In the former case, reformatting lets the business send content with properly formatted names; in the latter, it makes phone numbers easy to read for tele-prospectors.
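Both reformatting cases above can be sketched in a few lines. This is a minimal sketch with assumed conventions: the phone rule only normalizes ten-digit North American numbers and passes anything else through untouched.

```python
import re


def reformat_name(name: str) -> str:
    """Capitalize each word of a name: 'jane doe' -> 'Jane Doe'."""
    return " ".join(part.capitalize() for part in name.split())


def reformat_phone(raw: str) -> str:
    """Normalize a ten-digit number to (XXX) XXX-XXXX format."""
    digits = re.sub(r"\D", "", raw)  # strip dots, dashes, spaces, parens
    if len(digits) == 10:
        return f"({digits[:3]}) {digits[3:6]}-{digits[6:]}"
    return raw  # leave non-standard lengths for a human to review
```

Note that simple word capitalization mishandles names like "O'Brien" or "van der Berg"; a production rule would carry an exception list.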
Data transformations are one step more complex than data reformatting. However, layering multiple data transformation rules as data moves between systems enables organizations to cleanly organize and manage their data in a way that keeps both business application platforms organized and saves their end users valuable time.
Data transformations range from a simple lookup, where one mapped field results in adjustments to several associated fields, to more complex transformations where fields are split or combined according to a specific set of rules, or even conditional transformations that fire only when certain criteria are met. An example of a data transformation might be a customer selecting several product lines in the e-commerce platform while the marketing automation engine only requires a sum for each order.
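The order example above might be expressed as a combine-style transformation like the following. The field names (`line_items`, `qty`, `price`, `order_total`) are hypothetical, not tied to any particular platform:

```python
def transform_order(order: dict) -> dict:
    """Combine several e-commerce line items into the single order
    total that the marketing automation engine expects."""
    total = sum(item["qty"] * item["price"] for item in order["line_items"])
    return {"order_id": order["order_id"], "order_total": round(total, 2)}
```

A split transformation would run the same idea in reverse, for example breaking a combined "City, State" field into two target fields.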
Again, because all business application platforms are built differently, they more often than not have different data requirements for specific and required fields. Many times a simple translation between the two systems works far better and faster than complex reformatting and transformation rules. Most often, data translation is a fixed transformation from one data type to another. It is most powerful when managing data between two systems that have vastly different roles but still need to share a common dataset, such as a marketing automation engine and an accounting system.
Data translations take the form of a translation table, typically with a one-to-many relationship in one direction. The reason for one-to-many is that if the source data is mapped to multiple targets, each target may have different translation requirements. If the data moves bidirectionally, multiple translation tables are involved, one for each direction.
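A one-to-many translation table of this kind reduces to a lookup keyed first by source value, then by target system. The stage names and target systems in this sketch are invented for illustration:

```python
# Hypothetical one-to-many translation table: one source CRM stage
# translates to a different value in each connected target system.
STAGE_TRANSLATIONS = {
    "Closed Won":  {"accounting": "INVOICE_DUE", "marketing": "customer"},
    "Closed Lost": {"accounting": None,          "marketing": "nurture"},
}


def translate(stage: str, target: str):
    """Look up the target system's equivalent of a source value.
    Returns None when no translation exists (i.e., do not sync)."""
    return STAGE_TRANSLATIONS.get(stage, {}).get(target)
```

For bidirectional flows, a second table keyed on the target system's values would translate data moving back the other way.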
Mismanaging the flow of data between two or more business applications can wreak havoc on the integrity of the data within those databases. Without proper care in managing that flow, business system integration can cripple a company's ability to report, and therefore its ability to guide the company to success.
A data management solution must be capable of reducing duplicates, filtering, transforming, and translating data between connected systems. Companies that follow proper data management rules have a competitive edge within their marketplace.
Vertify is a universal data management application that connects with over 80 different SaaS products using endpoints. Once connected to Vertify, organizations gain API developer-level control of their connected systems, without writing code, through a straightforward drag-and-drop interface. Once the endpoints are connected, organizations can leverage Vertify's powerful data management toolset.
Mark Shalinsky, PhD, has spent his life living in data. As an academic he wrestled with managing huge data files trying to understand the correlation between blood and neuronal activity. In the private sector Mark worked in sales operations managing and synchronizing large datasets in an effort to identify sales and marketing sweet spots.