Many IT professionals believe that data transformation and mapping are “solved problems”. After all, mapping tools have been around for over 20 years, and thousands of IT organizations use them in integration projects every day. “If it ain’t broke, don’t fix it”, right?
Belying that attitude are the missed integration project deadlines, runtime exceptions, customer chargebacks, vendor scorecard deductions, and other business problems that can be traced to flawed data transformation mapping practices. Mapping is also the single most costly integration activity, accounting for up to 75% of some integration project costs. Yet few project teams focus attention on ways to improve mapping efficiency and accuracy.
But it’s not just a cost issue that argues for action. It’s also how critically important data transformation has become in businesses and industries of every stripe. Data transformations occur in virtually all B2B, application, data, and cloud integration processes. For example:
- Transforming inbound EDI orders sent by a customer to an ERP-specific import file format
- Transforming invoice data posted by an ERP into XML payloads transmitted to a web-based e‑invoicing service
- Transforming availability data from an inventory database into a spreadsheet format, for email distribution to small customers
- Migrating customer master data from a legacy CRM database to a new CRM system load format
- Generating healthcare provider eligibility responses by transforming inquiry turn-around data with inputs from a subscriber database
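To make the idea concrete, here is a minimal sketch of the kind of field-level mapping these scenarios involve, using the CRM migration case. The legacy column names (`CUST_NO`, `CUST_NM`, `PHONE1`) and target field names are hypothetical, invented purely for illustration:

```python
# Minimal sketch of a structural + content transformation for a CRM
# migration. All field names here are hypothetical examples.

# Structural mapping: legacy CRM column names -> new system's load fields.
FIELD_MAP = {
    "CUST_NO": "customer_id",
    "CUST_NM": "name",
    "PHONE1": "phone",
}

def transform_record(legacy_row: dict) -> dict:
    """Rename fields and normalize values for the target load format."""
    target = {new: legacy_row.get(old, "") for old, new in FIELD_MAP.items()}
    # Content-level conversion: keep only digits in the phone number.
    target["phone"] = "".join(ch for ch in target["phone"] if ch.isdigit())
    return target

legacy = {"CUST_NO": "000871", "CUST_NM": "Acme Corp", "PHONE1": "(570) 555-0100"}
print(transform_record(legacy))
```

Real mapping tools generalize this same pattern across hierarchical formats (EDI, XML, flat files) and add validation, lookups, and enrichment, which is where the complexity and cost accumulate.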
Data transformations make it possible to exchange information between applications, services, and trading partners that were not designed to work together. They accomplish this by bridging the native data formats and content semantics used by sources and targets in an interface, and by performing intermediate conversions in a way that is invisible on both sides. That means a business can automate data interchanges between unlike systems and resources “non-invasively” – without making changes to the source or target.
There’s a natural tendency to ignore both the value of data transformation and the problems that can result from flawed mapping strategies. Part of the reason is that so many of us view mapping as a solved problem, but it’s also because data validation, disaggregation and aggregation, data enrichment, content-based routing, state management, and other data transformation functions are buried beneath other integration layers. We see data transformation problems as part of a larger aggregate, and often miss the point that we can solve or avoid them by changing our approach to mapping.
Improving the efficiency and accuracy of data transformation mapping has been an important EXTOL mission since we became one of the first ISVs to introduce automated mapping support in the mid-90s. Since then, we’ve broken more new ground with Smart Mapping™, a pattern-driven assisted mapping feature in EXTOL Integration Studio, and more recently the Integration Pattern Repository™, a pattern-based solution for sharing mapping best practices that eliminates the overhead and limitations of conventional map reuse.
We’ll be posting more thoughts on advancements in data transformation mapping during the next few weeks, but we also want to hear from you. So post a response or drop us a line at firstname.lastname@example.org if you have a comment on this topic or would like to hear more about a particular mapping problem area or approach. And stay tuned to the Resource Center on the EXTOL website for additional material on this subject.