
Out with the Old, In with the New

Why does this take place?
While new projects often get the most visibility, maintaining what is already in place is just as essential.  Delays in responding to customer changes or adapting to new software versions can seriously impact your business.  ERP cut-overs, changing customer/vendor document formats, and industry-mandated document changes (e.g. HIPAA) all force changes to otherwise stable integrations.  These changes can cause business disruption, or even penalties, if they are not dealt with effectively and efficiently.

How has it been dealt with in the past?
The brute-force way to deal with changing document requirements is to clone your maps and make the (hopefully) minor adjustments.  This approach works, but it does not scale.  Consider a company with 10 trading partners.  Occasionally, a partner will upgrade to a new EDI version; that rate of change can be absorbed into the work schedule with relatively little impact.  Now consider a company with 1,000 or 5,000 trading partners facing an industry-wide mandate (such as HIPAA).  Very quickly, that company could be faced with updating thousands of maps, an effort measured in multiples of person-months.

It’s All About the Model

For quite some time, software developers have known that working with models is a much more efficient way to get a job done than working at a lower level, in the minutiae. One reason is that the people who develop meta-models have figured out how to hide details that can be derived, and to classify similar problems so that they can be expressed in an easier-to-understand way.

Consider, as an example, the problem of working with XML elements.  If I were going to write a program to read an XML document, I might make some assumptions about the cardinality of a particular XML element and, if appropriate, place a loop in my program to account for multiple occurrences of the element. That is not the only way to do it, however. Alternatively, I might determine the cardinality of the element by inspecting its constraints in an XSD.  In this alternative approach, I am moving toward using a model (the XSD) to figure out what to do, instead of hand-coding the logic.  This is a somewhat oversimplified example, but I think it illustrates the point: working with models eases the maintenance burden of dealing with particular instances.
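To make the example concrete, here is a minimal sketch of the model-driven alternative: instead of hard-coding the loop, the program asks the XSD for the element's cardinality and lets that answer drive the logic.  The schema and element names are invented for illustration.

```python
# Hypothetical sketch: read an element's cardinality from the XSD model
# instead of hard-coding a loop for it.  Schema content is illustrative.
import xml.etree.ElementTree as ET

XSD = """<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="Order">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="LineItem" type="xs:string"
                    minOccurs="1" maxOccurs="unbounded"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>"""

XS = "{http://www.w3.org/2001/XMLSchema}"

def cardinality(xsd_text, element_name):
    """Return (minOccurs, maxOccurs) for a named element; None = unbounded."""
    root = ET.fromstring(xsd_text)
    for el in root.iter(f"{XS}element"):
        if el.get("name") == element_name:
            lo = int(el.get("minOccurs", "1"))
            hi = el.get("maxOccurs", "1")
            return lo, (None if hi == "unbounded" else int(hi))
    raise KeyError(element_name)

lo, hi = cardinality(XSD, "LineItem")
# hi is None here, so a reader of this model knows to loop over every
# occurrence of LineItem rather than assume a single one.
```

The program's looping behavior now follows from the model, so a change to the XSD changes the behavior without touching the code.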

This idea of modeling as a way of solving problems is something we, at EXTOL, have been convinced of for a long time.  The benefits are far-reaching, both for us (as software developers) and for our customers.  The trick is in developing a meta-model (or language) that allows users to express what they need in order to get the job done.  Representing the model to the end user in a useful way is also a challenge; I'll talk more about that problem in a future post.
In our world then, the model is the central thing. Our users build them (via visual representations…GUIs), we generate and compile from them, we inspect them for patterns, we create new instances of them and we even decorate or extend existing ones.  It all starts with the model, and nothing happens “downstream” that isn’t expressed (in some form) in the model.  Our models even refer to other models.

Our users build models to express the structure and attributes of metadata (think “a model that describes an XSD, Database, EDI, as well as a Spreadsheet or a Flat file”).  We call these metadata models “Schemas” (not to be confused with “database schemas” or W3C Schemas).  Our users also build models that express what to do when transforming one instance of a metadata model to another.  We call these transformation models “RuleSets.”  Since one of our goals has been to enable transformation without programming, we have spent a lot of time thinking about how to model the clever things that people want to do in their transformations.  The languages of these models continue to evolve over time.
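As a rough illustration (not EXTOL's actual internals), a format-neutral metadata model can describe XML structures, database tables, and flat-file records with the same small vocabulary of nodes: names, types, and cardinalities.  All names below are invented.

```python
# Minimal sketch of a format-neutral metadata model ("Schema"): one node
# type describes an XML element, a database column, or a flat-file field.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class MetaNode:
    name: str
    datatype: str = "string"
    min_occurs: int = 1
    max_occurs: Optional[int] = 1      # None means unbounded
    children: List["MetaNode"] = field(default_factory=list)

# The same structure can describe very different physical formats:
po_xml = MetaNode("PurchaseOrder", children=[
    MetaNode("OrderNumber"),
    MetaNode("LineItem", max_occurs=None, children=[
        MetaNode("SKU"), MetaNode("Qty", datatype="integer")]),
])
po_table = MetaNode("ORDER_LINES", children=[
    MetaNode("ORDER_NO"), MetaNode("SKU"),
    MetaNode("QTY", datatype="integer")])
```

Because both sides are instances of one model, downstream tooling (editors, generators, matchers) can treat an XML document and a database table uniformly.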

In terms of our transformation model, a rule can be thought of as something that describes a relationship between the source and target.  But a rule might also exist to describe the sequence of execution (where that matters), to iterate over collections of data instances, to encapsulate other rules, or to raise asynchronous events.  Other common examples of rules are simple data-manipulation routines such as move, substring, and concatenate, while expressions such as “For Each Database Row, Create a New XML Element” are just as common.
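A much-simplified sketch of the idea, with invented rule names: because rules are data rather than code, a generator can emit them, a user can inspect them, and an interpreter can execute them.

```python
# Hypothetical illustration of rules-as-data: move, substring, concat,
# and a for-each that iterates a collection of source instances.

def apply_rule(rule, source):
    """Interpret one rule against a source dict, returning target values."""
    kind = rule["kind"]
    if kind == "move":
        return {rule["target"]: source[rule["from"]]}
    if kind == "substring":
        return {rule["target"]: source[rule["from"]][rule["start"]:rule["end"]]}
    if kind == "concat":
        return {rule["target"]: "".join(source[f] for f in rule["from"])}
    if kind == "for_each":              # iterate a collection of instances
        return {rule["target"]: [apply_rule(rule["body"], row)
                                 for row in source[rule["from"]]]}
    raise ValueError(kind)

# "For Each Database Row, Create a New Target Item":
ruleset = {"kind": "for_each", "from": "rows", "target": "items",
           "body": {"kind": "move", "from": "SKU", "target": "sku"}}
out = apply_rule(ruleset, {"rows": [{"SKU": "A1"}, {"SKU": "B2"}]})
# out == {"items": [{"sku": "A1"}, {"sku": "B2"}]}
```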

Now, what does all of this have to do with design-time automation and our Smart Mapping feature as an expression of it?  In short, because we have committed to and use modeling extensively, we are able to make our features do far more than one might expect at first glance.  We often get the reaction from prospects that, “I didn’t even think that was possible.”  That reaction still surprises me when I hear it.

The Smart Mapping feature in EBI produces (generates) rules in the RuleSet model which, after the user approves them, are indistinguishable from rules created manually.  This is significant because, often, what such tools generate automatically either can’t be maintained or is so cryptically represented that maintaining it is impractical.

Another significant challenge in this area is the ability to control the scope of what an assistive mapping tool is looking at.  At one extreme, we find tooling that will generate instructions at the gross level (e.g. entire document to document) and then leave it up to the user to “clean up the mess.” This has some utility (and the EBI Smart Mapping tool can certainly do this), but I think the real value is in allowing the user to set the scope arbitrarily.  So for example, you can limit the Smart Mapping routines to looking at a single database table and how it might map to an EDI Segment.  This is important because most of the work that happens in the mapping area is maintenance, not starting from scratch.
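The scoping idea can be sketched as follows, using simple name similarity as a stand-in for real matching logic; the column and element names are hypothetical.  Candidate pairs are generated only within the source and target subtrees the user selected.

```python
# Sketch of scope-limited matching: the tool assists with a single
# table-to-segment mapping instead of the whole document.
from difflib import SequenceMatcher

def match_scope(source_fields, target_fields, threshold=0.6):
    """Pair each in-scope source field with its best-scoring target, if any."""
    pairs = []
    for s in source_fields:
        best = max(target_fields,
                   key=lambda t: SequenceMatcher(None, s.lower(),
                                                 t.lower()).ratio())
        score = SequenceMatcher(None, s.lower(), best.lower()).ratio()
        if score >= threshold:
            pairs.append((s, best, round(score, 2)))
    return pairs

# Scope: one database table versus one EDI segment, not whole documents.
table_cols = ["ORDER_NO", "SHIP_DATE", "QTY_ORDERED"]
seg_elems  = ["PO_Number", "Ship_Date", "Quantity"]
pairs = match_scope(table_cols, seg_elems)
```

Restricting the candidate space this way is what makes the tool useful for maintenance work, where only one table or segment has changed.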

Rulesets: Much More Than Data Movers

As data exchange grows at an explosive rate, moving data between formats and systems occurs in many ways.  Common data transformations involve format conversions between EDI, XML, flat files, databases, and, more recently, spreadsheets.  In an application-to-application (A2A) environment, data is typically converted between interface files and back-end databases such as ERP systems.  In the EXTOL Business Integrator, we call the objects that facilitate these movements Rulesets.

What is “Smart Mapping”?

In my two previous posts on the subject of automated mapping, I examined the importance of automation as a way to reduce design-time integration costs and delivery time, and the challenges of applying automation to the mapping process, in particular.

If you’ve used or seen a demo of our original EDI Integrator product for System i, you know that EXTOL has a long history of innovation in this area.  In 1994, we introduced an automated source-target matching feature called “Automapping”, and three years later, the “Advanced Automapping” feature debuted, giving customers the ability to generate large portions of new maps, based on previous mapping examples.

Two years ago, we renewed our research into automated mapping methods, with the goal of delivering a new automated mapping implementation for EXTOL Business Integrator, the modern, cross-platform integration broker we introduced in 2003.  But instead of simply replicating the Automapping and Advanced Automapping features, we set out to push the boundaries of state-of-the-art mapping automation, by targeting a much more stringent set of requirements:

  • Automated mapping must be applicable to any mapping situation, including both familiar and unfamiliar document types and source-target combinations
  • Both aspects of mapping – source-target element matching and generation of data transformations – should be supported by the automated mapping mechanism
  • The automated mapping feature should be able to “learn” from and adapt past decisions to future mapping situations, incorporating best practices that evolve, over time
  • Automated mapping should integrate unobtrusively into the UI, allowing a blending of human and automated mapping actions and results
  • Users must retain control over the application of results generated by the automated mapping feature, with the ability to selectively apply generated matches and transformations
  • The behavior of the automated mapping feature must be configurable to suit different business circumstances and user preferences

These requirements went far beyond the capabilities of available automated mapping implementations, and called for the invention of a new mapping architecture and new automated mapping methods.

The resulting implementation, “Smart Mapping”, was introduced in EXTOL Business Integrator v2.5, in October of 2010.  Smart Mapping is an automated mapping implementation that integrates with the EXTOL Integration Studio Ruleset Editor, the drag-and-drop, rule-based mapping tool introduced originally in 2003.  It consists of multiple automated matching and rule generation methods that are combined through a user-weighted fuzzy logic layer, and can generate mapping results at the element, structure, and document root levels.
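As a hypothetical sketch of that weighted-combination idea (the method names and weights here are invented, not EXTOL's actual algorithm), several independent matching methods can each score a candidate source-target pair, with user-supplied weights deciding how much each method contributes:

```python
# Invented illustration: combine multiple matching methods through a
# user-weighted scoring layer.
from difflib import SequenceMatcher

def name_similarity(src, tgt):
    """Score by fuzzy similarity of element names."""
    return SequenceMatcher(None, src["name"].lower(),
                           tgt["name"].lower()).ratio()

def type_compatibility(src, tgt):
    """Score by whether the declared datatypes agree."""
    return 1.0 if src["type"] == tgt["type"] else 0.3

METHODS = {name_similarity: 0.7, type_compatibility: 0.3}  # user weights

def combined_score(src, tgt, methods=METHODS):
    """Weighted average of all method scores, normalized to 0..1."""
    total = sum(methods.values())
    return sum(w * m(src, tgt) for m, w in methods.items()) / total

src = {"name": "ship_date", "type": "date"}
tgt = {"name": "Ship_Date", "type": "date"}
score = combined_score(src, tgt)
# names match exactly (case-insensitively) and types agree, so score == 1.0
```

Letting the user re-weight the methods is what makes the behavior configurable for different business circumstances, as the requirements above demand.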

What makes Smart Mapping stand out from past attempts to automate mapping is the ease with which powerful mapping methods can be brought to bear on virtually any B2B, data, or application mapping situation, without imposing new skill requirements or intruding on the user interaction model.

We believe that Smart Mapping is not only a boon for companies that need to deliver sophisticated integrations faster and more cost-effectively, but is also interesting and important technology, in its own right.  Over the coming weeks, we will be posting additional insights into the Smart Mapping approach and the technology that underlies it.  Stay tuned.

Data Transformation Mapping – Can it be Automated?

In my previous post on this subject, I wrote about the neglected problem of data transformation mapping productivity, and its impact on integration project costs, particularly for B2B integration in companies with many customer relationships (and therefore many document pairs to map).

Although attempts have been made to automate aspects of mapping, it remains largely a manual activity.  Gartner and other industry analysts have published research on automated mapping methods, particularly Dictionary-based mapping, but real-world implementations haven’t put a serious dent in the mapping productivity problem.  Why is that?

Mapping is a large and complex problem space.   It accounts for the single largest part of integration time and effort, encompassing data validation, translation, value derivation, enrichment, aggregation, and routing.  Defining these outcomes involves two kinds of specification, not one:  source-target matching and transformation.  Most attempts to automate mapping have focused intensively on the first part (matching), but very little on the second (data transformations).  But the biggest obstacle to mapping automation is that specifying the right mapping action may require an understanding of business and data contexts and requirements.  And that requires human decision-making.
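A tiny sketch of why matching alone falls short, with invented field names: a source-target dictionary can propose the pairing, but the transformation between the paired fields, here a date-format conversion, still has to be specified separately.

```python
# Illustrative only: the dictionary handles matching, not transformation.
from datetime import datetime

dictionary = {"SHIP_DT": "ShipDate"}   # matching: pairs fields by name

def transform_ship_date(value):
    """Transformation: convert an EDI-style YYYYMMDD date to ISO format."""
    return datetime.strptime(value, "%Y%m%d").date().isoformat()

source = {"SHIP_DT": "20230415"}
target = {dictionary["SHIP_DT"]: transform_ship_date(source["SHIP_DT"])}
# target == {"ShipDate": "2023-04-15"}
```

Knowing that the target side expects ISO dates is exactly the kind of business and data context that the dictionary lacks and a human must supply.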

So to the degree that mapping automation is possible, it must occur in the context of a broader, human-guided mapping process.  Simply defining a source-target dictionary and “lighting the fuse” won’t produce a map that can be used in a production environment.

Integrating automated mapping methods with the human-guided mapping process imposes critical implementation requirements for integration technology providers, including:

  • Unobtrusive integration of automation methods and human mapping decisions in the UI
  • Support for both source-target matching and data transformation aspects of mapping
  • Automation that works with both familiar and unfamiliar document types and combinations
  • Results that can be verified quickly and easily by humans
  • Configurability to suit different circumstances and preferences
  • Ability to “learn” from and adapt past decisions to future mapping situations

Each of these requirements could be the subject of its own blog post.  The bottom line, however, is that past attempts to automate mapping have ignored, or fallen short in, all of these areas.

Automating data transformation mapping isn’t easy, but it is possible.  The next post in this series will examine the technology solution strategy taken by EXTOL, with the Smart Mapping feature introduced in EXTOL Business Integrator v2.5.