Category Archives: Integration Architecture

Canonical Models: Not Just Your Ordinary Middleman

Deciding how to integrate application data in a way that limits the impact of application changes can be a serious challenge.  Direct, point-to-point integrations are efficient and can be implemented rapidly, but they ignore interface resiliency.  To complicate matters, the mode of data transport (files or messages, for example) and whether the data crosses business domain boundaries add to the complexity of “normalizing” the information.

These challenges occur in both the Business-to-Business (B2B) and Application-to-Application (A2A) domains.  In the B2B space, several data “standards” have emerged to help coordinate interoperability efforts, such as ANSI X12 and UN/ECE EDIFACT from the EDI world and RosettaNet for XML content. Integrating through such standard formats limits the impact of application-level changes made by either partner.
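The value of a canonical model can be sketched in a few lines of code. In this hypothetical Python example (the applications, formats, and field names are all invented for illustration), each application maps only to and from the canonical form. With N applications, that means 2N small mappings instead of N×N point-to-point ones, and a format change in one application touches a single mapping:

```python
# A minimal sketch of canonical-model integration (all field names are
# hypothetical). Each format is mapped to/from one canonical model
# instead of directly to every other format.

def erp_to_canonical(erp_order):
    """Map a hypothetical ERP order format to the canonical model."""
    return {
        "order_id": erp_order["OrdNo"],
        "buyer": erp_order["CustCode"],
        "total": erp_order["Amt"],
    }

def canonical_to_crm(canonical):
    """Map the canonical model to a hypothetical CRM format."""
    return {
        "reference": canonical["order_id"],
        "account": canonical["buyer"],
        "value": canonical["total"],
    }

# If the ERP renames its fields, only erp_to_canonical changes; the CRM
# side (and every other consumer of the canonical model) is untouched.
erp_order = {"OrdNo": "PO-1001", "CustCode": "ACME", "Amt": 250.0}
crm_record = canonical_to_crm(erp_to_canonical(erp_order))
```

The same insulation is what the B2B standards above provide between trading partners: each partner maps to the standard document, not to the other partner's internal format.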

One Foot on the Ground, One in the Cloud

With more mission-critical applications such as CRMs and ERPs moving to the Cloud, the integration landscape is changing.  But what’s the degree of change?  Is it really changing that much, or have the mediations (services that connect the internal and external processes) simply changed to support different data packaging, communication protocols, and activation methods while continuing to perform the same functionality?  If your integration toolset can’t adapt to these changes, you’ll have to rebuild your integrations.

Let’s consider current real-world use cases: integration using an on-premises-to-Cloud implementation, and integration within the Cloud, such as Cloud-to-on-premises or Cloud-to-Cloud.  Even though the endpoints differ and cross system boundaries, the base mediation involved is simple.
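That separation can be illustrated with a short sketch (the function names and payloads here are hypothetical, not any particular product's API): the core transformation is identical whichever side of the firewall the endpoints sit on, and only the transport adapter around it changes:

```python
import json

def transform(payload):
    """The base mediation: the same logic whether the endpoint
    is on-premises or in the Cloud."""
    return {"normalized_id": payload["id"].upper()}

def send_via_file(data, path):
    """On-premises-style transport: write the result to a file."""
    with open(path, "w") as f:
        json.dump(data, f)

def send_via_https(data, url):
    """Cloud-style transport: deliver the same result over HTTP
    (stubbed out here)."""
    print(f"POST {url}: {data}")

# Same mediation, different packaging/protocol/activation:
result = transform({"id": "abc-123"})
# send_via_file(result, "outbound/order.json")              # to on-premises
# send_via_https(result, "https://example.com/api/orders")  # to the Cloud
```

If the toolset keeps the transformation and the transport separable like this, moving an endpoint into the Cloud means swapping an adapter, not rebuilding the integration.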

Using Spreadsheets to Automate Supply Chain Integration

Many Business-to-Business (B2B) relationships initiate transaction exchanges at the point a buyer sends an electronic Purchase Order to a supplier, often in the form of an EDI 850 document.  That Purchase Order then typically triggers the supplier to send the buyer an electronic Invoice (EDI 810) and an electronic Ship Notice (EDI 856) for the goods ordered.  This “business process” represents the minimum set of transactions required for the model to work.  Yet an abundance of data exists in support of this process that often fails to make it into production.  When the cost of integration appears to exceed the return, this data either isn’t shared between the two organizations or never gets integrated to a usable state.

For example, at the supplier’s front end is Catalog data (EDI 832), which gives a supplier the opportunity to describe the attributes of its products to the buyer, creating a more efficient ordering process.  At the buyer’s back end is Product Activity / Point of Sale data (EDI 852), which gives buyers the opportunity to report sales activity, aiding the forecasting of goods production.  While this data is not essential to the “Purchase Order <-> Invoice” model, it can enhance production processes.  Additional transactions, such as Purchase Order Acknowledgment (EDI 855), Purchase Order Change (EDI 860), and Payment Advice (EDI 820), are a few of the many electronic formats capable of improving and extending the overall business process integration.
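The minimum transaction flow described above can be sketched with plain dictionaries standing in for the EDI documents (the document contents and field names here are illustrative only, not X12 segment layouts):

```python
# A simplified sketch of the core 850 -> 810/856 flow, with dictionaries
# standing in for EDI documents (contents are illustrative only).

def make_purchase_order(po_number, items):
    """Buyer side: an EDI 850-style Purchase Order.
    items is a list of (sku, quantity, unit_price) tuples."""
    return {"type": "850", "po_number": po_number, "items": items}

def make_invoice(po):
    """Supplier side: an EDI 810-style Invoice for the ordered goods."""
    return {"type": "810", "po_number": po["po_number"],
            "amount": sum(qty * price for _, qty, price in po["items"])}

def make_ship_notice(po):
    """Supplier side: an EDI 856-style Advance Ship Notice."""
    return {"type": "856", "po_number": po["po_number"],
            "lines": [sku for sku, _, _ in po["items"]]}

po = make_purchase_order("PO-42", [("SKU-1", 10, 2.5), ("SKU-2", 4, 8.0)])
invoice = make_invoice(po)     # amount: 10*2.5 + 4*8.0 = 57.0
asn = make_ship_notice(po)
```

The enhancement transactions (832, 852, 855, 860, 820) would slot into this same flow as additional document builders feeding off the same order data, which is exactly why their incremental integration cost is often lower than it first appears.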

Smart Mapping: Matching and Rule Generation

OK, so you recently read a few posts about automated mapping, specifically Smart Mapping, and you’re thinking to yourself, “Awesome!  I love it when computers do useful things for me and make my life easier… but what does Smart Mapping actually do?”  Well, the simple answer is that it does two things: matching and rule generation.  Now you’re thinking, “Oh, right…  Well, what does THAT mean?”  Fear not, I will explain.

But before I do, it is important to understand that Smart Mapping is initiated from and runs in the EBI Ruleset Editor.  Those of you who are familiar with the EBI Ruleset Editor already know what rules are and the role they play in mapping.  Those who are not familiar with the EBI concept of rules and the Ruleset Editor may want to refer to Greg Inns’s scintillating blog post “It’s All About the Model,” in which you’ll find explanations of what rules are and how EBI uses them to get data from source to target.

It’s All About the Model

For quite some time, software developers have known that working with models is a much more efficient way to get a job done than working at a lower level, in the minutiae. One reason for this is that people who develop meta-models have figured out how to hide details that can be derived, or to classify similar problems so that they can be expressed in an easier-to-understand way.

Consider, as an example, the problem of working with XML elements.  If I were going to write a program to read an XML document, I might make some assumptions about the cardinality of a particular XML element and, if appropriate, place a loop in my program to account for multiple occurrences of the element. That’s not the only way to do it, however. Alternatively, I might determine the cardinality of the element by inspecting its constraints in an XSD.  In this alternative approach, I’m moving toward using a model (the XSD) to figure out what to do, instead of hand-coding the logic.  This is a somewhat over-simplified example, but I think it illustrates the point that working with models eases the maintenance burden of dealing with particular instances.
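The model-driven alternative can be shown concretely. This sketch (the schema and element names are invented) uses Python's standard `xml.etree.ElementTree` to read an element's `maxOccurs` constraint out of an XSD, so the reading program derives its looping behavior from the model rather than hard-coding an assumption:

```python
# Derive cardinality from the model (the XSD) instead of assuming it.
import xml.etree.ElementTree as ET

XSD = """<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="Order">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="Line" maxOccurs="unbounded"/>
        <xs:element name="Note" minOccurs="0" maxOccurs="1"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>"""

XS = "{http://www.w3.org/2001/XMLSchema}"

def is_repeating(schema_xml, element_name):
    """True if the XSD allows more than one occurrence of the element."""
    root = ET.fromstring(schema_xml)
    for el in root.iter(XS + "element"):
        if el.get("name") == element_name:
            max_occurs = el.get("maxOccurs", "1")  # XSD default is 1
            return max_occurs == "unbounded" or int(max_occurs) > 1
    raise KeyError(element_name)

# The reader can now decide whether to loop:
# is_repeating(XSD, "Line") -> True; is_repeating(XSD, "Note") -> False
```

When the schema changes, say, `Note` becomes repeatable, the program's behavior follows the model automatically, with no code edit.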

This idea of modeling as a way of solving problems is something we, at EXTOL, have been convinced of for a long time.  The benefits are far-reaching, both for us (as software developers) and for our customers.  The trick is in developing a meta-model (or language) that allows users to express what they need in order to get the job done.  Representing the model to the end user in a useful way is also a challenge; I’ll talk more about that problem in a future post.
In our world, then, the model is the central thing. Our users build models (via visual representations, i.e., GUIs), we generate and compile from them, we inspect them for patterns, we create new instances of them, and we even decorate or extend existing ones.  It all starts with the model, and nothing happens “downstream” that isn’t expressed (in some form) in the model.  Our models even refer to other models.

Our users build models to express the structure and attributes of metadata (think “a model that describes an XSD, a database, an EDI document, a spreadsheet, or a flat file”).  We call these metadata models “Schemas” (not to be confused with database schemas or W3C Schemas).  Our users also build models that express what to do when transforming one instance of a metadata model into another.  We call these transformation models “RuleSets.”  Since one of our goals has been to enable transformation without programming, we have spent a lot of time thinking about how to model the clever things that people want to do in their transformations.  The languages of these models continue to evolve over time.

In terms of our transformation model, a rule can be thought of as something that describes a relationship between the source and the target.  A rule might also exist to describe the sequence of execution (where that matters), to iterate over collections of data instances, to encapsulate other rules, or to raise asynchronous events.  Common examples of rules include simple data manipulation routines such as move, substring, and concatenate, while expressions such as “For Each Database Row, Create a New XML Element” are just as common.
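To make the "rules as a model" idea concrete, here is a toy illustration (this is my own sketch, not EXTOL's actual rule representation): each rule is a data object describing a source-to-target relationship, and a small interpreter executes the model rather than hand-written transformation code:

```python
# Toy rule model: rules are data, executed by an interpreter.
# (Illustrative only -- not EBI's actual RuleSet representation.)

def run_rules(rules, source):
    """Apply a list of rule objects to a source record, producing a target."""
    target = {}
    for rule in rules:
        kind = rule["rule"]
        if kind == "move":
            target[rule["to"]] = source[rule["from"]]
        elif kind == "substring":
            start, end = rule["range"]
            target[rule["to"]] = source[rule["from"]][start:end]
        elif kind == "concatenate":
            target[rule["to"]] = "".join(source[f] for f in rule["from"])
    return target

ruleset = [
    {"rule": "move", "from": "city", "to": "City"},
    {"rule": "substring", "from": "phone", "range": (0, 3), "to": "AreaCode"},
    {"rule": "concatenate", "from": ["first", "last"], "to": "FullName"},
]
out = run_rules(ruleset, {"city": "York", "phone": "7175551234",
                          "first": "Ada", "last": "Lovelace"})
# out == {"City": "York", "AreaCode": "717", "FullName": "AdaLovelace"}
```

Because the ruleset is data rather than code, a tool can inspect it, generate it, or present it visually, which is exactly the property the next sections rely on.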

Now, what does all of this have to do with design-time automation, and with our Smart Mapping feature as an expression of it?  In short, because we have committed to and use modeling extensively, we are able to make our features do far more than one might expect at first glance.  We often hear from prospects, “I didn’t even think that was possible.”  That reaction still surprises me when I hear it.

The Smart Mapping feature in EBI generates rules in the RuleSet model that (after the user approves them) are indistinguishable from rules created manually.  This is significant because, oftentimes in this area, what is generated automatically either can’t be maintained or is so cryptically represented that maintaining it is impractical.

Another significant challenge in this area is controlling the scope of what an assistive mapping tool looks at.  At one extreme, we find tooling that generates instructions at the gross level (e.g., entire document to document) and then leaves it to the user to “clean up the mess.” That has some utility (and the EBI Smart Mapping tool can certainly do it), but I think the real value is in allowing the user to set the scope arbitrarily.  For example, you can limit the Smart Mapping routines to a single database table and how it might map to an EDI Segment.  This is important because most of the work that happens in the mapping area is maintenance, not starting from scratch.
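A scoped matching pass can be sketched in a few lines. This example uses simple name similarity from Python's standard `difflib` as a stand-in scoring technique (it is not EBI's matching algorithm, and the column and element names are invented); the point is that the candidates are limited to one table and one segment, exactly the user-chosen scope described above:

```python
# Scoped matching sketch: pair one table's columns with one EDI
# segment's elements by name similarity. (Stand-in technique, not
# EBI's actual Smart Mapping algorithm.)
import difflib

def match_fields(table_columns, segment_elements, threshold=0.6):
    """Propose column -> element matches within the user-chosen scope."""
    def similarity(a, b):
        return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()

    matches = {}
    for col in table_columns:
        best = max(segment_elements, key=lambda el: similarity(col, el))
        if similarity(col, best) >= threshold:
            matches[col] = best  # confident enough to propose
    return matches

# Scope: one database table vs. one EDI segment's elements.
columns = ["po_number", "order_date", "ship_to_city"]
elements = ["PONumber", "OrderDate", "City", "Currency"]
proposed = match_fields(columns, elements)
# Proposes po_number -> PONumber and order_date -> OrderDate;
# ship_to_city scores below the threshold and is left for the user.
```

A real tool would then hand each proposed pair to rule generation, and, as with any Smart Mapping output, the user reviews the proposals before they become rules.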