Data Transformation Mapping – Is it a “Solved Problem”?

Mapping.  We all know how important it is to every kind of business integration – it defines the data result of an integration process.  Despite the apparent simplicity of drag-and-drop mapping tools, we also know what a pain mapping can be to get right, and how the time required to specify and test data transformation maps drives up the cost of business integration.

Based on years of implementation engagements, EXTOL has determined that mapping can account for up to two-thirds of the time in a medium-to-large project.  The percentage is highest in large conversion projects conducted by companies with many customer relationships.  And the percentage is lower in smaller projects (like onboarding a single partner) and in companies that own value chains and have the power to set the terms of integration with partners.  But even in smaller projects, mapping is usually the single most time-consuming – and costly – design-time activity.

Despite the dominant influence that mapping exerts on life cycle costs, the state-of-the-art for mapping technology has stalled at the classic “line mesh” mapping model, introduced during the early 1990s.  At that time, the simplicity with which data associations could be specified by simply drawing lines between source and target documents was hailed as a marked improvement over coding (which it was), but the line mesh model did not deliver dramatic productivity improvements for complex mapping cases, in which hundreds or thousands of associations and transformations must be specified.

While industry experts concede that the cost of integration remains an important issue, the most time-consuming and costly integration activity – mapping – is treated by most technology vendors as a solved problem.  With the exception of minor advances in user interface design, the problem of mapping cost and productivity has been largely ignored.

The irony here is that integration technology has played such an important role in boosting business productivity.   Automating B2B, application, and data integration processes has enabled businesses of every kind to reduce processing costs, increase business throughput, and reduce errors.  Yet the technologies that have enabled those improvements suffer from productivity deficits of their own.

In my next post in this series, I’ll examine ways in which automation can be applied to data transformation mapping, and how different automation approaches produce very different outcomes.

Best Practices for Mapping: Naming and Usage

Successful EDI implementations must begin with the development and use of proper naming conventions and best practices. Too often, a company makes hasty decisions in an effort to bring an EDI customer online quickly, only to find that good practices were never established and the resulting bad habits carry forward into every subsequent EDI implementation.

When mapping, it is important to develop and standardize in-house naming conventions that can be used for all maps. This will help you identify maps and keep them organized.  One recommendation is to use some variation of, or reference to, the trading partner the map will be used for. Other suggestions include using the document type, version, or division reference in conjunction with the trading partner name. Using some of these elements in your map names will make it easier to identify the purpose of each map.
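As an illustration of the kind of convention described above, here is a minimal Python sketch. The underscore-separated format, the `map_name` helper, and the example partner names are assumptions chosen for illustration, not a prescribed standard:

```python
def map_name(partner: str, doc_type: str, version: str, division: str = "") -> str:
    """Build a map name from partner, optional division, document type, and version.

    Example convention (an assumption): PARTNER[_DIVISION]_DOCTYPE_VERSION.
    """
    parts = [partner.upper()]
    if division:
        parts.append(division.upper())
    parts += [doc_type, version]
    return "_".join(parts)
```

With a convention like this, a map for a hypothetical partner "Acme" handling an 850 at version 4010 would be named `ACME_850_4010`, and the division variant `ACME_EAST_850_4010` – the purpose of each map is visible at a glance.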

To SDQ or Not to SDQ

Does the handling of SDQ segments within your EDI documents, applications, and business practices puzzle you?  Well, you’re not alone. SDQ segments have been around for a while now, but they still cause havoc for IT staff!

SDQ segments occur in several message types, including the 852 and 860, but the most common is the 850 Purchase Order (PO).  The traditional PO included one ship-to location per order, and its layout was very similar to the paper document: a header (PO number and date, purchaser name, address, etc.), detail lines (item, quantity, and price), and a summary.  The disadvantages of this simple layout were redundant information and the large batches of documents it produced.

To capture this data, your application system most likely included two files – a header/summary file and a detail file – related by a unique key.  The format was easy to read, with only one ship-to per order, and the traditional PO structure kept application file layouts simple as well.

The original reason for the SDQ segment was to decrease the number and size of transactions by allowing multiple ship-to locations per order.  On a purchase order, the SDQ segment lists quantities by location: the PO102 element carries the total quantity for the item, the SDQ03 holds a location number, and the SDQ04 lists the quantity for that location.  SDQ05 and SDQ06 follow the same pattern, which repeats up to SDQ22.  Hence the “pairing”: each segment contains multiple location/quantity “pairs” for the specific item noted in the PO1 segment, and there can be up to 500 SDQ segments containing up to ten “pairs” per segment.

Changing the traditional PO to a PO with an SDQ allowed partners to include more than one order per transaction, and it accomplished what it set out to do: decreasing the size of the orders and removing the repeating data.  However, new advantages create new disadvantages.  One is the assumption that location address information can be referenced by the user’s application – the SDQ multiple-location format doesn’t include address information for these locations.  Another is that the SDQ standard currently does not state how many “pairs” are required per segment, so you may receive many SDQ segments, each with a varying number of “pairs.”  These disadvantages may not sound like major issues, but just ask the IT staff of companies who needed a solution (sometimes very quickly) for capturing this newly formatted data about the havoc it can cause.
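The location/quantity pairing described above can be sketched in code. This is an illustrative fragment, not a full EDI parser; it assumes your translator has already split one SDQ segment into a list of element values (index 0 = SDQ01, the unit of measure):

```python
def sdq_pairs(elements):
    """Extract (location, quantity) pairs from one SDQ segment's elements.

    Per the layout above, pairs occupy SDQ03/SDQ04 through SDQ21/SDQ22
    (list indices 2/3 through 20/21), up to ten pairs per segment.
    Blank pairings are skipped rather than passed through.
    """
    pairs = []
    for i in range(2, min(len(elements), 22), 2):
        location = elements[i]
        quantity = elements[i + 1] if i + 1 < len(elements) else ""
        if location:  # ignore blank pairings at the end of the segment
            pairs.append((location, int(quantity)))
    return pairs
```

For example, a segment carrying `SDQ*EA*92*0001*10*0002*5` would yield the two pairs `("0001", 10)` and `("0002", 5)`, with any unused trailing elements ignored.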

Herein lies the question: what would you do if you received a PO destined for multiple locations while you are currently configured with two files, a header/summary and a detail?  Do you create additional fields for every pair in the existing detail file?  How many fields should be created?  How are blank pairings handled within the application?  How will your application respond to multiple detail lines in the same order?

Your translator and application will ultimately determine the route to take in handling the multiple orders within an SDQ order.  However, it may be advantageous to create a new SDQ file rather than an extension of the detail file.  This allows you to write records from the PO1 for detail information and SDQ data to the SDQ file.  Verify whether your current translator can write a new record for EACH pairing, eliminating the possibility of writing blank pairs to your application.  Keep in mind that you may still need some type of post-process to satisfy your application’s requirements, from manipulating quantities to breaking each SDQ pair out into individual purchase orders.  Happy Mapping!
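As a rough sketch of the record-per-pairing approach described above, assuming the SDQ pairs have already been extracted by the translator – the field names and the flat-dictionary record shape are hypothetical, stand-ins for whatever your application files actually use:

```python
def explode_sdq(po_number, item, pairs):
    """Emit one flat SDQ-file record per (location, quantity) pair.

    pairs: iterable of (location, quantity) tuples taken from the SDQ
    segments for one PO1 line. Because blank pairings were never
    extracted, no blank records reach the application.
    """
    records = []
    for location, qty in pairs:
        records.append({
            "po_number": po_number,  # key back to the header/summary file
            "item": item,            # key back to the PO1 detail record
            "ship_to": location,
            "quantity": qty,
        })
    return records
```

A post-process could then consume these records directly, for example summing quantities per item or splitting each record into its own purchase order, depending on what the application requires.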

Why System Maintenance and Disaster Planning Matter

When it comes to solving data integration problems, we often overlook the system-level impacts of proliferating integration objects, increasing data volume, and increased logging activity. The tendency is to focus on delivering applications and to ignore broader consequences that could come back to bite us.

As an EXTOL developer who interacts with our front-line support representatives several times a day, I have seen firsthand the perils of poor system maintenance and of not having a disaster recovery plan. There are three important points to consider to keep the ship sailing smoothly and to mitigate extended periods of downtime, should a disaster occur.


Functional Acknowledgment Reconciliation

If your company uses Electronic Data Interchange (EDI), reconciling Functional Acknowledgments – “FAs” or “997s” – is an important and often overlooked task. Just because the FA is a small document that is automatically generated by your translator doesn’t mean it should be ignored.

The FA provides information on problems found with the structure or content of EDI transactions, such as a missing mandatory segment or element, or an invalid code. It isn’t just a data receipt. So, FA reconciliation should not just ask the question, “Did I get an FA?” It should also ask how the document was validated. There are four validation statuses: A – Accepted; E – Accepted with Errors; P – Partially Accepted (at least one transaction was rejected); and R – Rejected.

Your reconciliation process should check for Rejected statuses at a minimum, but also for Accepted-with-Errors and Partially Accepted statuses. Those statuses may reflect a change in standards or in your partner’s requirements that you haven’t taken into account or may not be aware of. They can also mean that you have provided incomplete or incorrect data. If E, R, and/or P statuses arrive on a regular basis, it may be time to check your mapping and application data for those transactions.
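A minimal sketch of such a status check, assuming your translator has already extracted the acknowledgment status code from the 997 – the `needs_review` flag and its escalation policy are an example, not part of the standard:

```python
# Validation statuses as listed above.
FA_STATUSES = {
    "A": "Accepted",
    "E": "Accepted with Errors",
    "P": "Partially Accepted",
    "R": "Rejected",
}

def needs_review(status: str) -> bool:
    """Flag any FA that is not a clean acceptance for human follow-up."""
    if status not in FA_STATUSES:
        raise ValueError(f"Unknown FA status: {status!r}")
    return status in ("E", "P", "R")
```

A reconciliation job could run a check like this over every inbound 997 and raise an alert (or open a ticket) for anything flagged, rather than silently filing the acknowledgment away.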

Ignoring the Error and Rejected statuses may be costly to your company, as those EDI transactions may not be fully processed by your partner.

So, remember the next time you consider the humble FA: reconciliation means more than, “Did I get an FA?”