
Posts Tagged ‘data’

End-to-End File Governance

April 9th, 2014

When thinking about data movement, security, and traceability, and about the industries that need these capabilities, I am immediately drawn to the financial and healthcare sectors. However, that is changing.

Having the ability to govern your data from the time it crosses your “corporate border” through its lifecycle and ultimate delivery to an external consumer is becoming a critical business function. This capability is no longer the domain of expensive infrastructure; it is now possible to implement architectures that ensure guaranteed, secure delivery of data.

But what about monitoring and tracking the data while it is within your borders? Important questions that need to be answered include “Has the data been accessed, and by whom?”, “Has the data been modified while at rest?”, and, most importantly, “Where has the data been since it arrived?” This is mission-critical because we consume data in many ways while it is under our control.

End-to-End File Governance addresses these very questions in a platform-independent way by monitoring data both in motion and at rest. Think of it as a GPS tracker for your data: it records everywhere the file has been and everyone who has interacted with it.
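The excerpt stops short of implementation detail, but the “modified while at rest” question essentially reduces to fingerprinting a file when it crosses your border and re-checking that fingerprint later. Here is a minimal sketch in Python; the audit-log location and event fields are my own illustrative assumptions, not part of any particular product:

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("file_audit.jsonl")  # hypothetical audit-trail location

def fingerprint(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def record_event(path: Path, event: str, actor: str) -> None:
    """Append one audit event: who touched which file, when, and its digest."""
    entry = {
        "file": str(path),
        "event": event,          # e.g. "received", "read", "delivered"
        "actor": actor,
        "sha256": fingerprint(path),
        "at": datetime.now(timezone.utc).isoformat(),
    }
    with AUDIT_LOG.open("a") as log:
        log.write(json.dumps(entry) + "\n")

def modified_at_rest(path: Path, baseline_digest: str) -> bool:
    """Answer 'has the data been modified while at rest?' by re-hashing."""
    return fingerprint(path) != baseline_digest
```

Recording an event at each hand-off and replaying the log gives you the “GPS trail” for a file; comparing digests between events flags any modification at rest.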

Is Data Mapping a Solved Problem?

September 20th, 2012

Many IT professionals believe that data transformation and mapping are “solved problems”.  After all, mapping tools have been around for over 20 years, and thousands of IT organizations use them in integration projects every day.  “If it ain’t broke, don’t fix it”, right?

What belies that attitude are the missed integration project deadlines, runtime exceptions, customer chargebacks, vendor scorecard deductions, and other business problems that can be traced to data transformation and mapping practices. Mapping is also the single most costly integration activity, accounting for up to 75% of the cost of some integration projects. Yet few project teams focus on ways to improve mapping efficiency and accuracy. Read more…

Data transformation versus data translation

June 12th, 2012

When talking about integration, whether it is application-to-application (A2A), business-to-business (B2B), or data and cloud integration, the goal is to exchange data, documents, information, and/or messages between a source and a target. To accomplish this goal, we identify business patterns related to the sources and targets of these “integrations” and then apply the identified patterns to similar scenarios that meet specified criteria. Most, if not all, of these patterns will require data mappings, also known as “maps” or “rulesets”, at some point during the integration process. These maps are ultimately used to perform one of two operations: to transform or to translate data. I often use the terms transform and translate interchangeably. But is there more to them? Are these terms synonymous, or do their meanings differ?

Let’s take a closer look to see if there is a difference. Data translation can be defined as the process of converting data from one syntax to another while performing value lookups or substitutions along the way. Translation can include data validation as well. One example of data translation is converting EDI purchase order document data into purchase order database files, or even flat files, while performing data validation on the source data. Read more…
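To make that definition concrete, here is a minimal Python sketch of the translation side: one syntax (a delimited, EDI-like purchase-order segment) converted to another (a flat record), with a value lookup and validation along the way. The segment layout, field names, and lookup table are invented for illustration rather than taken from any real ruleset:

```python
UNIT_CODES = {"EA": "each", "CS": "case", "BX": "box"}  # lookup/substitution table

def translate_po_line(segment: str) -> dict:
    """Translate a delimited PO1-style line item into a flat record."""
    tag, line_no, qty, unit, price, part = segment.split("*")
    if tag != "PO1":
        raise ValueError(f"expected a PO1 segment, got {tag!r}")
    if unit not in UNIT_CODES:                    # validation on the source data
        raise ValueError(f"unknown unit-of-measure code {unit!r}")
    return {
        "line": int(line_no),
        "quantity": int(qty),
        "unit": UNIT_CODES[unit],                 # value substitution
        "unit_price": float(price),
        "part_number": part,
    }

print(translate_po_line("PO1*1*10*EA*9.95*BUYER_PART"))
# {'line': 1, 'quantity': 10, 'unit': 'each', 'unit_price': 9.95,
#  'part_number': 'BUYER_PART'}
```

The syntax changes, values are looked up and substituted, and bad source data is rejected, which is exactly the trio of behaviors the definition above describes.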

Dealing with Black Box Integration

December 15th, 2011

Some of the most difficult challenges that integration projects face are what I like to refer to as “artifacts of black-box implementations.” These artifacts include incomplete specifications (whether from external sources, such as vendor specifications, or from internal sources, owing to a poor understanding of the interrelationships between systems), opaque processes that lack visibility, and undocumented integrations. Opaque processes and undocumented integrations present the greatest challenge because the necessary information is unavailable: communication was ineffective, the original source (often a single person) is gone, or the integration touches a legacy system the vendor no longer supports. Most processes enabled by commercial applications are opaque. If you can’t determine (because it’s undocumented) (a) the identities and locations of all data that the process adds or updates; (b) the rules under which those changes and additions occur; and (c) the identities and locations of all downstream applications, modules, and services that are invoked, then the process can be considered, to some degree, opaque. Read more…
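The excerpt doesn’t prescribe a remedy, but one pragmatic way to chip away at opacity is to wrap the black-box call and record everything that crosses its boundary, slowly rebuilding the documentation for points (a) through (c). A sketch in Python; the decorator, the log format, and the post_invoice stand-in are all illustrative assumptions:

```python
import functools
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("blackbox")

def traced(fn):
    """Wrap an opaque integration call and record what crosses its boundary."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        started = datetime.now(timezone.utc).isoformat()
        result = fn(*args, **kwargs)
        log.info(json.dumps({
            "call": fn.__name__,
            "at": started,
            "inputs": {"args": repr(args), "kwargs": repr(kwargs)},
            "output": repr(result),   # evidence of what data was added or changed
        }))
        return result
    return wrapper

@traced
def post_invoice(customer_id, amount):  # stand-in for an undocumented vendor call
    return {"status": "posted", "customer": customer_id, "amount": amount}

post_invoice("C-1001", 250.00)
```

Captured over time, those log entries become an empirical specification of a process no one can otherwise describe.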

What’s the Big Deal with Spreadsheets? Part 2

July 1st, 2011

In my last blog entry here, I started a discussion of the technical challenges of using spreadsheets as electronic business documents. I want to continue that discussion, but instead of focusing on how to locate data predictably in the spreadsheet, I want to talk about separating the formatting from the business data.

As you might imagine, consuming spreadsheets and producing them present two different sets of challenges. When consuming them, locating the data predictably is the most significant challenge, next to data content concerns. The formatting (fonts, colors, shading, graphics, etc.) generally does not matter when consuming spreadsheets, whereas when producing them, it can matter almost as much as the data content. Read more…
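As an illustration of that separation, here is a minimal sketch using the openpyxl library: the business data is written first and the formatting is layered on as a distinct pass, while the consuming side reads values only. The header schema and file name are assumptions made for the example:

```python
from openpyxl import Workbook, load_workbook
from openpyxl.styles import Font, PatternFill

HEADER = ["part_number", "quantity", "unit_price"]  # illustrative schema

def produce(rows, path="po.xlsx"):
    """Write the business data first; apply formatting as a separate pass."""
    wb = Workbook()
    ws = wb.active
    ws.append(HEADER)
    for row in rows:
        ws.append(row)
    # Formatting is layered on afterwards, never mixed into the data itself.
    for cell in ws[1]:
        cell.font = Font(bold=True)
        cell.fill = PatternFill(fill_type="solid", start_color="DDDDDD")
    wb.save(path)

def consume(path="po.xlsx"):
    """Read values only; fonts, colors, and shading are simply ignored."""
    ws = load_workbook(path, data_only=True).active
    return [dict(zip(HEADER, row))
            for row in ws.iter_rows(min_row=2, values_only=True)]

produce([["BUYER_PART", 10, 9.95]])
print(consume())
# [{'part_number': 'BUYER_PART', 'quantity': 10, 'unit_price': 9.95}]
```

Keeping the two passes separate means the producing side can restyle the document freely without touching the data, and the consuming side never has to care how the cells look.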