Requirements Gathering is a complex process whose purpose is to define the list of capabilities we expect a project to deliver once it is completed. Given that very “business-y” definition, let’s explore the process of Requirements Gathering and how it relates to integration projects. In this blog, I will describe the approach I use to gather requirements; in subsequent blogs, we’ll apply that approach to specific use cases.
Simply creating a punch list of features, capabilities and behaviors is not enough to fully define the requirements of an integration project. Even meeting the defined expectations of A, B, C and so on requires a much deeper dive into what is truly going on, both within the organization and externally with other integration partners (customers, vendors and industry consortiums).
The Requirements Gathering process usually starts when a project is created. In formal organizations, this begins with the approval of a project charter that identifies a project Sponsor. In smaller organizations, it can be either a tactical (reactive) or a strategic (forward-thinking) effort directed by a department head or line-of-business manager. Once the project is initiated, its scope is usually defined and documented.
When thinking about data movement, security, traceability and the industries that need these capabilities, I am immediately drawn to the financial and healthcare sectors. However, that is changing.
Having the ability to govern your data from the moment it crosses your “corporate border,” through its lifecycle, to its ultimate delivery to an external consumer is becoming a critical business function. This capability is no longer the domain of expensive infrastructure; it is now possible to implement architectures that ensure guaranteed delivery of data in a secure manner.
But what about monitoring and tracking the data while it is within your borders? Important questions that need to be answered include “Has the data been accessed and by whom?”, “Has the data been modified while at rest?”, and most importantly, “Where has the data been after it arrived?” This is mission-critical because we consume data in many ways while it is under our control.
End-to-End File Governance can address these very questions in a platform-independent way by monitoring data both in motion and at rest. Think of it as a GPS tracker for your data, recording everywhere a file has been and everyone who has interacted with it.
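To make the “GPS tracker” idea concrete, here is a minimal sketch of what such tracking could look like. This is not any particular product’s implementation; the `FileTracker` class and its fields are my own illustrative assumptions. It fingerprints a file with a SHA-256 checksum at ingestion, then records each touchpoint along with whether the content still matches, answering “who accessed it?” and “was it modified at rest?”:

```python
import hashlib
from datetime import datetime, timezone

def sha256_of(path):
    """Compute a SHA-256 checksum to fingerprint a file at rest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

class FileTracker:
    """Minimal 'GPS tracker' for a file: records each touchpoint and
    flags any change to the content since the baseline was taken."""

    def __init__(self, path):
        self.path = path
        self.baseline = sha256_of(path)  # fingerprint taken at ingestion
        self.events = []

    def record(self, actor, action):
        # Append who did what, when, and whether the content still
        # matches the baseline fingerprint.
        current = sha256_of(self.path)
        self.events.append({
            "when": datetime.now(timezone.utc).isoformat(),
            "actor": actor,
            "action": action,
            "modified": current != self.baseline,
        })

    def audit_trail(self):
        return self.events
```

A real governance platform would persist these events in tamper-evident storage and correlate them across systems, but the core questions it answers are the same ones posed above.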
Many IT professionals believe that data transformation and mapping are “solved problems”. After all, mapping tools have been around for over 20 years, and thousands of IT organizations use them in integration projects every day. “If it ain’t broke, don’t fix it”, right?
What belies that attitude are the missed integration project deadlines, runtime exceptions, customer chargebacks, vendor scorecard deductions and other business problems that can be traced to data transformation mapping practices. Mapping is also the single most costly integration activity, accounting for up to 75% of some integration project costs. Yet few project teams focus attention on ways to improve mapping efficiency and accuracy.
When talking about integration, whether it is application-to-application (A2A), business-to-business (B2B) or data and cloud integration, the goal is to exchange data, documents, information and/or messages between a source and a target. To accomplish this goal, we identify business patterns related to the sources and targets of these “integrations” and then apply the identified patterns to similar scenarios that meet specified criteria. Most, if not all, of these patterns will require data mappings, also known as “maps” or “rulesets,” at some point during the integration process. These maps are ultimately used to perform one of two operations: transforming or translating data. I often use the terms transform and translate interchangeably. But is there more to them? Are these terms synonymous, or do their meanings differ?
Let’s look at this a little closer to see if there is a difference. Data translation can be defined as the process of converting data from one syntax to another while performing value lookups or substitutions along the way; translation can include data validation as well. One example of data translation is converting EDI purchase order document data into purchase order database records or flat files while performing data validation on the source data.
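A small sketch can show all three ingredients of translation at once: syntax conversion, a value lookup/substitution, and validation. The segment layout below is a simplified, hypothetical EDI-style purchase-order line, not a faithful X12 definition, and the field positions and lookup table are my own illustrative assumptions:

```python
# Hypothetical translation: a simplified EDI-style purchase-order line
# is converted to a flat record. Field layout is illustrative only.

UOM_LOOKUP = {"EA": "Each", "CS": "Case", "LB": "Pound"}  # value substitution table

def translate_po_line(segment):
    """Convert a segment like 'PO1*1*10*EA*9.95*VN*PART123'
    into a flat record suitable for a database or flat file."""
    fields = segment.split("*")
    # Validation: confirm the segment identifier and field count.
    if fields[0] != "PO1" or len(fields) < 7:
        raise ValueError("not a valid PO1 segment")
    qty = int(fields[2])              # validation: quantity must be numeric
    if qty <= 0:
        raise ValueError("quantity must be positive")
    uom = UOM_LOOKUP.get(fields[3])   # lookup/substitution, not a straight copy
    if uom is None:
        raise ValueError(f"unknown unit of measure: {fields[3]}")
    return {
        "line_number": fields[1],
        "quantity": qty,
        "unit_of_measure": uom,
        "unit_price": float(fields[4]),
        "vendor_part": fields[6],
    }
```

Notice that the output is the same purchase-order line in a different syntax; the business meaning of the data is unchanged, which is the hallmark of translation.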
Some of the most difficult challenges that integration projects face are what I like to refer to as “artifacts of black-box implementations.” These artifacts include incomplete specifications (whether from external sources, such as vendor specifications, or from internal sources, due to a lack of understanding of the interrelationships between systems), opaque processes resulting from a lack of visibility, and undocumented integrations. Opaque processes and undocumented integrations present the greatest challenge, because the necessary information is unavailable: communication was ineffective, the original source (often a single person) is gone, or the integration involves a legacy system with no remaining vendor support. Most processes enabled by commercial applications are opaque. If you can’t determine (because it’s undocumented) (a) the identities and locations of all data that is added or updated by the process; (b) the rules under which those data changes and additions occur; and (c) the identities and locations of all downstream applications, modules and services that are invoked, then the process can be considered, to some degree, opaque.