What’s the Big Deal with Spreadsheets? Part 2

In my last blog entry, I started a discussion of the technical challenges of using spreadsheets as electronic business documents. I want to continue that discussion, but instead of focusing on how data is located predictably in the spreadsheet, I want to talk about separating the formatting from the business data.

As you might imagine, consuming spreadsheets and producing them present two different sets of challenges. When consuming them, locating the data in the spreadsheet predictably is the most significant challenge, second only to concerns about the data content itself. The formatting (fonts, colors, shading, graphics, etc.) generally does not matter when consuming spreadsheets, whereas when producing them, it can matter almost as much as the data content.

What’s the Big Deal with Spreadsheets?

Spreadsheets are a little bit like hammers: you can use them for almost anything. And they're a relatively simple mechanism, so what could be hard about them, right? Humans seem to have a propensity for finding new ways to complicate technology, and I think most of you have probably guessed that this applies to spreadsheets, too. I don't expect that will ever change.

Spreadsheets are a natural choice for people who don't want to (or don't have the time to) invest in more formalized options like EDI or XML. They've gotten a bad rap, perhaps because they're arguably less sophisticated than the alternatives, but also because of the complications of dealing with them in an automated fashion.

Before we decided to add spreadsheet support to our integration platform, several prospects asked us for it, and some even gave us examples of what they needed to handle. These cases were, of course, quite varied, and we thought about them for a while, wondering whether we could come up with something that would be useful.

It’s All About the Model

For quite some time, software developers have known that working with models is a much more efficient way to get a job done than working at a lower level, in the minutiae. One reason for this is that people who develop meta-models have figured out how to hide details that can be derived, or to classify similar problems so that they can be expressed in an easier-to-understand way.

Consider, as an example, the problem of working with XML elements. If I were going to write a program to read an XML document, I might make some assumptions about the cardinality of a particular XML element and, if appropriate, place a loop in my program that accounted for multiple occurrences of the element. However, that's not the only way to do it. Alternatively, I might determine the cardinality of the element by inspecting its constraints in an XSD. What's happening in this alternative approach is that I'm moving toward using a model (the XSD) to figure out what to do, instead of hand-coding the logic. This is a somewhat oversimplified example, but I think it illustrates the point that working with models eases the maintenance burden of dealing with particular instances.
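To make that concrete, here is a minimal sketch of what "asking the model" might look like. The file name order.xsd and the element name LineItem are hypothetical, and this is just one way to read a cardinality constraint:

```python
import xml.etree.ElementTree as ET

XSD_NS = "{http://www.w3.org/2001/XMLSchema}"

def max_occurs(xsd_path, element_name):
    """Look up an element's maxOccurs constraint in the XSD (the model)."""
    tree = ET.parse(xsd_path)
    for el in tree.iter(XSD_NS + "element"):
        if el.get("name") == element_name:
            return el.get("maxOccurs", "1")  # XSD defaults maxOccurs to 1
    return None  # element not declared in this schema

# Let the model decide whether the reader needs a loop,
# instead of hard-coding that assumption.
occurs = max_occurs("order.xsd", "LineItem")
if occurs == "unbounded" or (occurs and occurs.isdigit() and int(occurs) > 1):
    print("LineItem can repeat: process it in a loop")
else:
    print("LineItem occurs at most once (or is not declared)")
```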

This idea of modeling as a way of solving problems is something we, at EXTOL, have been convinced of for a long time. The benefits are far-reaching, both for us (as software developers) and for our customers. The trick is in developing a meta-model (or language) that allows users to express what they need in order to get the job done. Representing the model to the end user in a useful way is also a challenge. I'll talk more about that problem in a future blog entry.
In our world, then, the model is the central thing. Our users build models (via visual representations, i.e. GUIs), we generate and compile from them, we inspect them for patterns, we create new instances of them, and we even decorate or extend existing ones. It all starts with the model, and nothing happens "downstream" that isn't expressed (in some form) in the model. Our models even refer to other models.

Our users build models to express the structure and attributes of metadata (think "a model that describes an XSD, a database, or EDI, as well as a spreadsheet or a flat file"). We call these metadata models "Schemas" (not to be confused with database schemas or W3C XML Schemas). Our users also build models that express what to do when transforming one instance of a metadata model into another. We call these transformation models "RuleSets." Since one of our goals has been to enable transformation without programming, we have spent a lot of time thinking about how to model the clever things that people want to do in their transformations. The languages of these models continue to evolve over time.
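Purely as an illustration (the names and structure here are mine, not EXTOL's actual Schema model), a drastically simplified metadata model might boil down to a single node type that can describe XML, database, EDI, spreadsheet, or flat-file structures uniformly:

```python
from dataclasses import dataclass, field

@dataclass
class SchemaNode:
    """One node in a hypothetical metadata model ("Schema")."""
    name: str
    kind: str                  # e.g. "element", "column", "segment", "cell"
    max_occurs: int = 1        # cardinality lives in the model; -1 = unbounded
    children: list = field(default_factory=list)

# A fragment of database metadata, expressed in the same model
# that could also describe an XSD or an EDI document:
orders_table = SchemaNode("Orders", "table", children=[
    SchemaNode("Row", "row", max_occurs=-1, children=[
        SchemaNode("SKU", "column"),
        SchemaNode("Quantity", "column"),
    ]),
])
```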

In terms of our transformation model, a rule can be thought of as something that describes a relationship between the source and the target. But not only that: a rule might also exist to describe the sequence of execution (where that matters), to iterate over collections of data instances, to encapsulate other rules, or to raise asynchronous events. Other common examples of rules are simple data-manipulation routines such as move, substring, and concatenate, while expressions such as "For Each Database Row, Create a New XML Element" are just as common.
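To sketch the idea (again with hypothetical names, not the actual RuleSet language), "For Each Database Row, Create a New XML Element" can be expressed as a for-each rule that encapsulates simpler data-movement rules, all of it represented as data in a model rather than as hand-written code:

```python
from dataclasses import dataclass, field

@dataclass
class Rule:
    """A rule relates a source to a target and may encapsulate other rules."""
    action: str      # e.g. "for-each", "move", "substring", "concatenate"
    source: str      # a path into the source Schema
    target: str      # a path into the target Schema
    children: list = field(default_factory=list)  # rules nested in a for-each

# "For Each Database Row, Create a New XML Element":
row_to_element = Rule(
    action="for-each",
    source="Orders/Row",
    target="Invoice/LineItem",
    children=[
        Rule("move", "Orders/Row/SKU", "Invoice/LineItem/ItemID"),
        Rule("move", "Orders/Row/Quantity", "Invoice/LineItem/Qty"),
    ],
)
```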

Now, what does all of this have to do with design-time automation, and with our Smart Mapping feature as an expression of it? In short, because we have committed to modeling and use it extensively, we are able to make our features do far more than one might expect at first glance. We often get the reaction from prospects that "I didn't even think that was possible." That reaction still surprises me when I hear it.

The Smart Mapping feature in EBI produces (generates) rules in the RuleSet model which, after the user approves them, are indistinguishable from rules that might have been created manually. This is significant because, in this area, what is generated automatically is often something that either can't be maintained or is so cryptically represented that maintaining it is impractical.

Another significant challenge in this area is controlling the scope of what an assistive mapping tool looks at. At one extreme, we find tooling that generates instructions at the gross level (e.g., entire document to document) and then leaves it up to the user to "clean up the mess." This has some utility (and the EBI Smart Mapping tool can certainly do it), but I think the real value is in allowing the user to set the scope arbitrarily. So, for example, you can limit the Smart Mapping routines to looking at a single database table and how it might map to an EDI segment. This is important because most of the work that happens in the mapping area is maintenance, not starting from scratch.
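As a toy illustration of scoped assistance (my own sketch; EBI's Smart Mapping draws on far richer signals than name similarity), imagine proposing mappings only between the fields of one source scope, such as a table, and one target scope, such as a segment:

```python
from difflib import SequenceMatcher

def propose_mappings(source_fields, target_fields, threshold=0.5):
    """Propose source->target pairs within one scope, by name similarity."""
    proposals = []
    for src in source_fields:
        best, best_score = None, 0.0
        for tgt in target_fields:
            score = SequenceMatcher(None, src.lower(), tgt.lower()).ratio()
            if score > best_score:
                best, best_score = tgt, score
        if best is not None and best_score >= threshold:
            proposals.append((src, best, round(best_score, 2)))
    return proposals

# Scope: one database table's columns vs. one EDI segment's elements.
table_columns = ["PONumber", "ShipDate", "CustomerName"]
segment_elements = ["PurchaseOrderNumber", "Date", "EntityName"]
for src, tgt, score in propose_mappings(table_columns, segment_elements):
    print(f"suggest: move {src} -> {tgt}  (score {score})")
```

The point here isn't the matching heuristic; it's that the user, not the tool, chooses the two scopes being compared.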