This blog post is part of an ongoing series encompassing the top must-haves when selecting an integrated solution.
The term “ease-of-use” undoubtedly carries a different meaning today than it did even a decade ago. Think of smartphones and tablets, for example. A simple tap of a button almost instantaneously delivers the application and content we desire – no training required.
While integration software may lag a bit behind our smartphones, it is still possible to implement intuitive and easy-to-use solutions that get the job done quickly and with minimal effort. You simply have to be mindful and do your homework before selecting a solution.
Integration software, particularly programs that meet very complex requirements, can appear overwhelming at first. And while you can begin using integration software without training, a bit of online or phone-based training can pay dividends, since there are proper ways to navigate the software and leverage its tools more effectively and efficiently. Continue reading →
Class and object design can be a tricky thing. It’s not really something that can be taught or quantified; the skill is developed through years of experience, and it may be more of an art than a science. No expert can offer an exact formula for how it should be done, but most experts can agree on what constitutes a bad design.
A good starting point is to develop a written problem statement outlining the issue at hand, typically by interviewing the customer or whoever the requirements are coming from. Once the problem statement is written, you will find that its nouns tend to become classes and its verbs tend to become operations. From there, relationships between the classes can be derived: phrases like “is a” can denote inheritance, “has a” can denote composition, “uses” can denote delegation, and so on.
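To make the noun/verb mapping concrete, here is a minimal sketch in Python built around a hypothetical problem statement (the domain, class names, and the flat 7% tax rate are all illustrative assumptions, not from any real requirements document):

```python
# Hypothetical problem statement:
# "A customer places orders; an order HAS line items;
#  a premium customer IS A customer; an order USES a tax calculator."

class TaxCalculator:
    """Delegate: 'an order USES a tax calculator.'"""
    def tax_for(self, amount):
        return round(amount * 0.07, 2)  # assumed flat 7% rate


class LineItem:
    def __init__(self, name, price):
        self.name = name
        self.price = price


class Order:
    def __init__(self, calculator):
        self.items = []               # composition: 'an order HAS line items'
        self.calculator = calculator  # delegation: tax logic lives elsewhere
    def add(self, item):
        self.items.append(item)
    def total(self):
        subtotal = sum(i.price for i in self.items)
        return subtotal + self.calculator.tax_for(subtotal)


class Customer:
    def __init__(self, name):
        self.name = name
    def discount(self):
        return 0.0


class PremiumCustomer(Customer):  # inheritance: 'a premium customer IS A customer'
    def discount(self):
        return 0.10
```

Notice how each relationship in the statement maps directly onto a language construct: subclassing for “is a,” a member list for “has a,” and a held collaborator for “uses.”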
A UML diagram should then be drawn modeling what was discovered in the problem statement. It is also good to keep a catalogue of design patterns at hand so that an appropriate pattern may be applied to a given module of the software system. Remember that object-oriented software is about loose coupling, encapsulation, and reuse, all of which design patterns help enforce. The initial diagram is really more of a guideline than concrete fact. When implementation begins, new facts will come to light that had not been considered before, ultimately forcing the redesign of some components of the software. When this happens, the diagram should be updated to reflect the new functionality. This creates a cycle that bounces back and forth between implementation and redesign, converging on a final version of the software.
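As one illustrative example of how a pattern enforces loose coupling, here is a sketch of the Strategy pattern in Python (the exporter classes and report domain are invented for illustration):

```python
# Strategy pattern: the Report depends only on the export() interface,
# not on any concrete format, so formats can be swapped without redesign.

class CsvExporter:
    def export(self, rows):
        return "\n".join(",".join(map(str, r)) for r in rows)


class XmlExporter:
    def export(self, rows):
        items = "".join(f"<row>{','.join(map(str, r))}</row>" for r in rows)
        return f"<rows>{items}</rows>"


class Report:
    def __init__(self, exporter):
        self.exporter = exporter  # coupled only to the export() contract
    def render(self, rows):
        return self.exporter.export(rows)
```

If a new format is later required, only a new exporter class is added; `Report` and its callers remain untouched, which is exactly the kind of local, contained change an iterative redesign cycle depends on.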
This is really only a generalized overview of a monstrous topic with many different paradigms and many different ways of tackling the problem. As a closing thought, keep in mind that a design is never really complete: it can always be improved upon and made more robust, clearer, and more efficient.
If you’ve ever dealt with changes to a working version of a schema – whether database, EDI, XML, or whatever format your data may be in – then you know how painful it is. In the typical data processing scenario, most shops use either a tool or a custom program to process data in one format and convert it to another, to be piped off for further processing somewhere else. The most difficult changes to deal with can be changes to an XML schema, because XML is so extensible that just about anything can be done with it. The contrasting example is EDI data, where changes are usually minuscule and the structure itself does not vastly change. The typical change most IT shops face is a database change: the addition, deletion, or moving of a table or column, or a change in table or column properties.
If we look at this from the perspective of a model, a schema is really a tree or graph (depending on whether it’s recursive) with entities representing the schema structure. Continue reading →
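Treating a schema as a tree also suggests how changes can be detected mechanically: walk two versions of the tree and report which entities were added or removed. A minimal sketch (the `SchemaNode` shape and the example table/column names are assumptions for illustration):

```python
# Hypothetical model: a schema entity has a name, a type, and children.
class SchemaNode:
    def __init__(self, name, node_type, children=None):
        self.name = name
        self.node_type = node_type
        self.children = children or []


def diff(old, new, path=""):
    """Report entities added or removed between two schema versions."""
    here = f"{path}/{old.name}"
    changes = []
    old_kids = {c.name: c for c in old.children}
    new_kids = {c.name: c for c in new.children}
    for name in old_kids.keys() - new_kids.keys():
        changes.append(f"removed {here}/{name}")
    for name in new_kids.keys() - old_kids.keys():
        changes.append(f"added {here}/{name}")
    for name in old_kids.keys() & new_kids.keys():
        changes.extend(diff(old_kids[name], new_kids[name], here))
    return changes
```

For example, adding a `total` column to an `orders` table shows up as a single `added /db/orders/total` entry, which is the kind of precise change report that makes schema evolution far less painful.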
When you ask recent IT graduates about the IBM i, they either have never heard of it or think of green screens and RPG. But the IBM i has been more than green screens and RPG for a long time, and it has progressed along with the needs of its customer base, always keeping pace with current trends. It was 20 years after the original release of the IBM System/3x lineup that Sun released its first version of Java, and Java was not widely used until a few years ago. Those first releases of Sun Java performed poorly, and the first releases of Java on the IBM i had the same performance issues, but with the recent releases of the OS and Power hardware, Java performance on the IBM i is now in line with other platforms. Continue reading →
There’s a lot going on in our system… but what does it really mean?
Complex Event Processing (CEP) is an emerging field that leverages the transport layer of the Event-Driven Architecture (EDA) model and applies an “analysis” layer on top of it. EDA enables you to monitor and analyze important events that affect your business – like unusually large orders, significant inventory draws, critical processing delays, high- and low-water resource thresholds, and even suspicious activities – and provides a technological basis for responding to those events.
Supplementing conventional batch processing with EDA requires a mindset change, but once you have an EDA implemented, a lot of useful and actionable information about how things are working in your system becomes available. However, coordinating this analysis and correlating disparate events into meaningful information is not a trivial task. Continue reading →
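To give a flavor of what such analysis looks like, here is a minimal sketch of flagging “unusually large orders” from an event stream. Everything here is an assumption for illustration: the event dictionaries, the `order` type, and the rule of comparing each order to three times the running average are invented, not any standard CEP algorithm:

```python
# Hypothetical event analysis: flag any order whose amount exceeds
# `factor` times the average of the orders seen before it.

def unusually_large(events, factor=3.0):
    """Return order events whose amount is > factor * running average."""
    flagged, total, count = [], 0.0, 0
    for event in events:
        if event["type"] != "order":
            continue  # ignore non-order events in the stream
        if count > 0 and event["amount"] > factor * (total / count):
            flagged.append(event)
        total += event["amount"]
        count += 1
    return flagged
```

A real CEP engine would correlate many event types across time windows and sources, but even this toy rule shows the mindset shift: instead of batch-scanning data after the fact, the analysis runs as the events arrive.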