
Integration-first development

Yeah, yeah, I know, it's another one of those X-first development ideas, but this one has been a bit of a revelation to me. I have been convinced of its value so late in my career, when it should have been obvious from day one. It's not that I did not know about it, but I never realized its paramount importance or, conversely, the havoc it can cause if not adhered to. Oh well, it's never too late to do the right thing :). Now that I have talked so much about it, let me say a few words about what it means to me. It means: when you are starting a new piece of work, always start development at your integration boundaries. Yes, it is that simple, and yet it was so elusive, at least to me until now.

We have had numerous heartburns in the past over not being able to successfully integrate with external systems, leading to delays and frustrations. Examples of integration points in a typical software system are web services, databases, files dropped via FTP into obscure locations, and GUIs. The number one reason for our failure to integrate has been our inability to fully understand, firstly, the complexity of the data being transferred across the boundary and, secondly, the medium of transfer itself. We think we understand these two things and then proceed with development with mostly the sunny-day scenario in mind. We happily work on the story until we hit this integration boundary and then realize, damn, why is this thing working differently than we expected? Not completely different from what we thought, but different enough to warrant a redesign of our system and/or a renegotiation of some integration contract. By then we have spent a lot of time working on things that are very much under our control and could have been done later.

Now, what are the possibilities that we could not have thought of in the first place? Let me start with a real-world example that I have come across. We were integrating with a third-party vendor that delivered us data via XML files. We had to ingest these files, translate them to HTML and display them on the screen. Easy enough? We thought so too. We got a sample file from them, made sure we were parsing the XML using the schema provided and translating it into HTML that made sense for us. We were pretty good at exception handling and making sure that we did not ingest bad data. The problem started when we were handed a file of over 10 MB of data. Parsing such a huge file at request time and displaying the data on the screen in a reasonable amount of time was impossible. At this point we were forced to rethink our initial “just-in-time” parsing strategy. We moved a lot of processing offline, as in, pre-process all the XML into HTML and store it in the database. This alone did not solve the problem, for it was still not possible to render the raw HTML on the screen in a reasonable amount of time, since the HTML had to be further processed to be displayed correctly with all the styling. Obviously the solution was to cache the rendered HTML in memcached. We hit another roadblock with memcached being unable to store items greater than 1 MB in size. Surely, we could have used a gzipped memcached store or some other caching library, but that wouldn’t have solved our problem either. We chose to create another cache store in the database. After doing all this, some of the pages were still taking too long to load because of their sheer size, and as a result the business had to compromise by not showing those pages as links at all. But this meant that we had to know the page sizes beforehand, to determine where to show links and where not to. All this led to a lot of back and forth, which is fine, but had we known about this size issue earlier, it would have saved us a lot of cycles. By the way, we could have incrementally loaded the page using Ajax, but by then it was too late and it wasn’t trivial for our use case. We could have known about the size issue if we had integrated first and worked with a realistic data set rather than a sample file.
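To make the overflow idea concrete, here is a minimal sketch of the two-level cache we ended up with, assuming a memcached client with get/set (pymemcache, for example) and using a SQLite table as a stand-in for the database-backed overflow store; the names and limits are illustrative, not our production code:

```python
import sqlite3
import zlib

MEMCACHED_ITEM_LIMIT = 1024 * 1024  # memcached's default ~1 MB item size limit

def init_db_cache(conn: sqlite3.Connection) -> None:
    conn.execute("CREATE TABLE IF NOT EXISTS html_cache (key TEXT PRIMARY KEY, body BLOB)")

def cache_put(mc, conn: sqlite3.Connection, key: str, html: str) -> None:
    data = html.encode("utf-8")
    if len(data) < MEMCACHED_ITEM_LIMIT:
        mc.set(key, data)                      # small enough for memcached
    else:
        conn.execute(
            "INSERT OR REPLACE INTO html_cache (key, body) VALUES (?, ?)",
            (key, zlib.compress(data)),        # oversized pages overflow to the DB cache
        )
        conn.commit()

def cache_get(mc, conn: sqlite3.Connection, key: str):
    data = mc.get(key)
    if data is not None:
        return data.decode("utf-8")
    row = conn.execute("SELECT body FROM html_cache WHERE key = ?", (key,)).fetchone()
    return zlib.decompress(row[0]).decode("utf-8") if row else None
```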

So, as in the example above, our integration point dictated our architecture in a huge way and, more importantly, the business functionality, which is another reason to practice integration-first development. In an ideal world, you would like to insulate yourself from all the idiosyncrasies of your integration points, but that rarely happens in the real world.

Other issues around integration points have also influenced our design: web services going down often, and hence having to code defensively by caching the previous response from the service and setting an appropriate expiration time on it based on the time sensitivity of the data. In some cases, you might want to raise an alarm when the service goes down. If you know that it happens too often, you might want to raise the alarm less frequently and be more intelligent about it.
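A rough sketch of that defensive pattern might look like the following, where fetch_quotes and send_alert are hypothetical stand-ins for the actual service call and alerting hook, and the TTL and alarm interval are made-up numbers:

```python
import time

CACHE_TTL = 15 * 60        # how stale we can tolerate the data being, in seconds
ALARM_INTERVAL = 60 * 60   # raise at most one alarm per hour

_last_response = None
_last_fetched_at = 0.0
_last_alarm_at = 0.0

def get_quotes(fetch_quotes, send_alert):
    global _last_response, _last_fetched_at, _last_alarm_at
    now = time.time()
    if _last_response is not None and now - _last_fetched_at < CACHE_TTL:
        return _last_response                      # cached copy is fresh enough, skip the call
    try:
        _last_response = fetch_quotes()            # call the external service
        _last_fetched_at = now
    except Exception:
        if now - _last_alarm_at > ALARM_INTERVAL:  # don't alarm on every single failure
            send_alert("quote service is down")
            _last_alarm_at = now
        # fall back to the stale cached response rather than failing the page
    return _last_response
```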

There have been cases when we had no test environment to test a service against, and hence had to test with the production version of the service, which could be potentially dangerous. We had to do it in such a way as to not expose our test data in production. This meant our responses had to change based on which environment they were being sent from. Certain services charged us a fee per request, so we had to reduce the number of service calls to be more economical.

Some vendors offered data via database replication and occasionally sent us bad data. In this case, we had a blue-green database setup, with two identical databases, blue and green. Production would point to one of them, say blue. We would load the new data into the green database, run some sanity tests on it and, only when the tests passed, point production to green. We would then update blue to keep it in sync.
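The switch-over logic is simple enough to sketch; the helper names here (load_vendor_data, run_sanity_tests, repoint_production) are placeholders for whatever loader, test suite and configuration switch you actually have:

```python
def refresh_database(active: str, load_vendor_data, run_sanity_tests, repoint_production) -> str:
    standby = "green" if active == "blue" else "blue"

    load_vendor_data(standby)                 # load the new vendor feed into the idle database
    if not run_sanity_tests(standby):         # bad data never reaches production
        raise RuntimeError(f"sanity tests failed on {standby}; production stays on {active}")

    repoint_production(standby)               # flip production to the freshly loaded database
    load_vendor_data(active)                  # bring the old active side back in sync
    return standby                            # the new active database
```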

Some of the vendors offered data via flat files. Sometimes they would not send us files on time or would skip them completely. We updated our UI to show the last updated time, so as to be transparent. In addition to the file size episode I mentioned above, we had another problem with large files: our process would blow up when we processed large amounts of data. It turned out we were saving large amounts of data to the database in one go, which wasn't apparent initially.
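The fix was to write the data in bounded batches rather than one giant save; something along these lines, with made-up table and column names:

```python
import sqlite3
from itertools import islice

BATCH_SIZE = 1000

def save_records(conn: sqlite3.Connection, records):
    """Insert (id, body) tuples in fixed-size batches to keep memory and transactions bounded."""
    it = iter(records)
    while True:
        batch = list(islice(it, BATCH_SIZE))   # take the next slice of records
        if not batch:
            break
        conn.executemany("INSERT INTO listings (id, body) VALUES (?, ?)", batch)
        conn.commit()                          # commit per batch instead of one huge save
```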

The performance of external systems was an issue and obviously affected the performance of our app. We had to stress/load test the external system beforehand and build the right checks into our system in case it did not perform as expected.
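One of the simplest such checks is to never let a slow external call hold up a request indefinitely; here is a sketch using the requests library, with a placeholder URL and fallback:

```python
import requests

def fetch_listing(listing_id: int):
    try:
        resp = requests.get(
            f"https://vendor.example.com/listings/{listing_id}",
            timeout=(3, 10),   # 3 s to connect, 10 s to read, then give up
        )
        resp.raise_for_status()
        return resp.json()
    except requests.RequestException:
        return None            # caller falls back to cached or stale data
```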

We have tried to mitigate these integration risks in a number of ways. We implement a technical spike before integrating with an external system, to understand its interactions or, in some cases, even its feasibility. When working on data-focused stories, what I have found most useful is, as a first step, striving to get the entire (minimum releasable) dataset into the system, without worrying even slightly about the UI* jazz. Don't bother the developers with UI, since it can be distracting. Have the raw data displayed on the screen and then “progressively enhance” it by adding sorting, searching, pagination, etc. This is a popular concept in UI view rendering, where you load the most significant bits of the page first and then add layers of UI (JavaScript, CSS) to spruce it up, so that the user gets the maximum value first.

In a recent refactoring exercise, we moved a tiny sliver of our application to a new architecture to fully test out the integration points and have a working model first. After that, it was all about migrating the rest of the functionality to the new architecture.

There are so many things that can possibly go wrong when integrating, and it is always a good idea to iron out the good and bad scenarios before you start working on anything that you know you have full control over. I am not saying you can predict all these cases beforehand, but if you integrate first you have a much better chance of coming up with a better design and delivering the functionality in a reasonable time.

I would like to leave you with two thoughts: integrate first and test with real data.

* UI can have exacting requirements too, in which case it should be looked at along with the data requirements. Some UI requirements that could influence your architecture are loading data within a certain amount of time or in a certain way, such as all data being available on page load.
