Impact of Implementation of Ground-Up Architecture

Recently I’ve been working on a project where we have been switching the datasource code over from one codebase to another. The codebase is pretty monstrous and has at least 30 developers checking in on it.

Many things bothered me about the tasks in this. There was no documentation at all, so we had to spike out, over and over, what this new codebase was and how our old codebase worked. A lot of the baseline integration tests had also been removed, so when we made changes we had little confidence about what we had gotten working correctly and what we had broken.

I guess when you have to do this kind of thing there’s no roadmap, and it’s nearly impossible to estimate, even though management presses down on your team for that information. The re-architecture had no architect oversight and no visibility to the business (like an invisible oil change), but it was in reality a strategic move: the change would put our application on a shared codebase with other applications, possibly opening the door to delivering feature work more quickly. Maybe the end customer would see nothing but a better app, but the tech team’s business customers should have known about this major re-work.

Most of the spiking was done by me and a cohort over a few months. We had object designs and impedance mismatches, dead ends, all that good stuff you get when you wander into the unknown. It was tough because there was really nothing to be accountable for until, at the end, we produced a workable plan to get the job done.

Remember, this is a monster codebase. The current data system had at least four different types of implementation, so you couldn’t just “switch”: there was business logic buried in the way we did our datasources. The system was a sharding system that spread data out across multiple databases based on an encrypted key; the key could be derived from almost anything, and that, too, was tied to rules.
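As a rough illustration only (the real routing logic had business rules layered on top, and used an encrypted key rather than a plain hash), the core idea of routing a key to one of several databases might look something like this. All of these names are hypothetical, not from the actual codebase:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.List;
import javax.sql.DataSource;

// Hypothetical sketch: pick a database shard based on a digest of the shard key.
// The real system derived the key under its own rules; hashing stands in here
// just to show the routing idea.
public class ShardRouter {

    private final List<DataSource> shards;

    public ShardRouter(List<DataSource> shards) {
        this.shards = shards;
    }

    /** Map an arbitrary shard key (customer id, account number, etc.) to one database. */
    public DataSource resolve(String shardKey) {
        try {
            MessageDigest digest = MessageDigest.getInstance("SHA-256");
            byte[] hash = digest.digest(shardKey.getBytes(StandardCharsets.UTF_8));
            // Fold the first four bytes of the hash into a non-negative shard index.
            int folded = ((hash[0] & 0xFF) << 24) | ((hash[1] & 0xFF) << 16)
                    | ((hash[2] & 0xFF) << 8) | (hash[3] & 0xFF);
            return shards.get(Math.floorMod(folded, shards.size()));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-256 not available", e);
        }
    }
}
```

Every one of those routing decisions, in our system, also carried rules about what the key could be, which is exactly where the business logic was hiding.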

When I sat looking at the entire set of requirements we had gathered, the solution stood out: a series of small changes across all the classes over time. The METHOD of doing the work was actually dictating the architecture we’d live with until the conversion was complete. We would keep two data systems up and slowly switch over, class by class, until we were done. After looking at our options, this would work better than the whole-hog conversion my colleagues were pushing.

Sounds nice and agile, right?  It is.
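In code terms, the class-by-class switch roughly means putting each data-access class behind a common interface and pointing it at either the old or the new system, flipping one class at a time while both systems stay up. Here is a minimal sketch of that pattern; none of these type names come from the real codebase:

```java
// Hypothetical sketch of the class-by-class switch: both data systems stay up,
// and each repository is migrated independently behind a shared interface.
public interface CustomerRepository {
    String findNameById(long id);
}

class LegacyCustomerRepository implements CustomerRepository {
    @Override
    public String findNameById(long id) {
        // ... reads from the old sharded data system ...
        return "legacy:" + id;
    }
}

class NewCustomerRepository implements CustomerRepository {
    @Override
    public String findNameById(long id) {
        // ... reads from the new shared codebase's data system ...
        return "new:" + id;
    }
}

class RepositoryFactory {
    private final boolean customerMigrated; // flipped once this class is converted

    RepositoryFactory(boolean customerMigrated) {
        this.customerMigrated = customerMigrated;
    }

    CustomerRepository customerRepository() {
        return customerMigrated ? new NewCustomerRepository()
                                : new LegacyCustomerRepository();
    }
}
```

Once every class has been flipped, the legacy implementations and the switches in the factory simply get deleted.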

But I realize there is a weakness in this approach: during the conversion there is an opportunity to abuse having two database systems in place. You can say, “well, just tell everyone, and if it gets abused that’s a communication problem.” No, not true. Developers on the team have to drop in features and get them working, so at that moment they may be forced to use one system or the other just to make the application work. It’s called “continuous integration.” So the weakness is that a small, methodical conversion creates more tech debt along the way, debt that has to be refactored later.

This tech debt, though, is very acceptable, because the small changes are easier to test and track than a one-time wholesale conversion. And if the system gets abused a bit, we’ll fix it when we get to those classes.

It’s interesting how a good methodology can save the day.
