Framework Complacency

This is a topic that is near and dear to my heart, in part because I believe I was once in the "wrong" camp - and likely still am in some respects. Essentially, what I mean by "framework complacency" is the apparent tendency for popular frameworks to reshape design patterns and industry standards through the functionality they provide. This may sound confusing, but have no fear, I will break it down in great detail.


Hard times create strong men

I say "men" in the least gendered way possible by the way – this quote just makes more sense this way. I address men, women, and everyone in between (or outside of that spectrum entirely for that matter).

When times get tough, engineers get stronger. The simple and easily observable truth is that challenging ourselves directly improves and hones our skillsets. Without challenges, we at best stay in place and at worst regress in our abilities. It is somewhat ironic for me – a millennial – to be talking about "hard times" in the context of the programming and software engineering world, given that I have countless high-level languages at my disposal that abstract away anything remotely difficult about computing.

I believe it is important to deeply understand the tools, platforms, languages, and hardware that you intend to work with. This challenges your current abilities, and gives a more abstract understanding of what you previously thought you knew. You go through the "hard times" to become a better engineer.

Strong men create good times

From electrical signals to binary, binary to assembly, assembly to C, and eventually on to high-level languages we know and love today, we are without a doubt enjoying the "good times" kickstarted by the early computer scientists and engineers. The times are so good, in fact, that even our high-level languages – which make hardware integration or TCP network connections a breeze – are becoming easier and more streamlined to use.

Herein lies the problem: the further we drift from the "concrete", the less we truly know about the things we are developing. As the software industry grows and demand for engineers continues to skyrocket, it is more important than ever that we take a step back and fully understand not only what we are building and why, but also how it works the way that it does.

Abstraction is an amazingly wonderful concept, in the abstract (no pun intended!). I believe it is invaluable when it comes to planning and architecture for a project, and that it is a necessary – and should be a mandatory – step in the software development process. However, when we come to rely on abstraction put in place by others without fully understanding it, we can fall victim to a lack of stimulation – a lack of challenge – and begin to morph into the framework we are building on, so much so that we and the framework become inseparable. At that point, we have already lost.

Good times create weak men

The software world moves at what seems like a million miles an hour on average. This creates an ever-changing landscape of tools and frameworks, new code and deprecated code, all the while piling up mounds of technical debt. Acknowledging that we cannot avoid technical debt entirely, we should focus on ways to mitigate it.

One of the best ways to mitigate technical debt straight out of the gate is to minimize the "deprecation surface area" within your projects. This is the number of areas where external forces may require you to take an action within your project in order to keep it functioning. The most obvious of these areas are dependencies and packages, which again, cannot really be avoided entirely.

This is where I believe a decisive and tactful approach is needed: carefully chart a course from a blank project to MVP, avoiding external dependencies unless absolutely necessary, and avoiding reliance on abstraction provided by external tooling unless absolutely necessary.

Avoid relying on abstraction provided by external tooling unless absolutely necessary

It is far better to create your own abstraction layers within your project to handle the use of abstract implementations provided by a dependency. The idea here is to essentially create a layer or a "driver" that is maintained by your codebase which can provide access to an underlying dependency. This would in theory allow you to swap out dependencies without drastically altering your codebase.
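To make this slightly more concrete, here is a minimal sketch in TypeScript of what such a "driver" might look like. Every name in it (EmailSender, AcmeEmailDriver, AcmeMailClient) is hypothetical; the point is simply that application code depends on an interface your codebase owns, while the vendor-specific glue lives in one small adapter.

```typescript
// Stand-in for a third-party SDK's client, declared here only for illustration.
// In reality this type would come from the package you installed.
interface AcmeMailClient {
  deliver(options: { recipient: string; subject: string; content: string }): Promise<void>;
}

// An interface owned by our codebase. Application code only ever sees this.
interface EmailSender {
  send(to: string, subject: string, body: string): Promise<void>;
}

// The "driver": a thin adapter that maps our interface onto the vendor SDK.
// Swapping vendors means writing a new driver, not touching every caller.
class AcmeEmailDriver implements EmailSender {
  constructor(private readonly client: AcmeMailClient) {}

  async send(to: string, subject: string, body: string): Promise<void> {
    await this.client.deliver({ recipient: to, subject, content: body });
  }
}

// Application code depends on the abstraction, never on the vendor directly.
async function sendWelcomeEmail(sender: EmailSender, email: string): Promise<void> {
  await sender.send(email, "Welcome!", "Thanks for signing up.");
}
```

If the vendor ever has to be replaced, only the driver changes; everything else in the codebase keeps calling EmailSender as before.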

Weak men create hard times

It's likely that many software engineers share a similar experience: technical debt hell. This can be "dependency hell", "code spaghetti", both, or anything in between. It is usually the result of relying too heavily on external dependencies and scattering them throughout your application.

At some point a refactor may be needed, and you will be stuck tracing dozens, hundreds, or even thousands of references or instances of a given piece of code. The "refactor and find" game will always be around to haunt us, but knowing that, we should at least attempt to make things easier for future-us when designing applications.

The worst offender when it comes to creating hard times is the engineer who claims that a refactor, change, or dependency swap will never be needed. The old adage "never say never" applies ten-fold to software engineering in my opinion, and Murphy's Law has of course found itself right at home in our industry as well. To think that an application's implementation will never change is, to be blunt, foolish. Planning an application includes planning for its future, which will almost certainly involve maintenance.


So what now?

My favorite question: so what now?

It's very easy to sit back and criticize, but it's more difficult to provide solutions. Especially in this case, it is actually impossible to find the "right" way; in fact, believing you have found the "right" way is usually in itself indicative of complacency.

Overall, the approach that I take to software design is a "top down" approach.

Top Down Approach

In the abstract, the approach is made up of five steps:

  1. Triage high-level requirements and assess business value
  2. Determine technical workload and abstract definition of functionality
  3. Determine semi-concrete technical implementation plan
  4. Break down implementation plan into actionable tasks
  5. Roll up tasks into minimum-viable components of a feature

Triage High-Level Requirements

These are business requirements such as "the customer would like to be able to send a text message to support". There are no technical details here really, although I'm sure you could begin extrapolating them immediately. At any rate, this is something that a product manager or higher would likely request.

We must also determine the business value. This may have already been done by the business themselves, so as engineers we may not need to worry about this. However, there may be cases where the engineering team is consulted to determine the impact of two or more features, for the purpose of prioritizing one over the other.

Determine Technical Workload and Abstract Functionality

This is where the engineering team starts to come in. More specifically, the senior or lead engineer(s) should be consulted for this step. Here we will determine an abstract technical workload: this would essentially be a guess by the engineering team on how long or how much effort something may take. This is immediately useful to the business side of the software, as it may inform priorities.

We also need to begin to map out some abstract functionality of the features requested. This can be done easily with pseudocode. The result of this process may be a flow chart, diagram, or writeup detailing each major component of the feature and how it may work at a high-level.

For example, we may determine that a feature needs to contact a certain service or API, for which we would detail that as an individual component of our plan. If this is ultimately consumed somewhere, we would then detail that as another component of the plan.
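As a rough illustration, take the earlier "the customer would like to be able to send a text message to support" request. A first pass might just name the major components and leave every detail open; the components, the SMS gateway, and the support system below are all hypothetical placeholders, written here as TypeScript-flavoured pseudocode.

```typescript
// Abstract component map for "the customer can send a text message to support".
// Each function stands in for a component of the feature, not an implementation.

async function receiveSupportMessage(message: string, customerId: string): Promise<void> {
  // Component 1: an endpoint or form handler that accepts the customer's message.
}

async function forwardToSmsGateway(message: string): Promise<void> {
  // Component 2: integration with some external SMS provider (to be chosen later).
}

async function createSupportTicket(message: string, customerId: string): Promise<void> {
  // Component 3: record the conversation in the support system so an agent can reply.
}
```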

This creates an initial breakdown, and you should begin to see how certain technical tasks will shake out.

Determine Technical Plan

Here we will go for "semi-concrete" – meaning that we can begin referencing specific areas of our actual codebase when writing up the implementation plan. This may include language such as "use the DomainApiService to fetch X data and send it to the Y processor service". This plan is not "fully concrete" in the sense that it should still make use of pseudocode for the most part, and doesn't need to be explicit in the fine details such as how to connect two services together.

Using our brief example above, we don't necessarily need to define the mechanism by which data will be sent from DomainApiService to ProcessorService – we only need to identify that the data needs to make that connection.
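A hedged sketch of what that "semi-concrete" level of detail might look like, using the placeholder service names from above (the method names and record type are my own hypothetical additions):

```typescript
// Semi-concrete: we can name pieces of our actual codebase, but the wiring stays loose.
type DomainRecord = { id: string; payload: unknown };

interface DomainApiService {
  fetchRecords(): Promise<DomainRecord[]>; // "fetch X data"
}

interface ProcessorService {
  process(records: DomainRecord[]): Promise<void>; // "the Y processor service"
}

// The plan only states that the data must make this hop. Whether the hop is a
// direct call, an HTTP request, or a queue is left for the implementation tasks.
async function runPipeline(api: DomainApiService, processor: ProcessorService): Promise<void> {
  const records = await api.fetchRecords();
  await processor.process(records);
}
```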

Break Down Actionable Tasks

At this point, you should directly involve your engineering team – depending on team size you may wish to involve everyone, or if you have a sufficiently large team you may wish to gather a subset of engineers for this step.

Here we will take the "semi-concrete" technical implementation plan and translate that into individual, actionable development tasks. These are tasks that a developer should be assigned to, and should be able to pick up and begin working on. Of course there may be cases where further clarification on the task is needed, but ideally they are written with enough information and acceptance criteria for a developer to self-start.

It is useful to involve more of the engineering team because you will begin exposing them to the work that is on the horizon. I have found that individual contributors often prefer to be involved in the tasks closest to them; that is, the tasks that they will be working on or are working on. This varies by engineer of course, but generally people would rather stay away from planning features 6 months out for instance.

However, involving the engineering team in the stuff that is directly on their horizon will give them a "heads up", allow them opportunities to raise concerns about the implementation plan, and offer their opinions on the effort needed to complete each task.

Roll Up Tasks

As the tasks are worked on and completed, they should "roll up" into functional features. Tasks should be prioritized in such a way that:

  1. As few conflicts as possible are created - avoid putting two or more developers on tasks that will affect identical locations in the codebase. This will speed up development.
  2. Tasks are completed to create incremental functionality for the feature - there will often be blockers, but generally we should avoid putting the cart before the horse, so to speak: start with the low-level stuff that will supply functionality for the high-level stuff.
  3. Backwards compatibility is retained until the feature is completed - this is often optional and depends on your project, but I find it best to segregate breaking changes until the entire feature is completed.

Conclusion

As you may be able to tell, I am quite opinionated when it comes to software engineering. I do believe that, with each "generation" of tooling and languages, there is a "best practice" way of doing things. Not necessarily a "right" way, but a "best practice" way – meaning that for the most part, any developer you encounter in that space will be familiar with the practice.

Ultimately, I think most teams need to just take a step back and slow down. Velocity is important, but solid and decisive architecture is much more important. The only situation where this may not be true is in small side-projects, projects that may never reach completion (I have many of those). I do not believe that any business big or small has a legitimate reason to "rapidly develop" at the expense of well-thought-out architecture. They are welcome to disagree, but I can tell you that any job opening that mentions "tech debt" as a major component of the position often fetches higher engineer rates :)

Nicole Wilging
