In software engineering we're often — and rightly so — looking for ways to be more efficient, to be faster than competitors, to reduce waste. Some of those ways work out really well, and some turn out to be false economies, where the time or other costs we initially save come back to cost us more in the end.
Here are seven examples, with charitable titles to show why they might seem like a good idea.
Anticipating needs
When you're working on a feature to integrate a payment provider, it's tempting to think "we should build this feature to accommodate any future payment providers we may add".
This could be smart, if you suspect that it's going to take six weeks to develop and you know the business needs multiple other payment providers to be added six weeks from now.
If the need is not immediate, though, resist the temptation. Keep it simple. That will get you feedback quicker and it will prevent changing circumstances from making all that work obsolete.
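To make that concrete, here's a minimal sketch (the provider and all names are hypothetical, not from any real integration): rather than designing a PaymentProvider interface, adapters, and a provider registry for providers you don't have yet, wire in the one provider you actually need and keep the call site small enough to change later.

```typescript
// Hypothetical sketch: "Acme" stands in for whatever single provider you integrate today.
type ChargeResult = { id: string; status: "succeeded" | "failed" };

// Stand-in for the real SDK or HTTP call of your one provider.
async function createAcmeCharge(amountInCents: number, currency: string): Promise<ChargeResult> {
  // In reality: a call to the provider's SDK or REST API.
  return { id: `charge_${Date.now()}`, status: "succeeded" };
}

// The simplest thing that could possibly work: one thin function the rest of the app calls.
// If a second provider genuinely arrives, this is the single seam you would generalise.
export async function chargeCustomer(amountInCents: number, currency: string): Promise<string> {
  const charge = await createAcmeCharge(amountInCents, currency);
  if (charge.status !== "succeeded") {
    throw new Error(`Charge ${charge.id} failed`);
  }
  return charge.id;
}
```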
See also the XP principles Do The Simplest Thing That Could Possibly Work and You Aren't Gonna Need It.
Parallel work streams
If you have a team of five developers, wouldn't you get the most utilisation out of them by having each developer plug away at their own piece of work?
Not necessarily. Pair programming, pair testing, ensemble programming, and ensemble testing have immense benefits for immediacy of feedback, the quality of delivered work, knowledge sharing, and team cohesion, against a cost of having multiple members of a development team working on the same thing.
To the uninitiated that cost is all that's visible. "How is it more efficient to have two developers deliver one thing instead of delivering two things?" Firstly, the cycle time is faster when collaborating: think one thing delivered instead of two things almost delivered. Secondly, the efficiency of team members working in isolation is illusory, because isolated work misses out on all the benefits of pair programming, pair testing, and ensemble (or mob) programming and testing.
Deliver value now, refactor later
You really do want to deliver value quickly, and unless you're in a safety-critical context (health, manufacturing) or a compliance-driven context (government, finance), you can worry about making it better later.
In practice, there are three complications which make this a false economy if they're not accounted for. First, teams don't do this for one development cycle and then do the actual refactoring and iterating later; "later" never comes, or at best some of it comes in ad hoc fashion over time. Second, hastily coded solutions tend to make a codebase harder and less safe to change. Third, when returning to the code later on, you (or by this time, someone else) have to spend time getting fully reacquainted with the implementation and its context.
Consider this and then decide "yes, actually all of that cost is worth incurring for getting this code change out now" (could be a hotfix, could be preventing imminent churn of a major customer, could be a huge revenue builder) or "no, we can prevent all that cost by spending a bit more time now without losing anything".
A fully loaded sprint
A team is heading into a new time-boxed development cycle and their main goal is to deliver the feature most requested by customers, one that's not very exciting but removes a ton of clicks and data entry. However, the team's projected velocity is double what they need to deliver it. They fill the rest of their capacity with various bits and bobs from the backlog: research, bug fixes, library updates, small feature requests.
A nice full sprint should lead to lots of good things delivered, but does it? What typically unfolds is that the primary goal doesn't feel primary, even to the one or two people doing work related to it. Questions, obstacles, and risks don't get identified or resolved as quickly as they could be.
Contrast that with a sprint where everyone is working towards the primary goal before attending to new work. It feels like a top priority: all communication and agile ceremonies are geared towards completing it, and anyone not directly writing code is getting stakeholder input or mitigating risks. Probably fewer tickets will get completed, but the primary goal is actually delivered, with more confidence that it is indeed what all those customers had been waiting for.
I'm not suggesting a team should work on only one thing per sprint, or even that the whole team should necessarily work together on one thing at a time. What I am suggesting is that if a sprint has a genuine primary goal, it should be treated like one, rather than paid lip service in the form of a "Sprint Goal" written down where no one looks at it while much of the rest of the sprint is filled with low-priority work.
Don't Repeat Yourself
Don't Repeat Yourself is probably one of the most well-known principles in software engineering. It makes intuitive sense. Don't code the same thing in multiple places when you could abstract it away into one place and call it from anywhere else. Then in the future you only have to make changes in that one place.
Unfortunately what this can lead to is premature abstraction or indiscriminate abstraction. Coming from the world of software testing, I've seen this a lot in UI test suites.
In premature abstraction, a UI interaction might be abstracted away into a helper function despite it only being used once. That's too early. You're now incurring the cost of the test being harder to debug for a benefit that does not yet exist.
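As a sketch of what that looks like in practice (a Playwright-style test with hypothetical selectors and names), consider a helper that has exactly one caller:

```typescript
// Hypothetical Playwright-style example; the selectors and the helper are illustrative only.
import { test, expect, Page } from "@playwright/test";

// Premature: abstracted into a helper even though only one test uses it.
async function fillShippingAddress(page: Page, street: string, city: string) {
  await page.getByLabel("Street").fill(street);
  await page.getByLabel("City").fill(city);
  await page.getByRole("button", { name: "Save address" }).click();
}

test("saved address appears in the checkout summary", async ({ page }) => {
  await page.goto("/checkout");
  // Inlining these three interactions would make a failure here easier to read and debug;
  // the helper only starts paying off once a second test actually needs it.
  await fillShippingAddress(page, "1 Example Street", "Springfield");
  await expect(page.getByTestId("summary-address")).toContainText("1 Example Street");
});
```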
In indiscriminate abstraction, everything is abstracted by default. Take for example the Page Object Model. It can result in beautifully formatted and easy-to-read specs, but when you're debugging someone else's tests it's not enough to read the specs: you have to continually navigate under the hood between the specs and various supporting files to understand what's going on and whether the issue is in the specs, page selectors, global selectors, page helper methods, global helper methods, or the UI.
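Here's a condensed, hypothetical sketch of that indirection. The layers that would normally live in separate files are squeezed into one block; the names are made up, but the hopping between layers is the point.

```typescript
// Hypothetical Page Object Model layers, condensed into one file for illustration.
import { test, expect, Page } from "@playwright/test";

// selectors/global.ts — shared selector catalogue
const selectors = {
  checkout: {
    discountInput: "[data-testid='discount-code']",
    total: "[data-testid='order-total']",
  },
};

// helpers/forms.ts — generic form helper
async function fillAndSubmit(page: Page, selector: string, value: string) {
  await page.locator(selector).fill(value);
  await page.keyboard.press("Enter");
}

// pages/checkout.page.ts — the page object
class CheckoutPage {
  constructor(private readonly page: Page) {}

  get total() {
    return this.page.locator(selectors.checkout.total);
  }

  async applyDiscount(code: string) {
    await fillAndSubmit(this.page, selectors.checkout.discountInput, code);
  }
}

// checkout.spec.ts — the spec reads nicely, but a failure in the assertion below could
// originate in the spec, the page object, the helper, the selectors, or the UI itself.
test("applies a discount code", async ({ page }) => {
  await page.goto("/checkout");
  const checkout = new CheckoutPage(page);
  await checkout.applyDiscount("WELCOME10");
  await expect(checkout.total).toHaveText("$90.00");
});
```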
This doesn't mean "never abstract" or "never use POM". It means, consider the tradeoffs.
Repay technical debt when it hurts
Some companies reserve a 20% (or higher) "technical debt" margin for all types of technical tasks outside of feature delivery. The argument against this is that people will then just look for things to do, things that aren't necessarily more valuable than the backlog items delivering on the business vision. An efficient way of working, so the argument goes, would be to do only the most important thing right now.
The practical outcome is that technical improvements go undiscovered or languish on a backlog. Individually it's hard for these improvements to compete with revenue-driving, user-facing features, but over time the codebase, the test framework, the local development environment, and the CI/CD pipeline become harder and harder to work with, and the application becomes slower and/or less secure for end users.
You can be better off eating the cost of a guaranteed lower team capacity in order to gain the long-term benefit of easy and rapid change and a reduction in cross-functional risks.
Hire more developers
You have three product development teams. You need double the work done. Being a leader who understands that growth requires investment, you greenlight a swift expansion of the teams to a total of six.
This never seems to work. As Brooks's Law suggests, doubling the number of developers doesn't double the amount of work done (or even maintain the current pace). The number of dependencies increases, the number of communication lines increases, there is lots of onboarding to be done, and either you have teams of new joiners who will be slower than existing teams for some time, or you mix all the teams and slow everyone down (by less), with severe disruption to the existing team dynamics.
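As a rough illustration: if everyone potentially needs a line of communication to everyone else, n people have n(n-1)/2 such lines, so three teams of five (15 people) have 105 of them while six teams of five (30 people) have 435, more than four times as many.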
It may be counterintuitive when aiming for fast growth, but the integration of new developers should be done slowly and deliberately in order to actually achieve the intended increase in pace (and have it be a sustainable pace, too). The trick is to do it before you need them, even if that means lower utilisation until that need arises. If it's too late for that, the focus should be less on manpower and more on making tough decisions about priorities.