Keep your development team productive by preventing functional debt

by Xavier Talpe 2 years ago in Product Management

4 min read

When talking about software debt, people tend to think only about technical debt:

You have a piece of functionality that you need to add to your system. You see two ways to do it, one is quick to do but is messy - you are sure that it will make further changes harder in the future. The other results in a cleaner design, but will take longer to put in place. - Martin Fowler

Technical debt comes in many forms, but ultimately it is a reflection of your codebase, documentation and architecture. Common causes of technical debt include duplicated code, missing tests and a lack of documentation. Technical debt also comes with a cost: it tends to decrease the productivity of your engineers, who have to rewrite bad code or deal with its consequences, such as bugs.

The same motivation that can cause technical debt ("quick but messy") can also lead to an entirely different kind of debt: functional debt. Since I couldn't find a proper definition of functional debt on the internet, I tried to come up with a definition myself:

Functional debt is the difference between what is currently implemented and what your end-user really needs

Two keywords are really important here: functional and debt.

Functional because we are talking about actual functionality for end-users.

Debt because it comes with a long-term (maintenance) cost that tends to increase over time.

Technical and functional debt are not necessarily correlated. A feature that perfectly addresses the needs of an end-user (and thus has no functional debt) could suffer from a tremendous amount of technical debt. The opposite is also possible. A feature could be implemented with almost no technical debt whatsoever but if it fails to fulfill the needs of end-users, you still end up with functional debt.

Let's further clarify this with an example.

Imagine you have a web application for creating and managing invoices for large enterprises. Because every invoice is linked to one customer, part of the application needs to manage customer data: creating, editing, deleting and viewing customers, as well as seeing which invoices are linked to which customer. The application is also used by an entire team of support people who help customers with any questions about their invoice(s).

During the early phases of building the application, the product manager decided it would be a good idea to add a global "search customer" feature. An example use case: when an angry customer calls to complain about their last invoice, a support representative can quickly search for that customer in the system and view the last invoice(s) that were sent to them.

Sounds easy, right? That's also what the product manager thinks. Despite the team of software engineers raising some objections about how "easy" this feature really is, the product manager decides that, because of time constraints, a simple free-form search field will be added to the user interface. It lets the user enter the name of a customer and then, via an API endpoint, searches the database for a customer with an exactly matching name.

Of course, searching for a customer requires the user of the application to enter a name. Anyone who has ever called a support desk knows how hard it is for the support person to correctly understand your name over the phone, let alone type it. As it turns out, the current version of the customer search feature isn't a great success. Users are expected to enter the name of a customer with 100% accuracy, whereas in practice they would much rather have some kind of fuzzy or approximate search, or even a suggestions algorithm. From a technical perspective the search functionality works just fine, but from a user perspective there is definitely room for improvement.
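The gap between the two behaviors is easy to sketch. The snippet below contrasts the exact-match search described above with a fuzzy alternative, using Python's standard-library `difflib`; the customer names are made up for illustration, and a real implementation would of course search the database rather than an in-memory list:

```python
from difflib import get_close_matches

# Hypothetical customer list, purely for illustration.
customers = ["Jonathan Meyers", "Joan Myers", "John Maier", "Bill Gates"]

def exact_search(name, names):
    """The current behavior: only a 100% correct name matches."""
    return [n for n in names if n.lower() == name.lower()]

def fuzzy_search(name, names, cutoff=0.6):
    """What users would rather have: tolerate misheard or mistyped names.

    Returns up to 3 candidates ranked by similarity ratio.
    """
    return get_close_matches(name, names, n=3, cutoff=cutoff)

print(exact_search("Jon Myers", customers))  # -> [] (no exact match)
print(fuzzy_search("Jon Myers", customers))  # 'Joan Myers' ranks first
```

A misheard "Jon Myers" returns nothing under exact matching, while the fuzzy version still surfaces the most likely candidates for the support representative to pick from.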

Unfortunately for the users of our application, there's very little time to fix or improve this functionality. Instead there's a more urgent feature that needs to be implemented: a privacy regulation (GDPR) feature.

One of the requirements of the new feature states that only some users in the company are allowed to access the entire list of customers. If the company happens to have a VIP customer (for example Bill Gates), it might be best if this information isn't visible to all support employees. Despite the fact that our current search feature isn't very popular with users (and will probably require a big refactoring in the future), it still needs to be updated for the new privacy feature so that search results can be filtered based on the permissions of a user.
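In its simplest form, that update amounts to filtering every search result through a permission check. The sketch below assumes a hypothetical per-user allow-list; the user names, customers and permission model are all invented for illustration:

```python
# Hypothetical permission model: each support user may only
# see the customers they have explicitly been granted access to.
ALLOWED = {
    "alice": {"Acme Corp", "Globex"},
    "bob": {"Acme Corp", "Globex", "VIP Holdings"},  # bob may see VIPs
}

customers = ["Acme Corp", "Globex", "VIP Holdings"]

def search(term, user):
    """Search, then drop any hit the user is not permitted to see."""
    hits = [c for c in customers if term.lower() in c.lower()]
    return [c for c in hits if c in ALLOWED.get(user, set())]

print(search("o", "alice"))  # VIP Holdings is filtered out for alice
print(search("o", "bob"))    # bob sees all three matches
```

Even this trivial version shows the point of the article: the existing search feature, flawed as it is, now has to be touched again just to ship an unrelated privacy requirement.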

This example demonstrates how the scope of new features can be negatively impacted by existing poorly defined or poorly working features. Sure, depending on the architecture of your application this particular change might not require any additional development at all, but most likely at least something will have to change. The mere fact that the feature exists means it needs to be brought up during analysis, design, testing and so on. Worst case, it adds significant overhead to the implementation of new features.

So is it bad to have functional debt? As always, it depends. If your goal is to gain a better understanding of the needs of your customer, or you have a first-to-market strategy, then intentionally taking on functional (and technical) debt can be a great strategy.

Finding the right balance, however, needs to be an explicit focus for companies that want to succeed in the long term. The pressure to keep building features often leads businesses into a situation where they suddenly grind to a halt, unable to deliver quickly, cheaply or with any kind of confidence in the result. This in turn leads to longer release cycles, slower feedback loops and lower-quality features.

So how can we get rid of functional debt? Unlike technical debt, which can essentially be solved through engineering effort, functional debt requires a more thoughtful approach.

For example: simply removing the search functionality described above would surely upset at least some of your customers. Or even worse: it could break their entire workflow, rendering your product completely useless for some of them! Sure, upsetting or even losing customers is scary, but is that really so bad if you can replace the feature with something better, something that provides even more value to your existing customers?

What most technical interviews are missing

by Xavier Talpe 2 years ago in Technical Interviews

2 min read

Imagine you're a restaurant owner looking for a new chef. You put out a job advertisement and receive a number of applications. You're not familiar with any of the candidates, nor with the restaurants they worked at in the past. How would you determine whether a candidate is worthy of being a chef in your restaurant?

During my career I've participated in dozens of technical interviews, the majority of them as a candidate and some of them as an interviewer. As a candidate, every technical interview seemed to be completely different from the others. Questions would range from very specific algorithms and data structures (suffix trees) to common terms in OOP (class vs object), as well as solving Codility exercises (turtle graphics) or running through a variety of JavaScript oddities.

While the majority of these questions often sparked some interesting technology debates (and I almost always learned something new), I often felt that these interviews rarely gave a good insight into my technical skills. A feeling that is apparently quite common among programmers.

Fortunately, not all interviews are the same. A couple of years ago I applied for a job as technical lead/architect at a startup. This interview ended up being the most honest, interesting and challenging interview I've ever had.

Before attending the interview, I was asked to bring along my development laptop for some live coding exercises. The interviewer, a senior programmer himself, started the interview by asking me some basic questions about my resume, what projects I had worked on, and so on. For the technical interview itself, he had prepared a variety of technical challenges I had to solve: from parsing a CSV file to writing a solution for the producer-consumer problem. Unlike past interviewers, he didn't ask for a "thinking out loud" solution. Instead, I was asked to write an actual working program on my laptop for every one of the technical challenges he had prepared.
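To give a sense of the kind of challenge involved, here is a minimal sketch of the producer-consumer problem mentioned above, written in Python with a bounded queue; the item counts and the doubling "work" are of course just illustrative:

```python
import queue
import threading

q = queue.Queue(maxsize=5)  # bounded buffer shared by both threads
results = []

def producer(n):
    for i in range(n):
        q.put(i)   # blocks when the buffer is full
    q.put(None)    # sentinel: tell the consumer to stop

def consumer():
    while True:
        item = q.get()  # blocks when the buffer is empty
        if item is None:
            break
        results.append(item * 2)  # stand-in for real work

t1 = threading.Thread(target=producer, args=(10,))
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # -> [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```

Even a small exercise like this leaves plenty of room for the discussion the interviewer wants: why a bounded buffer, why a sentinel value, what happens with multiple consumers.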

With my laptop hooked up to the big monitor in the room, my interviewer could easily follow along. Not only could he see the actual code I wrote, he was also able to follow my entire thought process: from analysis to design, implementation and even testing. He would also regularly ask questions about why I made certain decisions, how a specific class or method would behave, or what I would do differently if the use case were slightly different.

What felt so good about this interview was the fact that I was solving real-life problems using real-life tools (i.e. writing code in an IDE). At the same time it also gave my interviewer a very honest look at how I work: how I analyzed the problem, the questions I asked, the iterative way in which I built a solution, what I do when I'm stuck, how I write tests and use the debugger...

Seeing people code live gives you exceptional insight into how a candidate thinks, works and communicates.

Based on my personal experience interviewing candidates, I can also say it makes hiring decisions much easier compared to the traditional approach of asking a list of "theoretical" questions.

After all, if you had to hire a chef to work in your restaurant, wouldn't you at least make them cook a meal before deciding to hire them?