

An Alternative Approach to Sprint Planning - Introducing Hippogriff

Background

One of my first jobs in engineering was at a medical device company. It was pretty cool, as I wrote software that interacted with the device to show measurements and give physicians recommendations about those measurements.

To ensure that things were working correctly, we had a department called V&V (Verification and Validation). I had never heard the term before, so my boss explained that V&V was responsible for ensuring that we built the right thing, because it's engineering's job to build it right.

These two principles (build the right thing and build it right) have stuck with me throughout my career. So much so that I believe this may have been the start of my interest in product engineering and in wanting to understand the "why" behind the stuff that I build.

It was at that same job that I was introduced to the concept of Kanban and the idea of eliminating waste from our process, as waste inherently slows us down by focusing us on the wrong things. I'm known to be a process improver at heart, and the idea of Kaizen (continuous improvement) resonates with me.

So when I think about how many teams are tackling work breakdown and estimating, I can't help but think that we're spending our time on the wrong things and not enough on the right things.

My Experiences with Engineering Teams Today

A common piece of feedback from teams following Scrum principles is that they feel like they're in meetings all the time and there's no time to do the actual work. As someone who has a love/hate relationship with meetings, this is totally understandable.

There is a set number of meetings ("events" in Scrum parlance) that teams follow, one of which is sprint planning: a time to make sure that what the team is working on is aligned with priorities from product.

While I find value in the synchronization with product, I don't find much value in the estimation portion of the meeting, as it focuses attention on the wrong things.

If we look at the value that estimates provide, the goal is to ensure that the team is taking on a reasonable load for the sprint (i.e., what are our commitments?) and not over-extending, as that can cause burnout.

I have two problems with estimates. First, they're far too easy to take as deadlines (so much so that you've probably seen this meme spread around). Second, that deadline pressure pushes the team to go deeper into estimating stories (breaking them down into single tasks), which creates a feedback loop of ever more detailed estimation.

If this keeps up, you'll eventually find yourself in Waterfall mode, where the team needs every single requirement up front before they can do any development, which is the opposite of what we're trying to do.

Instead of the team focusing on estimates, I'd much rather see them strategize on an approach to the problem and let that be the driver of the work.

One approach that I've seen teams take is to use "relative sizing", so instead of saying that work will take four hours, they might say that it's a "2" (if using the Fibonacci approach) or it's a "Small" (if using T-Shirt sizing).

Side Note: I've also seen fruit and animals as relative sizing options.

The problem I have with relative sizing is that while it can be helpful for the team, it's total nonsense for business stakeholders. For example, let's say that we're working on the new Shiny Widget 9000 and Marketing wants to know when it's going to be ready so that they can start getting the marketing materials ready and start promotion. You can't tell them that it's going to take 108 story points or 16 Mediums, as that is meaningless.

They need a timeline, so what does engineering do? They look up historic trends for the team and say something like "We typically get 4 Mediums done in 2 weeks, so we're looking at 2 months, give or take." In other words, we end up correlating a relative size to a unit of time anyway. So what value did we gain?

When I think about planning, I'm focusing on stories that are independent, deliver value, and can be accomplished within a sprint. I don't necessarily care if the story is a 2, 3, or 5 as long as the team has a rough approach to the work and understands why we need the functionality.

A Different Approach

Instead of spending time on estimates, what if we approached planning this way?

  1. Product and Engineering work together to break priorities down into smaller items that can be accomplished in a sprint (remembering to keep them independent).
  2. The team works together to order the stories by priority (taking dependencies into account).
  3. The team takes on the first item.
  4. Product and Engineering can move/re-prioritize items as needed, but can't modify work in flight.
  5. As an item is completed, the team takes on the next most important item.

This focuses the meeting on the things that matter (what's important for us to work on and how we'd approach it) and less on the things that don't (fine-tuned estimates).

For those keeping track, this sounds pretty similar to Kanban (which it absolutely is 😀), but if you're looking for a catchy name, you can call this the Highest Priority Goes First (or Hippogriff) framework.

Long story short, I'd like to see teams spending less time on estimations and more time on figuring out what the problem is, solutioning, getting feedback, and iterating.

Isn't that what the Agile Manifesto is all about?

While working on this post, I found this article by Mountain Goat Software immensely helpful in capturing the goals of sprint planning, and it gave me some external validation that I'm not the only one who's experienced this problem.

Running Effective Experiments With the Team

As a leader, you're always on the lookout for new tools and approaches to help the team be more effective.

But what happens when you have an idea? How do you introduce it to the team and get buy-in? How do you encourage others to propose ideas as well? (Remember, your job isn't to have all the ideas, but to encourage ideas and choose the best ones.)

Let's say the idea works: what happens next? What if it fails, what do you do then? How do you share your lessons with others?

In this post, I'll walk you through my approach for running experiments with the team and how to answer these questions. Like any other advice, I've found success using this process, but you might find that you'll need to tweak or adjust it for your team.

Working In the Open

When it comes to the team, I'm a big proponent of working in the open. Not only does this reduce the number of questions from my leader about what we're doing, it also empowers others to chime in when they see something off or see the team going down the wrong path.

With this philosophy in mind, I document our experiments in the team wiki. Now, I know that we should favor people over processes; however, I've found immense value in taking the ten minutes to write things down, as it gets everyone on the same page and preserves the context when we look back at these experiments later.

To me, this is no different from a scientist writing down their experiments for later reference.

Defining an Experiment

As you might have guessed, I'm a big fan of using the scientific method for engineering work and especially so when it comes to experiments. As such, I capture the following info:

Photo by Louis Reed on Unsplash. Seriously, if we're not taking notes, what kind of scientists are we?
  • Context - Why are we doing this? What inspired the experiment or what problem are we trying to solve?
  • Hypothesis - What change are we proposing and what outcome are we looking for?
  • Implementation - How are we going to run this experiment?
  • Duration - How long are we going to run this experiment for?
  • (Optional) Immediate Failure Criteria - Is there anything that could happen during this experiment that would cause us to stop immediately?

For those looking for a template, you can find a markdown version in my Leadership Toolkit on GitHub.
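To make this concrete, here's a minimal sketch of a filled-in write-up, using the asynchronous stand-up example that appears later in this post (the actual template in the toolkit may differ):

# Experiment: Asynchronous Stand-Ups

## Context
Stand-ups have drifted into status reports and interrupt everyone's morning focus time.

## Hypothesis
If we post our updates in the team channel instead of meeting, we'll keep visibility into each other's work while reclaiming focus time.

## Implementation
Each person posts yesterday/today/blockers in the channel by 10am; anything blocking gets a thread.

## Duration
Two sprints.

## Immediate Failure Criteria
A blocker goes unacknowledged for more than one working day.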

Scheduling the Retrospective

With the experiment documented, I send a meeting request for the day after the experiment is scheduled to end. The goal of this meeting is to reflect on the experiment and decide whether we should adopt the changes or stop.

Leading the Team

After sending this meeting request, my job is to help the team implement the experiment and to coach and encourage as needed. Since it's a process change, it might take a bit for the team to adjust, so patience and understanding are critical here.

While we're going through the experiment, I note any changes that I'm noticing. For example, if we're running an experiment with asynchronous stand-ups, I'll take notes on how I feel about the team getting updates and how they're communicating with each other.

Depending on what comes up in our one-on-ones, I might even use this as a starter question.

Retrospective

Once the experiment has run its course, it's time to reflect and decide as a team whether to adopt the changes or reject them.

To prepare, I'd recommend getting the right people in the room and setting the context.

During the retro, the team should be doing the majority of the talking. Your role is to seed the conversation and make sure everyone gets their opinions out. I like to capture these notes on a board so that the team has clear visibility on what worked and didn't work.

Once the notes have been added to the board, it's time for the team to decide to adopt the changes or not. During this step, I remind the team that this process isn't set in stone and if we want to tweak it in a future experiment, that's normal and encouraged.

At this point, I update the experiment write-up that we did earlier with the team decision and the logic behind the decision. This provides an easy way of sharing our lessons with others.

Sharing Outcomes With Others

One cool thing about leading teams is that no two teams are the same. Between the personalities, skills, company culture, and motivations, what works for one team won't necessarily work for another (and vice versa).

Because of this, it's critical to share your results with your leader and your peers. This way, they can see what you did, what worked, what didn't, and hopefully get inspired to run their own experiments with their teams.

If the team paid a learning tax for an experiment, why wouldn't we share those results with others so that they can learn from our experiences? They might be able to make suggestions to turn a failure into a success or to ask questions about how we dealt with an issue.

The group being successful is your success, so don't hoard knowledge; share it with others!

With the write-up completed, you can start simply by sending a link to the group. A better approach is a standing agenda item in your team lead meeting where leaders can talk about experiments that have run recently and their outcomes.

Common Mistakes

When I've worked with leaders to introduce experiments, it can be a lot to take in because it's a different way of thinking. This is especially true if leaders aren't in a psychologically safe environment or if they have prior experiences that weren't successful.

I can't guarantee that you'll run experiments flawlessly, however, if you avoid these common mistakes, your odds of success will be higher.

Not Time Boxing the Experiment

One of the key features of the experiment is that it's only going to run for a set period of time, so that if you find that it's not working, you haven't permanently impacted the team.

If you have an experiment that's going to run in perpetuity, that's not an experiment anymore; that's a process change, and it shouldn't go through this workflow, because experiments can be abandoned but process changes typically can't.

Treating Experiments as Foregone Conclusions

At some point, you're going to get a directive from your leader that you don't agree with, but you need to commit to the idea anyway. You know the idea isn't going to go over with the team, so you think framing it as an experiment can soften the blow.

DON'T DO THIS!


Really, don't do this!

Experiments are just that: experiments. They are not a vehicle for you to make unpopular changes. If you use experiments to slip in these types of changes, then the team will learn that "experiment" is code for "not-great idea" and they'll stop using the process.

Remember, experiments are ideas that you and the team come up with to make things better, not directives from the top coming down.

Now, you could use an experiment to figure out a way to carry out the directive. A good leader tells you where we have to go, but not necessarily how to get there. The experiment could be to figure out how to get there, but not what the destination should be.

Running Multiple Experiments

When you get a new team, or after identifying multiple areas where a team could improve, it's going to be tempting to implement multiple changes at once.

Resist the urge.

Remember, an experiment, by definition, is a process change. So the more experiments you run, the more process changes happening, which in turn puts more stress on the team to remember all the changes.

In addition to all the process changes, you might find that one experiment futzes with another experiment and you may not get clear results.

Let's say we had two experiments going at the same time: asynchronous stand-ups and spending Tuesday afternoons on independent learning. During your one-on-ones, you get some feedback that it's a bit odd not knowing what other team members are working on.

What's driving that? Is it the async stand-ups? Or is it the dedicated learning time? Could it be both? You can't be sure.

Another way to think about this is to compare it to debugging a program. If something's not working, do you change five things at once? No, you change one thing, re-run, and see what happens.

Same thing for experiments.

But Cameron! This team is a hot mess and could stand to improve in so many areas, what should I do then?

Instead of running all the experiments at once, the team should decide which experiment would have the biggest payoff and pursue that one. Remember, you're not playing the short game; you're in it for the long haul, so you'll have time to make those changes.

Troubleshooting a DynamoDB Connection Issue

Most of my blog posts cover process improvements, leadership advice, and new (to me) technologies. In this post, I wanted to shift a bit and cover some of the fun troubleshooting problems that I run into from time to time.

Enjoy!

The Setup - How Did We Get Here?

At a high level, the team needed to process messages coming from a message queue, parse the data, and then insert it into a DynamoDB table. Here's what the architecture looked like:

graph LR
Queue[Message Queue] --> Lambda[Lambda]
Lambda --> Process[Process?]
Process --> |Failed| DLQ[Dead Letter Queue]
Process --> |Success| DB[DynamoDB Table]

The business workflow was a batch job that ran overnight, sending messages to various queues (including this one). The team knew we would receive about 100K messages but had plenty of time to process them, as this data wasn't needed in real time.

What Went Wrong?

For the first night, everything worked as intended. On the second night, however, the team saw that only some of the messages made it to their DynamoDB table. A non-trivial number of them errored out with a message of Error: connect EMFILE <IP ADDRESS>.

I don't know about you, but I had never seen EMFILE as an error before, and the logs weren't very helpful about what was going on.

Doing some digging, we found this GitHub issue where someone had run into a similar problem.

Digging through the comment chain, we found this comment, stating that you could run into this problem if you were exhausting the connection pool to DynamoDB.

Ah, now that's an idea! Even though I hadn't seen that error before, I know that if an application isn't cleaning up its connections properly, then the server can't accept new ones, and that would fail the application. With almost 100K messages coming through and the large number of failures, I could absolutely see how that might be the issue.

Inspecting the Code

With this in mind, I started to take a look at the lambda in question and found the following:

export const handler = (event) => {
  // logic to parse event

  const dbClient = DynamoDBDocumentClient.from(new DynamoDBClient());

  // logic to insert event
}

Aha! This code implies that for every execution of the lambda, it would attempt to create a new connection.

But Cameron, hold up. Yes, it will create the connection every time the lambda executes, but once the lambda is done, the connection will get cleaned up, so will it really try to spin up 100K connections?

You're right, when the lambda goes out of scope, the connection will get cleaned up.

But don't forget, it takes the target server (DynamoDB) some time to tidy up. Since we were slamming it with 100K messages in rapid succession, DynamoDB didn't have enough time to clean up old connections before new ones were requested. And that was the problem.

Resolution

Now that we have an idea of what the problem could be, it's time to fix it. In this case, the change is straightforward (though the reasoning might not be).

So instead of having this:

export const handler = (event) => {
  // logic to parse event

  const dbClient = DynamoDBDocumentClient.from(new DynamoDBClient());

  // logic to insert event
}

We moved the client creation outside of the handler block altogether:

const dbClient = DynamoDBDocumentClient.from(new DynamoDBClient());

export const handler = (event) => {
  // logic to parse event
  // logic to insert event
}
Wait, wait. How does this solve the problem? You're still going to be executing this code for every message, so won't you have the same issue?

Now that's a great question! Something the team learned is that when a Lambda spins up, AWS creates an execution context that hosts the external dependencies. When an invocation finishes, the context is kept around for a certain amount of time to be reused in case the Lambda is invoked again. This saves on init/start-up time.

Because of this shared context, creating the client outside the handler lets us essentially pool the connections and drastically reduce the number of connections needed.

This same advice is given in the best practices documentation for Lambda.
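If it helps to see the whole picture, here's a minimal sketch of the fixed handler with the imports spelled out. The package names are from the AWS SDK for JavaScript v3; the table name and item shape are hypothetical, for illustration only:

import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, PutCommand } from "@aws-sdk/lib-dynamodb";

// Created once per execution context and reused across warm invocations
const dbClient = DynamoDBDocumentClient.from(new DynamoDBClient({}));

export const handler = async (event) => {
  // logic to parse event would go here

  // hypothetical table name and item shape
  await dbClient.send(
    new PutCommand({
      TableName: "Messages",
      Item: { id: event.id, payload: event.payload },
    })
  );
};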

Lessons Learned

After making the code change and redeploying, we were able to confirm that everything was working again with no issues.

Even though the problem was new to us, this was a great opportunity for the team to learn more about how Lambdas work under the hood, understand the execution context, and get a bit of practice troubleshooting unknown errors.

Today I Learned: LibYear

When writing software, it's difficult (if not impossible) to write everything from scratch. We're either using a framework or various third-party libraries to make our code work.

On one hand, this is a major win for the community because we're not all having to solve the same problem over and over again. Could you imagine having to implement your own authentication framework (on second thought...)

However, this power comes at a cost. These libraries aren't free as in beer, but more like free as in puppies. If we're going to take in the library, then we need to keep our dependencies up to date with the latest and greatest. New features, bug fixes, and security patches are released all the time, and the longer we let a library drift, the more painful the upgrade can be.

If the library is leveraging semantic versioning, then we can take a guess at the likelihood of a breaking change based on which number (Major.Minor.Maintenance) has changed. For example, going from 2.4.1 to 3.0.0 signals a breaking change, while 2.4.1 to 2.4.2 should be a safe drop-in.

  • Major - We've made a breaking change that's not backward compatible. Your code may not work anymore.
  • Minor - We've added new features that you might be interested in or made other changes that are backwards compatible.
  • Maintenance - We've fixed some bugs, sorry about that!

Keeping Up With Dependencies

For libraries that have known vulnerabilities, you can leverage tools like GitHub's Dependabot to auto-create pull requests that upgrade those dependencies for you. Even though the tool can be "noisy," this is a great way to take an active role in keeping libraries up to date.

However, this approach only works for vulnerabilities; what about libraries that are simply out-of-date? There's a cost/benefit to upgrading: the longer you go between upgrades, the riskier the upgrade becomes.

In the JavaScript world, we know that dependencies are listed in the package.json file with minimum versions (e.g., "lodash": "^4.17.0" means 4.17.0 or any backwards-compatible newer 4.x) and the package-lock.json file states the exact versions to use.

Using LibYear

I was working with one of my colleagues the other day and he referred me to a tool called LibYear that will check your package.json and lock file to determine how much "drift" you have in your dependencies.

Under the hood, it's combining the npm outdated and npm view <package> commands to determine the drift.
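Trying it out is low-ceremony. Something like the following should work from the root of a JavaScript project (assuming a recent version of Node; flags and output format may vary by version):

# report the drift between your installed versions and the newest releases
npx libyear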

What I like about this tool is that you can use this as a part of a "software health" report for your codebase.

As engineers, we get pressure to ship features and hit delivery dates, but it's our responsibility to make sure that our codebase is in good shape (however we define that term). I think this tool is a good way to capture a data point about software health, which then allows the team to decide whether we should update our libraries now or defer.

The nice thing about the LibYear package is that it lends itself to being run in a pipeline, where you could take those results and post them somewhere. For example, maybe you could write your own automation bot that posts the stats in your Slack or Teams chat.

It looks like there's already a GitHub Action for running this tool today, so you could start there as a basis.

Learning From Failures: Leveraging Postmortems for Good

As engineers, we love solving problems and digging into the details. I know that I feel a particular sense of joy when shipping a system to production for the first time.

However, once a system goes into production, it's going to fail. That's not a knock on engineering; that's a fact of our industry. We cannot build a system with 100% uptime, no matter how much we plan ahead or think things through.

When this happens, our job is to fix the issue and bring the systems back up.

A common mistake that I see teams make is that they'll fix the problem but never dig into why it happened. Fast forward three months, and the team runs into the same problem, making the same mistakes. I'm always surprised when teams don't share their knowledge, because if you've paid the tax of learning from the first outage, why would you pay the same tax to learn the lesson again?

This is where having a postmortem meeting comes into play. Borrowing from medicine, a postmortem is when the team gets together to analyze the outage and what could have been done differently to prevent it from happening. Other industries have similar mechanisms (for example, professional athletes review their games to learn where they made mistakes so that they can train differently).

One of the key concepts of the postmortem is that the goal isn't to assign blame, but to understand what happened and why. People aren't perfect, and it's not reasonable to expect them to be. This concept is so fundamental that another term you might hear for a postmortem is a blameless incident report (BIR).

Are you interested in learning how to run your own BIR process? Drop me a line, and if there's enough interest, I'll write a follow-up post!

Getting Started

Every company has its own process for handling an outage, but a healthy process should be able to answer these five questions at a minimum.

  1. What stopped working?
  2. Why did it stop working?
  3. How did we fix it?
  4. What led to the system breaking?
  5. What are we doing to prevent this issue from happening in the future?

Organizations may have additional questions for their process, for example:

  • What was the impact? (X customers were impacted for Y minutes)
  • How was this discovered? (Was it user reported, did support find it, or did our monitoring tools page on-call?)

You can always make a process more complicated, but it's hard to simplify one, so my recommendation is to start with the five questions and expand as your team evolves and matures.

What Stopped Working?

For there to be an outage, something had to stop working, right? So what was it? It's okay and normal to be a bit technical here; however, don't forget that this outage impacted our users, so we should strive to explain the outage in their terms.

Another way to think about this is "What stopped working and what impact did it have for our users?"

Here's a not-great answer to the question:

The Data Sync Java container stopped working.

While this does capture what stopped working, the details are quite vague. For example, did it not start? Did it start crashing? Did the whole process fail or only one part?

In addition, there's no clear delineation of the user impact. For example, could users not log into the application at all? Did certain parts of the application stop working?

We can improve this by including a bit more detail about what specifically stopped working.

The Data Sync process was failing to connect to the UserHistory database.

Ah, okay, now we know that the sync process couldn't connect to a specific database, and we can start getting a sense of why that would be a big deal. We're still not including the user impact, so let's add that detail in.

The Data Sync process was failing to connect to the UserHistory database. As such, when users logged into their account, they could not see the latest transactions for their account.

Much better, now I know that our users couldn't see recent transactions and it was something to do with the Data Sync process.

As a side benefit, if I'm a new engineer on this codebase or team, I now know more about the architecture and that this process is involved when users view their transactions.

Why Did It Stop Working?

At some point, the system was working, and if it's no longer working, something had to have happened, so what was it? Was there a code deployment to production? Did a feature flag get toggled that had an adverse effect? What about infrastructure changes like DNS entries or firewall rules?

This is a critical step, because if we don't know why it stopped working, then we don't have a good starting point when we dive into the circumstances behind the outage.

This doesn't have to be a page's worth of technical deep dive; a couple of sentences can suffice. In our outage, the issue was a port being blocked by the firewall (where it wasn't before).

There was an infrastructure change for the database that blocked port 1433, which is the default port for the database. Because of this change, no application was able to successfully connect to the database.

How Did We Fix It?

If you've gotten to the point of writing the BIR, then you've fixed the issue and the system is up and running again, right? So what did you do to fix it? Did you roll back the deployment? Disable a feature flag? Burn down the application, change your name, and get a new job? This is a cool part of the BIR because you're leveling up others: if they run into a similar issue, here's how you got back up and running.

In our example, we were able to unblock the port, so we can answer this question with:

Once we realized that port 1433 was being blocked by the firewall, we worked with the Infrastructure team to unblock the port. Once that change was completed, the Data Sync service was able to start syncing data to the UserHistory database.

What Led To the System Breaking?

This is where the meat of the conversation should take place. In a healthy organization, we assume that people want to do the right thing (if not, you have a much bigger problem than BIRs). So we've got to figure out how we got here: what were the motivations, and why did we do the things that we did?

One common approach is the 5 Whys, made popular by the Toyota Production System. The idea is that we keep digging into why something happened and don't stop at symptoms.

An example 5 Whys breakdown for this outage could be the following:

- Why did the Data Sync service start failing to connect to the UserHistory database?
    - Because the port that the Data Sync service was communicating with got blocked
    - Why did the port become blocked?
        - One of our security initiatives is to block default ports to lessen the chances of someone gaining access to our systems
        - Why is this an initiative?
            - Our current firewall solution doesn't support a way to have an _allowList_ of dynamic IP addresses. Since most vulnerability tools scan a network, they'll typically use default ports to see if there's a service running there and if so, try to compromise it.
            - Why doesn't our current firewall solution have support for dynamic IP addresses?
    - Why did we not see this issue in the lower environments?
        - The lower environments are not configured the same as our production environment
        - Why are these environments different?
            - Given that lowers receive less traffic than production, we have multiple databases installed on the same server, none of which are on the default port. By doing this, we're saving money on infrastructure costs.
        - Why did the team not realize that the lowers are configured differently?
            - The Data Sync process is an older part of our application that most of the team doesn't have knowledge of.
    - Why did our monitoring tools not catch the issue after deployment?
        - For the Data Sync process, we currently only have a health check, which only checks to see if the app is up, it doesn't check that it has line-of-sight to all its dependencies.
        - Why doesn't the health check include dependency checking?
            - Health checks are used to tell our cloud infrastructure to restart a service. Since restarting the service wouldn't have resolved the problem, that's why we don't have it included in the checks
        - Why don't we have other checks?
            - The Data Sync process predates our existing monitoring solutions and has been stable, so the work was never prioritized.

As you can see, this approach brings up lots of questions, including the motivation behind the work and why the team was doing it in the first place. It's possible to keep asking why, but once you've reached causes the team can act on, you've usually gone deep enough.

What Are We Doing To Prevent Similar Issues in the Future?

The system is going to have an outage, that's not up for debate. However, it would be foolish to have an outage and not do anything to prevent it from happening in the future. If we already paid the learning tax once, let's not pay it again for the same issue.

This should be a concrete list of action items that the team takes ownership of. In some cases, it's work the team can do themselves to prevent the issue going forward; in others, it's working with other groups to help them fix their part of the process.

In our hypothetical outage, we could have the following action items:

  • Add Additional Monitoring for the Data Sync process
  • Work with Security to determine approach for securing our database instance
  • Create environment diagram for Data Sync process
  • Create architecture diagram for Data Sync process

Example Blameless Incident Report

In this post, we covered the 5 main questions to answer for a BIR and what good responses look like. In this section, I wanted to go over an example BIR for the database issue that we've been exploring. As you'll see, it's not a verbose document, however, it does capture the main points and this is easily consumable for other teams to learn from our mistakes.

# Title: Users Unable To See Latest Transactions

## What Stopped Working?
The Data Sync process stopped being able to connect to the UserHistory Database. Because of this failure, when users logged into their account to see transactions, they were not able to see any new transactions.

## Why Did It Stop Working?

A change was made to the database infrastructure to block port 1433. This is the default port for a SQL Server database so when it was blocked, no application was able to communicate with the database.

## How Did We Fix It?

The firewall port change was reverted.

## What Led to the System Breaking?

To improve our security posture, we wanted to block default ports to our systems so that if someone gained access, they couldn't "guess" their way into a connection to the database.

When making these firewall changes, we start in the lower environments so that if there is a problem, we impact dev or staging and not production.

Unknown to the team, in the lower environments we have multiple databases installed on a single server, none of which are on port 1433. Because of this, we had false confidence that our changes were safe to deploy forward.

In production, each database has its own server, running on port 1433.

## What Are We Doing To Prevent Issues In The Future?

- **Check the environment differences** - Before making infrastructure changes, we're going to check what the differences are in our lower environments vs production.
- **Create architecture diagram** - Since one of the main issues is that the team didn't understand the architecture of the Data Sync process, we're going to create an architecture diagram that covers the flow of the service.
- **Create environment diagram** - To better understand our system, we're going to create an environment diagram that covers the databases at play and how the Data Sync process communicates.
- **Work with Security on Approaches for Securing Database** - We'll work with the Security team to either set up a way to have dynamic IPs work with our firewall technology or to change our Data Sync process to have a static IP.

How Using Vertical Slicing Can Minimize Dependencies and Deliver Value Faster

How do we break down this work?

It's a good question and it can help set the tone for the project. Assuming the work is more than a bug fix, it's natural to look at a big project and break it down to smaller, more approachable pieces.

Depending on how you break down the work, you can dramatically change how soon you get feedback from your users and how quickly you find issues.

In this post, let's look at a team breaking down a new feature for their popular application, TakeItEasy.

A New Day - A New Feature

It's a new sprint and your team is tackling a highly requested feature for TakeItEasy: the ability to set up a User Profile. Everyone is clear on the business requirements: we need the ability to save and retrieve the following information so that we can personalize the application experience for the logged-in user:

  • Display Name
  • Name
  • Email Address
  • Profile Picture

Going over the high-level design with the engineers, it's discovered that there's no way to save this data right now. In addition, we don't have a way to display this data for the user to enter or modify.

Breaking Work Down as Horizontal Layers

Working with the team, the feature gets broken down as the following stories:

  • Create the data storage
  • Create the data access layer
  • Create the User Profile screen

Once these stories are done, the feature is done, and that seems easy enough. As you talk with the team, though, a few things stand out to you.

  1. None of these stories is fully independent of the others. You can build out the User Profile screen, but without the data access layer, it's incomplete. Same with the data access layer: it can't be fully complete until the data storage story is done.

  2. It's difficult to demo the majority of the stories. Stakeholders don't care about data storage or the data access layer, but they do care about the user setting up their profile. With the current approach, it's not possible to demo any work until all three stories are done.

As you dig into each story, they all seem to be quite large:

  1. For the Data Storage work, it's an upgrade script to modify the Users table with nullable columns.
  2. For the data access story, it's updating logic to retrieve each of the additional fields and making sure to handle missing values from the database.
  3. For the User Profile screen, it's creating a new page, updating the routing, and adding multiple controls with validation logic for each of the new fields.

Is there a different way we can approach this work such that we can deliver something useful sooner?

Breaking Down the Work as Vertical Slices

The main issue with the above approach is that there's a story for each layer of the application (data, business rules, and user interface), and each of these layers is dependent on the others. However, no one cares about any single layer; they care about all of it working together.

Photo by Herson Rodriguez on Unsplash. Seriously, could you imagine enjoying a plate of nachos by first eating all the chips, then the beans, then the salsa?

One way to solve this problem would be to have a single story, Implement User Profile, that holds all this work, but that sounds like more than a sprint's worth of work. We know that the more work in a story, the harder it is to give a fair estimate of what's needed.

Another way to solve this problem is to change the way we slice the work by taking a bit of each layer into a story. This means that we'll have a little bit of database, a little bit of data access, and a little bit of the user interface.

If we take this approach, we would have the following stories for our User Profile feature.

Feature: Implement User Profile

  • Story: Implement Display Name
  • Story: Implement Name
  • Story: Implement Email
  • Story: Implement Profile Picture

Each story would have the following tasks:

  • Add storage for field
  • Update data access to store/retrieve field
  • Update interface with control and validation logic

There are quite a few advantages with this approach.

First, instead of waiting for all the stories to be done before you can demo any functionality, you can demo after completing a single story. This is a huge advantage because if things are looking good, you could potentially go live with one story instead of waiting for all three stories from before.

Second, these stories are independent of each other, as the work to Implement Display Name doesn't depend on anything from Implement Email. This increases the autonomy of the team and makes it easier to shift priorities, since at the end of any one story we can pick up any of the remaining stories.

For example, let's say that after talking more with customers, we need a way for them to add their favorite dessert. Instead of the business bringing in the new requirement and pushing back the timeline, engineering can work on that functionality next and get that shipped sooner.

Third, it's much easier to explain to engineers and stakeholders when a certain piece of functionality will be available. Going back to horizontal layering, it's not clear when a user would be able to set up their email address. Now it's clear when that work is coming up.

Why the Horizontal Slicing?

I'm going to let you in on a little secret. Most engineers are technically strong but can be ignorant of the business domain they're working in. Unless you're taking time to coach them on the business (or they've been working in the domain for a long time), engineers just don't know the business.

As such, it's difficult for engineers to speak in the ubiquitous language of the business; it's much easier to speak in technical details. This, in turn, leads to user stories that are technical in nature (modify table, build service, update pipeline) instead of user-focused (can set display name, can set email address).

If you're an engineer, you need to learn the business domain that you're working in. This will help you prevent problems in your software, because you'll know what the business can and literally can't do. In addition, it will help you see the bigger picture and build empathy with your users as you understand them better.

If you're in Product or the business, you need to work with your team to level up their business domain knowledge. This can be done by having them use the product like a user, giving them example tasks, and spending time talking about the domain. If you can get the engineers to be hands-on, every hour you invest here is going to pay huge dividends when it comes time to pick up the next feature.

Wrapping Up

The next time you and the team have a feature, try experimenting with vertically slicing your stories and see how that changes the dynamics of the team.

To get started, remember, focus on the user outcomes and make sure that each story can stand independently of one another.

If this post resonated with you, I'd like to hear from you! Feel free to drop me a line at CoachingCorner@TheSoftwareMentor.com!

Five Tips to Have More Effective Meetings

As a leader, it's inevitable that you will have to organize a meeting. Whether it's for updates, 1-1s, or making decisions, the team is looking to you to lead the conversation and make it a good use of time.

But how do you have a good meeting? That's not something that's covered in leadership training. Is it the perfect invite? A well honed pitch? Throw something out there and see if it sticks?

Like anything else, a good meeting needs some preparation, however, if you follow these five tips, I guarantee your meetings will be better than before.

Step 1: Does It Even Need to Be a Meeting?

The best kind of meeting is the one that didn't have to happen. Have you ever sat through a meeting where everyone did a bunch of talking, you halfway listened and thought to yourself, "this could have been an email?"

Photo by BRUNO CERVERA on Unsplash

Been there, done that, and have the t-shirt.

When I think about why we need meetings, it's because we're trying to accomplish something that one person alone couldn't get done. With this assumption in mind, I find that meetings take one of two shapes: sharing information (e.g., stand-ups, retrospectives, all-hands) or making a decision (e.g., technical approach, ironing out the business rules).

Depending on what you're trying to accomplish, the next thought is to determine whether the communication needs to be synchronous (get everyone together) or asynchronous (let people get involved at their own pace).

For example, if the team has been struggling in getting work done, then it makes sense to have a meeting to figure out what's happening and ensure that everyone is hearing the exact wording/tone of the messaging.

On the other hand, if your intent is to let the team know that Friday is a holiday, then that can be done through email or a message in your chat tool.

One way to figure out if the meeting could have been an email is to pretend you canceled it. Is there anything that can't proceed? If not, then maybe you don't need that meeting.

Step 2: How Do We Even Know If We're Successful?

Have you ever attended a meeting and not known what it was about or why you met? These meetings typically suffer from not having a goal or purpose.

Recall from Step #1, we're meeting because there's something that we need from the group that we couldn't do as individuals. So what is it?

When scheduling the meeting, include the purpose (here's why we're meeting) and the goal (here's how we know we're done) in the description. Not only is this a great way to focus the meeting, it also serves as a way for people to know whether they need to attend.

Photo by Afif Ramdhasuma on Unsplash

This is also a good litmus test of whether you know why there should be a meeting, as it forces you to think about the problem being solved and how it should be solved. If you're struggling to determine the purpose and the goal, then your attendees will struggle too.

Step 3: Do You Have The Right People?

A common mistake I see people make is inviting everyone who has a stake or passing interest in the topic, which can make for a large (10+ person) meeting.

Even though the intent is good (give everyone visibility), this is a waste, because the more people you have in a meeting, the less effective it will be. A meeting with four people will have a better conversation and get more done than a meeting with nine.

Let's pretend that you're at a large party and you see a group that you know, so you walk up to the group, hoping to break into the conversation.

As more people join the group, it naturally splits into smaller groups, each with its own conversation. The main reason is that the larger the group, the less likely you are to get a chance to participate. So you might start a conversation with one or two people, split off, and form a new group.

Meetings have the same problem. The larger the group, the more likely side conversations will happen, which makes it harder for you to facilitate and keep everyone on track.

To keep meetings effective, be sure to only include the necessary people. For example, instead of inviting an entire team, invite only 1 or 2 people.

At a high level, you need these three roles filled to have a successful meeting:

  1. The Shot Caller - This is the main stakeholder and can approve our decisions. Without their buy-in, no real decision can be made.
  2. The Brain Trust - These are the people who have the details and can drive the conversation. You want to keep this group as tightly focused as possible.
  3. The Facilitator - Generally the organizer, this is the person who ensures that the goal is achieved and keeps the meeting running.

One way to narrow down the invite list is to apply this test:

If this person can't make the meeting, then we can't meet.

If you can't accomplish the goal without them, then they need to be there. I'm such a believer in this that if it's the day of the meeting and we don't have the Shot Caller or the Brain Trust, I'll reschedule the meeting; I'd rather move it than waste everyone's time.

Photo by Jason Goodman on Unsplash

Step 4: Running the Meeting

It's the big day and you've got everyone in the room, now what?

In Step #2, we talked about having a purpose and goal for the meeting. Now is when we vocalize these two things to kick the meeting off. From there, we can seed the conversation with one of these strategies:

  • Asking an opening question to prime the Brain Trust.
  • Throwing to the Shot Caller to frame any restrictions the attendees need to be aware of.
  • Starting with a specific person to kick the conversation off.

Once the conversation starts flowing, your job is to keep the meeting on track. For those who've played games like Dungeons and Dragons, you're acting like a Game Master: you know the direction the meeting needs to go (the Goal), but the attendees are responsible for getting there.

It can be challenging to keep the meeting on track if you're also driving the conversation, so pace yourself, take notes, and get others involved to keep the conversation going.

When leading longer meetings (more than 60 minutes), make sure to take a 10 minute break.

For attendees, this allows them to stretch their legs, take a bathroom break, and stew on the conversation that's happened so far. For those who are more "thinkers" than "reactors," this gives them time to compose their thoughts and have better conversations after the break.

As a facilitator, this gives you a way to think about the meeting so far, identify areas that the group needs to dig into, and if needed, it can break the conversation out of a rut.

Step 5: Wrapping Up - How Do Things Get Done?

As the meeting comes to a close, we need to make sure that action follows next. A meeting with no follow-up is a lot like a rocking chair. Plenty of motion, but no progress being made.

To make sure that next steps happen, define action items with attendees who own getting them done. Action items don't have to be complex; they can be as simple as:

  • Defining stories for the team
  • Sending summary notes to other stakeholders
  • Following up with Person about X.

When defining action items, be wary of items that amount to scheduling another meeting (e.g., let's schedule a meeting with Team Y to get their perspective). This implies that you didn't have the right people in the room (see Step 3). Also remember: meetings are to get alignment or to come up with a solution, so what purpose would this follow-up meeting serve?

As the meeting wraps up, take a few moments to summarize the outcome, verbally ensure that action items have been assigned, and thank everyone for their attention and time.

Congratulations, You're an Expert With Meetings Now, Right?

Running effective meetings becomes easier if you take the time to do the necessary preparation. Even though these steps may seem heavy on documentation, you'll find that the preparation helps you focus on the core problem at hand, which helps focus the group, which makes everyone that much better.

By following these five steps, you'll increase your chances of having a great meeting and as you gain more experience, you'll become more comfortable running them.

Scaling Effectiveness with Docs - Finding Stale Docs

In a previous post, I argued that to help your team be effective, you need to have up-to-date docs, and to have this happen, you need some way of flagging stale documentation.

In this series, I show you how you can automate this process by creating a seed script, a check script, and then automating the check script. In today's post, let's develop the check script.

Breaking Down the Check Script

At a high level, our script will need to perform the following steps:

  1. Specify the location to search.
  2. Find all the markdown files in the directory.
  3. Get the "Last Reviewed" line of text.
  4. Check if the date is more than 90 days in the past.
  5. If so, print the file to the screen.

Specifying Location

Our script is going to search over our repository, however, I don't want our script to be responsible for cloning and cleaning up those files. Since the long term plan is for our script to run through GitHub Actions, we can have the pipeline be responsible for cloning the repo.

This means that our script will have to be told where to search and since it can't take in manual input, we're going to use an environment variable to tell the script where to search.

First, let's create a .env file that will store the path of the repository:

.env
REPO_DIRECTORY="ABSOLUTE PATH GOES HERE"

From there, we can start working on our script to have it use this environment variable.

index.ts
import { load } from "https://deno.land/std@0.195.0/dotenv/mod.ts";

await load({ export: true }); // this loads the env file into our environment

const directory = Deno.env.get("REPO_DIRECTORY");

if (!directory) {
  console.log("Couldn't retrieve the REPO_DIRECTORY value from environment.");
  Deno.exit();
}
console.log(directory);

If we were to run our Deno script with the following command deno run --allow-read --allow-env ./index.ts, we should see the environment variable getting logged.

Finding all the Markdown Files

Now that we have a directory, we need a way to get all the markdown files from that location.

Doing some digging, I didn't find a built-in library for doing this, but building our own isn't too terrible.

By using Deno.readDir/Sync, we can get all the entries in the specified directory.

From here, we can then recurse into the other folders and get their markdown files as well.

Let's create a new file, utility.ts, and add a new function, getMarkdownFilesFromDirectory:

utility.ts
export function getMarkdownFilesFromDirectory(directory: string): string[] {
  // let's get all the files from the directory
  const allEntries: Deno.DirEntry[] = Array.from(Deno.readDirSync(directory));

  // Get all the markdown files in the current directory
  const markdownFiles = allEntries.filter(
    (x) => x.isFile && x.name.endsWith(".md")
  );
  // Find all the folders in the directory
  const folders = allEntries.filter(
    (x) => x.isDirectory && !x.name.startsWith(".")
  );
  // Recurse into each folder and get their markdown files
  const subFiles = folders.flatMap((x) =>
    getMarkdownFilesFromDirectory(`${directory}/${x.name}`)
  );
  // Return the markdown files in the current directory and the markdown files in the children directories
  return markdownFiles.map((x) => `${directory}/${x.name}`).concat(subFiles);
}

With this function in place, we can update our index.ts script to be the following:

index.ts
import { load } from "https://deno.land/std@0.195.0/dotenv/mod.ts";
import { getMarkdownFilesFromDirectory } from "./utility.ts";

await load({ export: true }); // this loads the env file into our environment

const directory = Deno.env.get("REPO_DIRECTORY");

if (!directory) {
  console.log("Couldn't retrieve the REPO_DIRECTORY value from environment.");
  Deno.exit();
}

const files = getMarkdownFilesFromDirectory(directory);
console.log(files);

Running the script with deno run --allow-read --allow-env ./index.ts should print a list of all the markdown files to the screen.

Getting the Last Reviewed Text

Now that we have each file, we need a way to get its "Last Reviewed" line of text.

Using Deno.readTextFile/Sync, we can get the file contents. From there, we can split them into lines and find the last occurrence of Last Reviewed.

Let's add a new function, getLastReviewedLine, to the utility.ts file.

utility.ts
export function getLastReviewedLine(fullPath: string): string {
  // Get the contents of the file, removing extra whitespace and blank lines
  const fileContent = Deno.readTextFileSync(fullPath).trim();

  // Convert block of text to a array of strings
  const lines = fileContent.split("\n");

  // Find the last line that starts with Last Reviewed On
  const lastReviewed = lines.findLast((x) => x.startsWith("Last Reviewed On"));

  // If we found it, return the line, otherwise, return an empty string
  return lastReviewed ?? "";
}

Let's try this function out by modifying our index.ts file to display files that don't have a Last Reviewed On line.

index.ts
import { load } from "https://deno.land/std@0.195.0/dotenv/mod.ts";
import {
  getMarkdownFilesFromDirectory,
  getLastReviewedLine,
} from "./utility.ts";

await load({ export: true }); // this loads the env file into our environment

const directory = Deno.env.get("REPO_DIRECTORY");

if (!directory) {
  console.log("Couldn't retrieve the REPO_DIRECTORY value from environment.");
  Deno.exit();
}

const files = getMarkdownFilesFromDirectory(directory);
files
  .filter((x) => getLastReviewedLine(x) === "") // keep files missing the line
  .forEach((s) => console.log(s)); // print them to the screen

Determining If A Page Is Stale

At this point, we can get the "Last Reviewed On" line from a file, but we've got some more business rules to implement.

  • If there's a Last Reviewed On line but no date, then the file needs to be reviewed.
  • If there's a Last Reviewed On line but the date is invalid, then the file needs to be reviewed.
  • If there's a Last Reviewed On line and the date is more than 90 days old, then the file needs to be reviewed.
  • Otherwise, the file doesn't need review.

We know from our filter logic that we're only going to be looking at lines that start with "Last Reviewed On", so now we need to extract the date.

Since we assume the line starts with Last Reviewed On:, we can strip that prefix to get the rest of the line. We're also going to assume that the date will be in YYYY/MM/DD format.

utility.ts
export function doesFileNeedReview(line: string): boolean {
  if (!line.startsWith("Last Reviewed On: ")) {
    return true;
  }
  const date = line.replace("Last Reviewed On: ", "").trim();
  const parsedDate = new Date(date);

  // An invalid date reports NaN for its time, so flag the file for review
  if (isNaN(parsedDate.getTime())) {
    return true;
  }

  // We could use something like DayJS, but to keep libraries to a minimum,
  // we can calculate the cut-off date (90 days ago) with the built-in Date
  const cutOffDate = new Date(new Date().setDate(new Date().getDate() - 90));

  return parsedDate < cutOffDate;
}
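
To sanity-check the business rules, we can run a few quick spot checks (the lines below are hypothetical examples; any date more than 90 days before the day you run this returns true):

console.log(doesFileNeedReview("Some random line")); // true - no Last Reviewed On prefix
console.log(doesFileNeedReview("Last Reviewed On: not-a-date")); // true - invalid date
console.log(doesFileNeedReview("Last Reviewed On: 2020/01/01")); // true - more than 90 days old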

Let's update our index.ts file to use the new function.

index.ts
import { load } from "https://deno.land/std@0.195.0/dotenv/mod.ts";
import {
  getMarkdownFilesFromDirectory,
  getLastReviewedLine,
  doesFileNeedReview,
} from "./utility.ts";

// Load the .env file and export its values to Deno.env
await load({ export: true });

const directory = Deno.env.get("REPO_DIRECTORY");

if (!directory) {
  console.log("Couldn't retrieve the REPO_DIRECTORY value from environment.");
  Deno.exit(1);
}

getMarkdownFilesFromDirectory(directory)
  // doesFileNeedReview also flags files that are missing the Last Reviewed On line
  .filter((x) => doesFileNeedReview(getLastReviewedLine(x)))
  .forEach((s) => console.log(s)); // print the stale files to the screen

And just like that, we're able to print stale docs to the screen. At this point, you could create a scheduled batch job and start using this script.
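
For example, on a Unix-like system, a crontab entry along these lines would run the check every Monday at 9 AM (the directory path is hypothetical):

0 9 * * 1 cd /path/to/doc-checker && deno run --allow-read --allow-env ./index.ts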

However, if you want to share this with others (and have it run somewhere other than your machine), then stay tuned for the final post in this series, where we put this into a GitHub Action and post a message to Slack!

Scaling Effectiveness with Docs - Seeding Dates

In a previous article, I argued that to help your team be effective, you need to have up-to-date docs, and to have this happen, you need some way of flagging stale documentation.

This process lends itself to being easily automated, so in this series of posts, we'll build out the necessary scripts to check for docs that haven't been reviewed in the last 90 days.

All code used in this post can be found on my GitHub.

Approach

To make this happen, we'll need to create the following:

  1. A seed script that will add a Last Reviewed Date to all of our pages.
  2. A check script that will check files for the Last Reviewed Date, returning which ones are either missing a date or are older than 90 days.
  3. A scheduled job (using GitHub Actions) that will run our check script and post a message to our Slack channel.

For this post, we'll be creating the seed script.

Breaking Down the Seed Script

For this script to work, we need to be able to do three things:

  1. Determine the last commit date for a file.
  2. Add text to the end of the file.
  3. Get a list of files in a directory.

To determine the last commit date for a file, we can leverage git and its log command (more on this in a moment). Since we're mainly doing file manipulation, we could use Deno here, but it makes much more sense to me to use something like bash or PowerShell.

Determining the Last Commit Date For a File

To make this automation work, we need a date for the Last Reviewed On footer. You don't want to set all the files to the same date, though, because then they'll all come up for review in one big batch.

So you're going to want to stagger the dates. You could generate random dates, but honestly, the last commit date should be good enough.
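
If you did want random dates, here's a quick sketch (assuming GNU date for the -d flag):

# Hypothetical: pick a random offset between 0 and 89 days ago
randomOffset=$((RANDOM % 90))
date -d "-$randomOffset days" "+%Y/%m/%d"

That said, we'll stick with the last commit date.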

To do this, we can take advantage of git's log command with the --pretty flag.

We can test this out by using the following script.

file=YourFileHere.md
# %aI is the author date in strict ISO 8601 format
commitDate=$(git log -n 1 --pretty=format:%aI -- "$file")
# formatting the date to YYYY/MM/DD
# Note: date -d is GNU date syntax; on macOS, coreutils provides it as gdate
formattedDate=$(date -d "$commitDate" "+%Y/%m/%d")
echo "$formattedDate"

Assuming the file has been checked into Git, we should get the date back in a YYYY/MM/DD format. Success!

Adding Text to the End of a File

Now that we have a way to get the date, we need to add some text to the end of the file. Since we're working in markdown, we can use --- to denote a footer and then place our text.

Since we're going to be appending multiple lines of text, we can use the cat command with a heredoc.

file=YourFileHere.md
# Note the blank lines, this is to make sure that the footer is separated from the text in the file
# Note: The closing EOF has to be on its own line with no whitespace or other characters in front of it.
cat << EOF >> "$file"


---
Last Reviewed On: 2023/08/12
EOF

After running this script, we'll see that the blank lines and our new footer have been appended to the file.

Combining Into a New Script

Now that we have both of these steps figured out, we can combine them into a single script like the following:

file=YourFileHere.md
commitDate=$(git log -n 1 --pretty=format:%aI -- "$file")
# formatting date to YYYY/MM/DD
formattedDate=$(date -d "$commitDate" "+%Y/%m/%d")
# Note the blank lines, this is to make sure that the footer is separated from the text in the file
# Note: The closing EOF has to be on its own line with no whitespace or other characters in front of it.
cat << EOF >> "$file"


---
Last Reviewed On: $formattedDate
EOF

Nice! Given a file, we can figure out its last commit date and append it to the file. Let's make this more powerful by not having to hardcode a file name.

Finding Files In a Directory

At this point, we can update a file, but its name is hardcoded. We're going to have a lot of docs to review, and we don't want to update them manually, so let's figure out how to get all the markdown files in a directory.

For this exercise, we can use the find command. In our case, we need to find all the files with a .md extension, no matter what directory they're in.

directory=DirectoryPathGoesHere
find "$directory" -name "*.md" -type f

We're going to need to process each of these files, so some type of iteration would be helpful. After some digging, I found that Bash supports a for loop, so let's use that.

directory=DirectoryPathGoesHere
# Note: this simple loop assumes file paths without spaces
for file in $(find "$directory" -name "*.md" -type f); do
  echo "printing $file"
done

If everything works, we should see each markdown file name being printed.

When a Plan Comes Together

We've got all the pieces, so let's bring this together:

directory=DirectoryPathGoesHere
for file in $(find "$directory" -name "*.md" -type f); do
  commitDate=$(git log -n 1 --pretty=format:%aI -- "$file")
  # formatting date to YYYY/MM/DD
  formattedDate=$(date -d "$commitDate" "+%Y/%m/%d")
  # Note the blank lines, this is to make sure that the footer is separated from the text in the file
  # Note: The closing EOF has to be on its own line with no whitespace or other characters in front of it.
  cat << EOF >> "$file"


---
Last Reviewed On: $formattedDate
EOF
done

Bells and Whistles

This script works, and we could ship it as-is; however, it's a bit rough.

For example, the script assumes that it's in the same directory as your git repository. It also assumes that your repository is up-to-date and that it's safe to make changes on the current branch.

Let's make our script a bit more durable by making the following tweaks:

  1. Clone the repo to a new temp directory.
  2. Create a new branch for making changes.
  3. Commit changes and publish the branch.

Getting the latest version of the repo

For this step, let's add logic for creating a new temp directory and adding a call to git clone.

# see https://unix.stackexchange.com/questions/30091/fix-or-alternative-for-mktemp-in-os-x#answer-84980
# for why tmpDir is being created this way
docRepo="RepoUrlGoesHere"
tmpDir=$(mktemp -d 2>/dev/null || mktemp -d -t 'docSeed')
cd "$tmpDir"
echo "Cloning from $docRepo"
# Note the . here; it lets us clone into the temp folder instead of a new folder named after the repo
git clone "$docRepo" . &> /dev/null

Making a new branch and pushing changes

Now that we've got the repo, we can add the steps for switching branches, committing changes, and publishing the branch.

# ... code to clone repository
git switch -c 'adding-seed-dates'
# ... code to make file changes
git add --all
git commit -m "Adding seed dates"
git push -u origin adding-seed-dates

Final Script

Let's take a look at our final script:

#!/bin/bash
docRepo="RepoUrlGoesHere"
tmpDir=$(mktemp -d 2>/dev/null || mktemp -d -t 'docSeed')
cd "$tmpDir"

echo "Cloning from $docRepo"
git clone "$docRepo" . &> /dev/null

git switch -c 'adding-dates-to-files'

for file in $(find . -name "*.md" -type f); do
  echo "updating $file"
  commitDate=$(git log -n 1 --pretty=format:%aI -- "$file")
  formattedDate=$(date -d "$commitDate" "+%Y/%m/%d")
  cat << EOT >> $file


---
Last Reviewed On: $formattedDate
EOT
done
git add --all
git commit -m "Adding initial dates"
git push -u origin adding-dates-to-files
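
Assuming you've saved this as seed.sh (the file name is arbitrary), you can make it executable and run it:

chmod +x seed.sh
./seed.sh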

Wrapping Up

In this post, we wrote a bash script to clone our docs and add a new footer to every page with the file's last commit date. In the next post, we'll build the script that checks for stale files.

Scaling Effectiveness with Docs

As a leader, I'm always looking for ways to help my team be more efficient. To me, an efficient team is self-sufficient, able to find the information it needs to solve its problems.

I've found that having up-to-date documentation is critical for a team because it scales out knowledge asynchronously, removing the need for manual knowledge transfers.

For example, my team has a wiki that contains information for onboarding into our space, how to complete certain processes (requesting time off, resetting a password), how to run our Agile activities, and our support guidebook. At any point, if someone on the team doesn't know how to do something, they can consult the wiki and find the necessary information.

Docs. Why Did It Have to Be Docs?

I enjoy up-to-date documentation, but the main problem with docs is that they capture the state of the world when they were written; they don't react to changes. If the process for resetting your password changes, the documentation doesn't auto-update. So unless you're spending time reviewing the docs, they'll grow stale and become worthless, or even worse, mislead others into doing the wrong things.

A good mental model for documentation is to think of them as a garden. When planted, it looks great, and everyone enjoys the environment. Over time, weeds will grow, and plants will become overgrown, causing the garden to be less attractive. Eventually, people will stop visiting, and the garden will go into disrepair. To prevent this, we must take care of the garden, removing the weeds and trimming the plants.

Outdoor green space (Photo by Robin Wersich via Unsplash.com)

Alright, I get it, documentation is important, but my team has commitments, so how do we carve out time to review?

Cameron Learns About Document Control

I started my career in healthcare, and one of my first jobs was writing software for a medical diagnostic device. We were ISO 9001 certified, and the device was considered Class II by the FDA. Long story short, this meant we had to provide documentation for our device and software and also show that we were keeping things up to date.

To comply, we would find docs that hadn't been updated in a specific time period (like 90 days) and review them. If everything checked out, we'd bump up the review date. Otherwise, we'd make the necessary changes and revalidate the document.

At the time, all of our files were in Word, so they weren't the easiest to search (I recall we had Outlook reminders, but this was many moons ago).

Baking this review work into our process made it more visible, which in turn gave us a better idea of the team's capacity for each sprint.

Thankfully, we have better technology than Word for sharing information, so how can we take this approach and bring it up to the modern day?

Modern Take on an Old Classic

First, I think that having your docs in source control is a great idea. If you're using tools like Git, you already have a way of leaving comments and keeping track of approvals through pull requests.

To make the most of Git, you should keep your docs in plaintext, as that makes it easy to see differences. I enjoy using Markdown, and tools like MkDocs make this workflow possible.

With this figured out, our next step is to know when the file was last reviewed. We can do that by adding a new line to the bottom of each file, Last Reviewed On: YYYY/MM/DD. To come up with the initial date, we could use the last time the file was modified (thanks git log!).
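
As a sketch, the end of a page would look something like this (the date is only an example):

... rest of the page's content ...

---
Last Reviewed On: 2023/08/12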

At this point, we have a way to see when a file was last reviewed. The next step is to write a script that can find files that haven't been reviewed in the last 90 days. At a high level, we'd do the following (sketched in code after the list):

  1. Get the latest version of the doc repository.
  2. Get all the markdown files for the repository.
  3. Get the last line of the file.
  4. If the line doesn't start with Last Reviewed On:, we flag it for review as it's never been reviewed.
  5. If the line has a date, but it's older than 90 days, we flag it for review as it might be stale.
  6. Print all flagged files to the screen.
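
Here's a minimal sketch of that flow; the helper functions are placeholders for what we'd actually build:

// Minimal sketch - helper functions are placeholders for the real implementations
const files = getMarkdownFilesFromDirectory(repoDirectory); // step 2
const flagged = files.filter(
  (file) => doesFileNeedReview(getLastReviewedLine(file)) // steps 3-5
);
flagged.forEach((file) => console.log(file)); // step 6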

With the script created, we could manually run it on Mondays. But we're technical, right? Why not create a scheduled task to execute the script instead? This removes a manual step and gives us visibility into which docs need review.

Wrapping Up

When scaling your knowledge out, having great documentation is necessary, as it allows your team to self-serve and work more asynchronously. The main problem with documentation is that it captures the state of the world when it was written; it doesn't automatically update when the world changes.

Therefore, we need to have some process to flag and review stale docs. To ensure it gets done, we provide visibility by creating work items and committing to them during the sprint.