
Scaling Effectiveness with Docs - Finding Stale Docs

In a previous post, I argued that to help your team be effective, you need to have up-to-date docs, and to have this happen, you need some way of flagging stale documentation.

In this series, I show you how you can automate this process by creating a seed script, a check script, and then automating the check script. In today's post, let's develop the check script.

Breaking Down the Check Script

At a high level, our script will need to perform the following steps:

  1. Specify the location to search.
  2. Find all the markdown files in the directory.
  3. Get the "Last Reviewed" line of text.
  4. Check if the date is more than 90 days in the past.
  5. If so, print the file to the screen.

Specifying Location

Our script is going to search over our repository; however, I don't want the script to be responsible for cloning and cleaning up those files. Since the long-term plan is for our script to run through GitHub Actions, we can have the pipeline be responsible for cloning the repo.

This means that our script will have to be told where to search, and since it can't take in manual input, we're going to use an environment variable to tell it where to look.

First, let's create a .env file that will store the path of the repository:

.env
REPO_DIRECTORY="ABSOLUTE PATH GOES HERE"

From there, we can start working on our script to have it use this environment variable.

index.ts
import { load } from "https://deno.land/std@0.195.0/dotenv/mod.ts";

await load({ export: true }); // this loads the env file into our environment

const directory = Deno.env.get("REPO_DIRECTORY");

if (!directory) {
  console.log("Couldn't retrieve the REPO_DIRECTORY value from environment.");
  Deno.exit();
}
console.log(directory);

If we run our Deno script with the following command, deno run --allow-read --allow-env ./index.ts, we should see the environment variable logged.

Finding all the Markdown Files

Now that we have a directory, we need a way to get all the markdown files from that location.

Doing some digging, I didn't find a built-in library for doing this, but building our own isn't too terrible.

By using Deno.readDir/Sync, we can get all the entries in the specified directory.

From here, we can then recurse into the other folders and get their markdown files as well.

Let's create a new file, utility.ts, and add a new function, getMarkdownFilesFromDirectory.

utility.ts
export function getMarkdownFilesFromDirectory(directory: string): string[] {
  // let's get all the files from the directory
  const allEntries: Deno.DirEntry[] = Array.from(Deno.readDirSync(directory));

  // Get all the markdown files in the current directory
  const markdownFiles = allEntries.filter(
    (x) => x.isFile && x.name.endsWith(".md")
  );
  // Find all the folders in the directory
  const folders = allEntries.filter(
    (x) => x.isDirectory && !x.name.startsWith(".")
  );
  // Recurse into each folder and get their markdown files
  const subFiles = folders.flatMap((x) =>
    getMarkdownFilesFromDirectory(`${directory}/${x.name}`)
  );
  // Return the markdown files in the current directory and the markdown files in the children directories
  return markdownFiles.map((x) => `${directory}/${x.name}`).concat(subFiles);
}

With this function in place, we can update our index.ts script to be the following:

index.ts
import { load } from "https://deno.land/std@0.195.0/dotenv/mod.ts";
import { getMarkdownFilesFromDirectory } from "./utility.ts";

await load({ export: true }); // this loads the env file into our environment

const directory = Deno.env.get("REPO_DIRECTORY");

if (!directory) {
  console.log("Couldn't retrieve the REPO_DIRECTORY value from environment.");
  Deno.exit();
}

const files = getMarkdownFilesFromDirectory(directory);
console.log(files);

Running the script with deno run --allow-read --allow-env ./index.ts should print a list of all the markdown files to the screen.

Getting the Last Reviewed Text

Now that we have each file, we need a way to get the Last Reviewed On line from the end of the file.

Using Deno.readTextFile/Sync, we can get the file contents. From there, we can split the contents into lines and find the last occurrence of Last Reviewed On.

Let's add a new function, getLastReviewedLine, to the utility.ts file.

utility.ts
export function getLastReviewedLine(fullPath: string): string {
  // Get the contents of the file, removing extra whitespace and blank lines
  const fileContent = Deno.readTextFileSync(fullPath).trim();

  // Convert the block of text to an array of strings
  const lines = fileContent.split("\n");

  // Find the last line that starts with Last Reviewed On
  const lastReviewed = lines.findLast((x) => x.startsWith("Last Reviewed On"));

  // If we found it, return the line, otherwise, return an empty string
  return lastReviewed ?? "";
}

Let's try this function out by modifying our index.ts file to display the files that have a Last Reviewed On line.

index.ts
import { load } from "https://deno.land/std@0.195.0/dotenv/mod.ts";
import {
  getMarkdownFilesFromDirectory,
  getLastReviewedLine,
} from "./utility.ts";

await load({ export: true }); // this loads the env file into our environment

const directory = Deno.env.get("REPO_DIRECTORY");

if (!directory) {
  console.log("Couldn't retrieve the REPO_DIRECTORY value from environment.");
  Deno.exit();
}

const files = getMarkdownFilesFromDirectory(directory);
files
  .filter((x) => getLastReviewedLine(x) !== "")
  .forEach((s) => console.log(s)); // print them to the screen

Determining If A Page Is Stale

At this point, we can get the "Last Reviewed On" line from a file, but we've got some more business rules to implement.

  • If there's a Last Reviewed On line, but there's no date, then the file needs to be reviewed.
  • If there's a Last Reviewed On line, but the date is invalid, then the file needs to be reviewed.
  • If there's a Last Reviewed On line, and the date is more than 90 days old, then the file needs to be reviewed.
  • Otherwise, the file doesn't need review.

We know from our filter logic that we're only going to be looking at lines that start with "Last Reviewed On", so now we need to extract the date.

Since we assume the line starts with Last Reviewed On:, we can strip that prefix to get the rest of the line. We're also going to assume that the date is in YYYY/MM/DD format.

utility.ts
export function doesFileNeedReview(line: string): boolean {
  if (!line.startsWith("Last Reviewed On: ")) {
    return true;
  }
  const date = line.replace("Last Reviewed On: ", "").trim();
  const parsedDate = new Date(Date.parse(date));
  // An invalid date yields NaN for its time value
  if (isNaN(parsedDate.getTime())) {
    return true;
  }

  // We could use something like DayJS, but to keep libraries to a minimum, we can do the following
  const cutOffDate = new Date(new Date().setDate(new Date().getDate() - 90));

  return parsedDate < cutOffDate;
}
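
To sanity-check these rules, we can feed a few hypothetical lines through the function from a scratch file (the results for the dated lines depend on the day you run this):

sanity-check.ts
import { doesFileNeedReview } from "./utility.ts";

console.log(doesFileNeedReview("Some random line")); // true - missing the prefix
console.log(doesFileNeedReview("Last Reviewed On: not-a-date")); // true - invalid date
console.log(doesFileNeedReview("Last Reviewed On: 2020/01/01")); // true - more than 90 days old
console.log(doesFileNeedReview("Last Reviewed On: 2023/08/12")); // false if run within 90 days of that date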

Let's update our index.ts file to use the new function.

index.ts
import { load } from "https://deno.land/std@0.195.0/dotenv/mod.ts";
import {
  getMarkdownFilesFromDirectory,
  getLastReviewedLine,
  doesFileNeedReview,
} from "./utility.ts";

await load({ export: true }); // this loads the env file into our environment

const directory = Deno.env.get("REPO_DIRECTORY");

if (!directory) {
  console.log("Couldn't retrieve the REPO_DIRECTORY value from environment.");
  Deno.exit();
}

getMarkdownFilesFromDirectory(directory)
  .filter((file) => getLastReviewedLine(file) !== "")
  .filter((file) => doesFileNeedReview(getLastReviewedLine(file)))
  .forEach((file) => console.log(file)); // print them to the screen

And just like that, we're able to print stale docs to the screen. At this point, you could create a scheduled batch job and start using this script.

However, if you wanted to share this with others (and have this run not on your box), then stay tuned for the final post in this series where we put this into a GitHub Action and post a message to Slack!

Scaling Effectiveness with Docs - Seeding Dates

In a previous article, I argued that to help your team be effective, you need to have up-to-date docs, and to have this happen, you need some way of flagging stale documentation.

This process lends itself to being easily automated, so in this series of posts, we'll build out the necessary scripts to check for docs that haven't been reviewed in the last 90 days.

All code used in this post can be found on my GitHub.

Approach

To make this happen, we'll need to create the following:

  1. A seed script that will add a Last Reviewed Date to all of our pages.
  2. A check script that will check files for the Last Reviewed Date, returning which ones are either missing a date or are older than 90 days.
  3. A scheduled job (via GitHub Actions) that runs our check script and posts a message to our Slack channel.

For this post, we'll be creating the seed script.

Breaking Down the Seed Script

For this script to work, we need to be able to do three things:

  1. Determine the last commit date for a file.
  2. Add text to the end of the file.
  3. Get a list of files in a directory.

To determine the last commit date for a file, we can leverage git and its log command (more on this in a moment). Since we're mainly doing file manipulation, we could use Deno here, but it makes much more sense to me to use something like bash or PowerShell.

Determining the Last Commit Date For a File

To make this automation work, we need a date for the Last Reviewed On footer. You don't want to set all the files to the same date, because then they'll all come up for review in one big batch.

So, you're going to want to stagger the dates. You can do this by generating random dates, but honestly, getting the last commit date should be "good" enough.

To do this, we can take advantage of git's log command with the --pretty flag.

We can test this out by using the following script.

file=YourFileHere.md
commitDate=$(git log -n 1 --pretty=format:%aI -- $file)
## formatting date to YYYY/MM/DD
formattedDate=$(date -d "$commitDate" "+%Y/%m/%d")
echo $formattedDate

Assuming the file has been checked into Git, we should get the date back in a YYYY/MM/DD format. Success!

Adding Text to End of File

Now that we have a way to get the date, we need to add some text to the end of the file. Since we're working in markdown, we can use --- to denote a footer and then place our text.

Since we're going to be appending multiple lines of text, we can use the cat command with here-docs.

file=YourFileHere.md
## Note the blank lines, this is to make sure that the footer is separated from the text in the file
## Note: The closing EOF has to be on its own line with no whitespace or other characters in front of it.
cat << EOF >> $file


---
Last Reviewed On: 2023/08/12
EOF

After running this script, we'll see that the file has appended blank lines and our new footer.

Combining Into a New Script

Now that we have both of these steps figured out, we can combine them into a single script like the following:

file=YourFileHere.md
commitDate=$(git log -n 1 --pretty=format:%aI -- $file)
## formatting date to YYYY/MM/DD
formattedDate=$(date -d "$commitDate" "+%Y/%m/%d")
## Note the blank lines, this is to make sure that the footer is separated from the text in the file
## Note: The closing EOF has to be on its own line with no whitespace or other characters in front of it.
cat << EOF >> $file


---
Last Reviewed On: $formattedDate
EOF

Nice! Given a file, we can figure out its last commit date and append it to the file. Let's make this more powerful by not having to hardcode a file name.

Finding Files In a Directory

At this point, we can update a file, but its name is hardcoded. We're going to have a lot of docs to review, and we don't want to do this manually, so let's figure out how to get all the markdown files in a directory.

For this exercise, we can use the find command. In our case, we need to find all the files with a .md extension, no matter what directory they're in.

directory=DirectoryPathGoesHere
find $directory -name "*.md" -type f

We're going to need to process each of these files, so some type of iteration would be helpful. Doing some digging, I found that Bash supports a for loop, so let's use that.

directory=DirectoryPathGoesHere
for file in `find $directory -name "*.md" -type f`; do
  echo "printing $file"
done

If everything works, we should see each markdown file name being printed.

When a Plan Comes Together

We've got all the pieces, so let's bring this together:

directory=DirectoryPathGoesHere
for file in `find $directory -name "*.md" -type f`; do
  commitDate=$(git log -n 1 --pretty=format:%aI -- $file)
  # formatting date to YYYY/MM/DD
  formattedDate=$(date -d "$commitDate" "+%Y/%m/%d")
  # Note the blank lines, this is to make sure that the footer is separated from the text in the file
  # Note: The closing EOF has to be on its own line with no whitespace or other characters in front of it.
  cat << EOF >> $file


---
Last Reviewed On: $formattedDate
EOF
done

Bells and Whistles

This script works, and we could ship it; however, it's a bit rough.

For example, the script assumes that it's in the same directory as your git repository. It also assumes that your repository is up-to-date and that it's safe to make changes on the current branch.

Let's make our script a bit more durable by making the following tweaks:

  1. Clone the repo to a new temp directory.
  2. Create a new branch for making changes.
  3. Commit changes and publish the branch.

Getting the latest version of the repo

For this step, let's add logic for creating a new temp directory and adding a call to git clone.

## see https://unix.stackexchange.com/questions/30091/fix-or-alternative-for-mktemp-in-os-x#answer-84980
## for why tmpDir is being created this way
docRepo="RepoUrlGoesHere"
tmpDir=$(mktemp -d 2>/dev/null || mktemp -d -t 'docSeed')
cd $tmpDir
echo "Cloning from $docRepo"
## Note the . here; this allows us to clone into the temp folder and not into a new folder named after the repo
git clone "$docRepo" . &> /dev/null

Making a new branch and pushing changes

Now that we've got the repo, we can add the steps for switching branches, committing changes, and publishing the branch.

## ... code to clone repository
git switch -c 'adding-seed-dates'
## ... code to make file changes
git add --all
git commit -m "Adding seed dates"
git push -u origin adding-seed-dates

Final Script

Let's take a look at our final script:

#!/bin/bash
docRepo="RepoUrlGoesHere"
tmpDir=$(mktemp -d 2>/dev/null || mktemp -d -t 'docSeed')
cd $tmpDir

echo "Cloning from $docRepo"
git clone "$docRepo" . &> /dev/null

git switch -c 'adding-dates-to-files'

for file in `find . -name "*.md" -type f`; do
  echo "updating $file"
  commitDate="$(git log -n 1 --pretty=format:%aI -- $file)"
  formattedDate=$(date -d "$commitDate" "+%Y/%m/%d")
  cat << EOT >> $file


---
Last Reviewed On: $formattedDate
EOT
done
git add --all
git commit -m "Adding initial dates"
git push -u origin adding-dates-to-files

Wrapping Up

In this post, we wrote a bash script to clone our docs and add a new footer to every page with the file's last commit date. In the next post, we'll build the script that checks for stale files.

Coaching Corner Volume 4

Welcome to Cameron's Coaching Corner, where we answer questions from readers about leadership, career, and software engineering.

In this post, we'll look at a question posed by Bastien in the Engineering Manager's Slack Group on how to praise your team.

Context: New to Engineering Manager, managing 5 people and working in a 5 person team. My managees are not 100% on my team.

Details: OK, so I've quickly learnt how to spot mistakes and follow up improvements to both teams (one I manage and one I work on). I'm confident taking actions and communicating on all of that. But there is the other side -> congratulation and following up on behavior/action.

Example: The current team has low velocity. They recently finished the specs and review. It didn't happen for months (always late on that), but it's their "normal" velocity. I congratulated them, but I'm wondering if I should have since they "just did their job".

How do you congratulate your coworkers? Specifically

  • Do you? Why or Why Not?
  • How?
  • On trivial/exceptional stuff?
  • When and Where?

Having Coffee with Deno - Automating All the Things

Welcome to the final installment of our Deno series, where we build a script that pairs up people for coffee.

In the last post, we added the ability to post messages into a Slack channel instead of copying from a console window.

The current major problem is that we have to remember to run the script. We could always set up a cron job or scheduled task; however, what happens when we change machines? What if our computer stops working? What if someone else changes the script? How will we remember to get the latest version and run it?

Having Coffee with Deno - Sharing the News

Welcome to the third installment of our Deno series, where we build a script that pairs up people for coffee.

In the last post, we started dynamically pulling members of the Justice League from GitHub instead of using a hardcoded list.

Like any good project, this approach works, but now the major problem is that we have to run the script, copy the output, and post it into our chat tool so everyone knows the schedule.

It'd be better if we could update our script to post this message instead. In this example, we're going to use Slack and their incoming webhook, but you could easily tweak this approach to work with other tools like Microsoft Teams or Discord.

The Approach

In order for the script to post to Slack, we'll need to make the following changes:

  1. Follow these docs from Slack to create an application and enable the incoming webhooks.
  2. Test that we can post a simple message to the channel.
  3. From here, we'll need to add code to make a POST call to the webhook with a message.
  4. Tweak the formatting so it looks nicer.

Justice League Planning

Creating the Slack Application and creating the Webhook

For this step, we'll follow the instructions in the docs, ensuring that we're hooking it up to the right channel.

After following the steps, you should see something like the following:

Image of Slack App with Incoming Webhook

We can test that things are working correctly by running the curl command provided. If the message Hello World appears in the channel, congrats, you've got the incoming webhook created!
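
If you'd rather test from Deno than curl, here's a minimal sketch that posts the same test message (paste your webhook URL in; we'll move it into an .env file in a moment):

webhook-test.ts
const webhookUrl = "<yourWebHookHere>";

const resp = await fetch(webhookUrl, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ text: "Hello World" }),
});
console.log(resp.status, await resp.text()); // Slack replies with a 200 and "ok" on success

Running it with deno run --allow-net ./webhook-test.ts should drop the message into the channel.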

Modifying the Script to POST to Webhook

We have the Slack app created and verified that the incoming webhook is working, so we'll need to add this integration to our script.

Since we have this incoming webhook URL and Slack recommends treating this as a secret, we'll need to add this to our .env file.

Keep your webhook URL safe image

.env
GITHUB_BEARER_TOKEN="<yourTokenHere>"
SLACK_WEBHOOK="<yourWebHookHere>"

With this secret added, we can write a new function, sendMessage, that'll make the POST call to Slack. Since this is a new integration, we'll add a new file, slack.ts to put it in.

slack.ts
// Using axiod for the web connection
import axiod from "https://deno.land/x/axiod@0.26.2/mod.ts";

// function to send a message to the webhook
async function sendMessage(message: string): Promise<void> {
  // Get the webhookUrl from our environment
  const webhookUrl = Deno.env.get("SLACK_WEBHOOK")!;

  try {
    // Send the POST request
    await axiod.post(webhookUrl, message, {
      headers: {
        "Content-Type": "application/json",
      },
    });
  } catch (error) {
    // Error handling
    if (error.response) {
      return Promise.reject(
        `Failed to post message: ${error.response.status}, ${error.response.statusText}`
      );
    }
    return Promise.reject(
      "Failed for non status reason " + JSON.stringify(error)
    );
  }
}

export { sendMessage };

With sendMessage done, let's update index.ts to use this new functionality.

index.ts
import { load } from "https://deno.land/std@0.195.0/dotenv/mod.ts";
import {
  GetOrganizationMemberResponse,
  getMembersOfOrganization,
} from "./github.ts";
import { sendMessage } from "./slack.ts";
import { Pair, createPairsFrom, shuffle } from "./utility.ts";

await load({ export: true });

// Replace this with your actual organization name
const organizationName = "JusticeLeague";
const names = await getMembersOfOrganization(organizationName);
const pairs = createPairsFrom(shuffle(names));
const message = createMessage(pairs);
// Slack expects the payload to be an object of text, so we're doing that here for now
await sendMessage(JSON.stringify({ text: message }));

function createMessage(pairs: Pair<GetOrganizationMemberResponse>[]): string {
  const mapper = (p: Pair<GetOrganizationMemberResponse>): string =>
    `${p.first.login} meets with ${p.second.login}${
      p.third ? ` and ${p.third.login}` : ""
    }`;
  return pairs.map(mapper).join("\n");
}

And if we were to run the above, we'd see the following message get sent to Slack.

Message from Random Coffee

Nice! We could leave it here, but we could make the message prettier (having an unordered list and italicizing names), so let's work on that next.

Pretty Printing the Message

So far, we could leave the messaging as is; however, it's a bit muddled. To help it pop, let's make the following changes.

  • Italicize the names
  • Start each pair with a bullet point

Since Slack supports basic Markdown in the messages, we can use the _ for italicizing and - for the bullet points. So let's modify the createMessage function to add this formatting.

index.ts
function createMessage(pairs: Pair<GetOrganizationMemberResponse>[]): string {
  // Let's wrap each name with the '_' character
  const formatName = (s: string) => `_${s}_`;

  const mapper = (p: Pair<GetOrganizationMemberResponse>): string =>
    // and start each pair with '-'
    `- ${formatName(p.first.login)} meets with ${formatName(p.second.login)}${
      p.third ? ` and ${formatName(p.third.login)}` : ""
    }`;
  return pairs.map(mapper).join("\n");
}

By making this small change, we now see the following message:

Formatted Slack Message with italics and bullets

The messaging is better, but we're still missing some clarity. For example, what date is this for? Or what's the purpose of the message? Looking through these docs, it seems like we could add different text blocks (like titles). So let's see what this could look like.

One design approach is to encapsulate the complex logic for dealing with Slack and only expose a "common-sense" API for consumers. In this regard, I think using a Facade pattern would make sense.

We want to expose the ability to set a title and to set a message through one or more lines of text. Here's what that code could look like:

slack.ts
// This class allows a user to set a title and lines and then use the
// 'build' method to create the payload to interact with Slack

class MessageFacade {
  // setting some default values
  private header: string;
  private lines: string[];
  constructor() {
    this.header = "";
    this.lines = [];
  }

  // I like making these types of classes fluent
  // so that it returns itself.
  public setTitle(title: string): MessageFacade {
    this.header = title;
    return this;
  }
  public addLineToMessage(line: string | string[]): MessageFacade {
    if (Array.isArray(line)) {
      this.lines.push(...line);
    } else {
      this.lines.push(line);
    }
    return this;
  }

  // Here's where we take the content that the user provided
  // and convert it to the JSON shape that Slack expects
  public build(): string {
    // create the header block if set, otherwise null
    const headerBlock = this.header
      ? {
          type: "header",
          text: { type: "plain_text", text: this.header, emoji: true },
        }
      : null;
    // convert each line to its own section
    const lineBlocks = this.lines.map((line) => ({
      type: "section",
      text: { type: "mrkdwn", text: line },
    }));
    return JSON.stringify({
      // take all blocks that have a value and set it here
      blocks: [headerBlock, ...lineBlocks].filter(Boolean),
    });
  }
}

With the facade in place, let's look at implementing this in index.ts

index.ts
// ... code to get the pairs and formatted lines

// using the facade with the fluent syntax
const message = new MessageFacade()
  .setTitle(`☕ Random Coffee ☕`)
  .addLineToMessage(formattedPairs)
  .build();

await sendMessage(message);

When we run the script now, we get the following message:

Random Coffee Message with Header and Icons

Wrapping Up

In this post, we changed our script from posting its Random Coffee message to the console window to instead posting it into a Slack channel using an Incoming Webhook. By making this change, we were able to remove a manual step (e.g., us copying the message into the channel), and we were able to use some cool emojis and better formatting.

In the final post, we'll take this one step further by automating the script using scheduled jobs via GitHub Actions.

As always, you can find a full working version of this bot on my GitHub.

Scaling Effectiveness with Docs

As a leader, I'm always looking for ways to help my team to be more efficient. To me, an efficient team is self-sufficient, able to find the information needed to solve their problems.

I've found that having up-to-date documentation is critical for a team because it scales out knowledge asynchronously, removing the need for manual knowledge transfers.

For example, my team has a wiki that contains information for onboarding into our space, how to complete certain processes (requesting time off, resetting a password), how to run our Agile activities, and our support guidebook. At any point, if someone on the team doesn't know how to do something, they can consult the wiki and find the necessary information.

Docs. Why Did It Have to Be Docs?

I enjoy up-to-date documentation, but the main problem with docs is that they capture the state of the world when they were written; they don't react to changes. If the process for resetting your password changes, the documentation doesn't auto-update. So unless you're spending time reviewing the docs, they'll grow stale and become worthless, or even worse, mislead others into doing the wrong things.

A good mental model for documentation is to think of it as a garden. When first planted, it looks great, and everyone enjoys the environment. Over time, weeds grow and plants become overgrown, making the garden less attractive. Eventually, people stop visiting, and the garden falls into disrepair. To prevent this, we must tend the garden, removing the weeds and trimming the plants.

Outdoor green space

Photo by Robin Wersich via Unsplash.com

Alright, I get it, documentation is important, but my team has commitments, so how do we carve out time to review?

Cameron Learns About Document Control

I started my career in healthcare, and one of my first jobs was writing software for a medical diagnostic device. We were ISO 9001 certified, and the device was considered Class II by the FDA. Long story short, this meant that we had to provide documentation for our device and software and also show that we were keeping things up to date.

To comply, we would find docs that hadn't been updated in a specific time period (like 90 days) and review them. If everything checked out, we'd bump up the review date. Otherwise, we'd make the necessary changes and revalidate the document.

At the time, all of our files were in Word, so it wasn't the easiest to search them (I recall that we had Outlook reminders, but this was many moons ago).

Baking this review work into our process made it more visible, which, in turn, gave us a better idea of the team's capacity for that sprint.

Thankfully, we have better technology than Word for sharing information, so how can we take this approach and bring it up to the modern day?

Modern Take on an Old Classic

First, I think that having your docs in source control is a great idea. If you're using tools like Git, you already have a way of leaving comments and keeping track of approvals through pull requests.

To make the most of Git, you should keep your changes in plaintext, as it's easy to see the differences. I enjoy using Markdown, and tools like MkDocs make this workflow possible.

With this figured out, our next step is to know when the file was last reviewed. We can do that by adding a new line to the bottom of each file, Last Reviewed On: YYYY/MM/DD. To come up with the initial date, we could use the last time the file was modified (thanks git log!).

At this point, we have a way to see when a file was last reviewed. The next step is to write a script that can find files that haven't been reviewed in the last 90 days. At a high level, we'd do the following:

  1. Get the latest for the doc repository.
  2. Get all the markdown files for the repository.
  3. Get the last line of the file.
  4. If the line doesn't start with Last Reviewed On:, we flag it for review as it's never been reviewed.
  5. If the line has a date, but it's older than 90 days, we flag it for review as it might be stale.
  6. Print all flagged files to the screen.
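
Expressed as a sketch in TypeScript, using the helpers we build in the Finding Stale Docs post of this series (and assuming repoDirectory holds the path to the cloned repo), steps 2 through 6 collapse into a small pipeline:

getMarkdownFilesFromDirectory(repoDirectory)
  .filter((file) => doesFileNeedReview(getLastReviewedLine(file)))
  .forEach((file) => console.log(file)); // flagged for review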

With the script created, we could manually run it on Mondays. But we're technical, right? Why not create a scheduled task to execute the script instead? This removes a manual task and gives us visibility into which docs need review.

Wrapping Up

When scaling your knowledge out, having great documentation is necessary, as it allows your team to self-serve and work in a more asynchronous manner. The main problem with documentation is that it captures the state of the world when it was written, but it doesn't automatically update when the world changes.

Therefore, we need to have some process to flag and review stale docs. To ensure it gets done, we provide visibility by creating work items and committing to them during the sprint.

Having Coffee with Deno - Dynamic Names

Welcome to the second installment of our Deno series, where we build a script that pairs up people for coffee.

In the last post, we built a script that helped the Justice League meet up for coffee.

As of right now, our script looks like the following.

index.ts
const names = [
  "Batman",
  "Superman",
  "Green Lantern",
  "Wonder Woman",
  "Static Shock", // one of my favorite DC heroes!
  "The Flash",
  "Aquaman",
  "Martian Manhunter",
];
const pairs = createPairsFrom(shuffle(names));
const message = createMessage(pairs);
console.log(message);

function shuffle<T>(items: T[]): T[] {
  const result = [...items];
  for (let i = result.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [result[i], result[j]] = [result[j], result[i]];
  }
  return result;
}
type Pair<T> = { first: T; second: T; third?: T };
function createPairsFrom<T>(items: T[]): Pair<T>[] {
  if (items.length < 2) {
    return [];
  }
  const results: Pair<T>[] = [];
  for (let i = 0; i <= items.length - 2; i += 2) {
    const pair: Pair<T> = { first: items[i], second: items[i + 1] };
    results.push(pair);
  }
  if (items.length % 2 === 1) {
    results[results.length - 1].third = items[items.length - 1];
  }
  return results;
}
function createMessage(pairs: Pair<string>[]): string {
  const mapper = (p: Pair<string>) =>
    `${p.first} meets with ${p.second}${p.third ? ` and ${p.third}` : ""}`;

  return pairs.map(mapper).join("\n");
}

Even though this approach works, the major problem is that every time there's a member change in the Justice League (which seems to happen more often than not), we have to go back and update the list manually.

It'd be better if we could get this list dynamically instead. Given that the League are great developers, they have their own GitHub organization. Let's work on integrating with GitHub's API to get the list of names.

The Approach

To get the list of names from GitHub, we'll need to do the following.

  1. First, we need to figure out which GitHub endpoint will give us the members of the League. This, in turn, will also tell us what permissions we need for our API scope.
  2. Once we have an API token (a secret), update our script to read it from an .env file.
  3. Once we have the secret being read, we can create a function to retrieve the members of the League.
  4. Miscellaneous refactoring of the main script to handle a function returning complex types instead of strings.

Justice League Planning

Laying the Foundation

Before we start, we should refactor our current file. It works, but we have a mix of utility functions (shuffle and createPairsFrom) combined with presentation functions (createMessage). Let's go ahead and move shuffle and createPairsFrom to their own module.

utility.ts
type Pair<T> = { first: T; second: T; third?: T };

function shuffle<T>(items: T[]): T[] {
  const result = [...items];
  for (let i = result.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [result[i], result[j]] = [result[j], result[i]];
  }
  return result;
}

function createPairsFrom<T>(items: T[]): Pair<T>[] {
  if (items.length < 2) {
    return [];
  }
  const results: Pair<T>[] = [];
  for (let i = 0; i <= items.length - 2; i += 2) {
    const pair: Pair<T> = { first: items[i], second: items[i + 1] };
    results.push(pair);
  }
  if (items.length % 2 === 1) {
    results[results.length - 1].third = items[items.length - 1];
  }
  return results;
}

export { createPairsFrom, shuffle };
export type { Pair };

With these changes, we can update index.ts to be:

index.ts
import { Pair, createPairsFrom, shuffle } from "./utility.ts";

const names = [
  "Batman",
  "Superman",
  "Green Lantern",
  "Wonder Woman",
  "Static Shock", // one of my favorite DC heroes!
  "The Flash",
  "Aquaman",
  "Martian Manhunter",
];
const pairs = createPairsFrom(shuffle(names));
const message = createMessage(pairs);
console.log(message);

function createMessage(pairs: Pair<string>[]): string {
  const mapper = (p: Pair<string>) =>
    `${p.first} meets with ${p.second}${p.third ? ` and ${p.third}` : ""}`;

  return pairs.map(mapper).join("\n");
}

Getting GitHub

Now that our code is tidier, we can focus on figuring out which GitHub endpoint(s) we can use to get the members of the Justice League.

Taking a look at the docs, we see that there are two different options.

  1. Get members of an Organization
  2. Get members of a Team

What's the difference between the two? In GitHub parlance, an Organization is an overarching entity that consists of members which, in turn, can be part of multiple teams.

Using the Justice League as an example, it's an organization that contains Batman, and Batman can be part of the Justice League Founding Team and a member of the Batfamily Team.

Since we want to pair everyone up in the Justice League, we'll use the Get members of an Organization approach.

Working with Secrets

To interact with the endpoint, we'll need to create an API token for GitHub. Looking over the docs, our token needs to have the read:org scope. We can create this token by following the instructions here about creating a Personal Auth Token (PAT).

Once we have the token, we can invoke the endpoint with cURL or Postman to verify that we can communicate with the endpoint correctly.
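
If you'd rather verify from code, here's a quick hypothetical check in Deno (paste your token in; we'll move it to an .env file shortly):

verify-token.ts
const resp = await fetch("https://api.github.com/orgs/JusticeLeague/members", {
  headers: {
    Accept: "application/vnd.github+json",
    Authorization: "Bearer <yourTokenHere>",
    "X-GitHub-Api-Version": "2022-11-28",
  },
});
console.log(resp.status, await resp.json()); // expect a 200 and a list of members

Run it with deno run --allow-net ./verify-token.ts.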

After we've verified, we'll need a way to get this API token into our script. Given that this is sensitive information, we absolutely should NOT check this into the source code.

Creating an ENV File

A common way of dealing with this is to use an .env file, which doesn't get checked in but which our application can read at runtime to get secrets.

Let's go ahead and create the .env file and put our API token here.

.env
GITHUB_BEARER_TOKEN="INSERT_TOKEN_HERE"

Our problem now is that if we check git status, we'll see this file listed as a change. We don't want to check this in, so let's add a .gitignore file.

Adding a .gitignore File

With the .env file created, we need to create a .gitignore file, which tells Git not to check in certain files.

Let's go ahead and add the file. You can enter the below, or you can use the Node gitignore file (found here)

.gitignore
# ignore the .env file so our secrets stay out of source control
.env

We can validate that we've done this correctly if we run git status and don't see .env showing up anymore as a changed file.

Loading Our Env File

Now that we have the file created, we need to make sure that this file loads at runtime.

In our index.ts file, let's make the following changes.

index.ts
import { config as loadEnv } from "https://deno.land/x/dotenv@v3.2.2/mod.ts";
// other imports

// This loads the .env file and adds them to the environment variable list
await loadEnv({ export: true });
// Deno.env.get("name") retrieves the value from an environment variable named "name"
console.log(Deno.env.get("GITHUB_BEARER_TOKEN"));

// remaining code

When we run the script now with deno run, we get an interesting prompt:

Deno requests read access to ".env".
- Requested by `Deno.readFileSync()` API.
- Run again with --allow-read to bypass this prompt
- Allow?

This is one of the coolest parts about Deno; it has a security system that prevents scripts from doing something that you hadn't intended through its Permissions framework.

For example, if you weren't expecting your script to read from the env file, it'll prompt you to accept. Since packages can be taken over and updated to do nefarious things, this is a terrific idea.

The permissions can be tuned (e.g., you're only allowed to read from the .env file), or you can give blanket permissions. In our case, two resources are being used: the ability to read the .env file and the ability to read the GITHUB_BEARER_TOKEN environment variable.
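
For example, we could scope the grants down to exactly those two resources (depending on the dotenv module's defaults, you may need to allow an extra path or two, such as .env.defaults):

deno run --allow-read=.env --allow-env=GITHUB_BEARER_TOKEN ./index.ts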

Let's run the command with the allow-read and allow-env flags.

deno run --allow-read --allow-env ./index.ts

If the bearer token gets printed, we've got the .env file created correctly and can proceed to the next step.

Let's Get Dynamic

Now that we have the bearer token, we can work on calling the GitHub Organization endpoint to retrieve the members.

Since this is GitHub related, we should create a new file, github.ts, to host our functions and types.

Adding axiod

In the github.ts file, we're going to be using axiod for communication. It's similar to axios in Node and is better than the built-in fetch API.

Let's go ahead and bring in the import.

github.ts
import axiod from "https://deno.land/x/axiod@0.26.2/mod.ts";

Calling the Organization Endpoint

With axiod pulled in, let's write the function to interact with the GitHub API.

github.ts
// Bringing in the axiod library
import axiod from "https://deno.land/x/axiod@0.26.2/mod.ts";

async function getMembersOfOrganization(orgName: string): Promise<any[]> {
  const url = `https://api.github.com/orgs/${orgName}/members`;
  // Necessary headers are found on the API docs
  const headers = {
    Accept: "application/vnd.github+json",
    Authorization: `Bearer ${Deno.env.get("GITHUB_BEARER_TOKEN")}`,
    "X-GitHub-Api-Version": "2022-11-28",
  };

  try {
    const resp = await axiod.get<any[]>(url, {
      headers: headers,
    });
    return resp.data;
  } catch (error) {
    // Response was received, but non-2xx status code
    if (error.response) {
      return Promise.reject(
        `Failed to get members: ${error.response.status}, ${error.response.statusText}`
      );
    } else {
      // Response wasn't received
      return Promise.reject(
        "Failed for non status reason " + JSON.stringify(error)
      );
    }
  }
}

To prove this is working, we can call this function in the index.ts file and verify that we're getting a response.

index.ts
import { config as loadEnv } from "https://deno.land/x/dotenv@v3.2.2/mod.ts";
import { getMembersOfOrganization } from "./github.ts";
import { Pair, createPairsFrom, shuffle } from "./utility.ts";

await loadEnv({ export: true });

const membersOfOrganization = await getMembersOfOrganization("JusticeLeague");
console.log(JSON.stringify(membersOfOrganization));
// rest of the file

Now let's rerun the script.

deno run --allow-read --allow-env ./index.ts
Deno requests net access to "api.github.com"
- Requested by `fetch` API.
- Run again with --allow-net to bypass this prompt.

Ah! Our script is now doing something new (making network calls), so we'll need to allow that permission by using the --allow-net flag.

deno run --allow-read --allow-env --allow-net ./index.ts

If everything has worked, you should see a bunch of JSON printed to the screen. Success!

Creating the Response Type

At this point, we're making the call, but we're using a pesky any for the response, which works but doesn't tell us what properties we have to work with.

Looking at the response schema, it seems the main field we need is login. So let's go ahead and create a type that includes that field.

github.ts
type GetOrganizationMemberResponse = {
  login: string;
};

async function getMembersOfOrganization(
  orgName: string
): Promise<GetOrganizationMemberResponse[]> {
  //code
  const resp = await axiod.get<GetOrganizationMemberResponse[]>(url, {
    headers: headers,
  });
  // rest of the code
}

We can rerun our code and verify that everything is still working, but now with better type safety.

Cleaning Up

Now that we have this function written, we can work to integrate it with our index.ts script.

index.ts
import { config as loadEnv } from "https://deno.land/x/dotenv@v3.2.2/mod.ts";
import { getMembersOfOrganization } from "./github.ts";
import { Pair, createPairsFrom, shuffle } from "./utility.ts";

await loadEnv({ export: true });

const names = await getMembersOfOrganization("JusticeLeague");
const pairs = createPairsFrom(shuffle(names));
const message = createMessage(pairs);
console.log(message);

So far, so good. The only change we had to make was to replace the hardcoded array of names with the call to getMembersOfOrganization.

Not an issue, right?

Hmmm, what's up with this? createMessage has a type error

It looks like createMessage is expecting Pair<string>[], but is receiving Pair<GetOrganizationMemberResponse>[].

To solve this problem, we'll modify createMessage to work with GetOrganizationMemberResponse.

index.ts
// Need to update the input to be Pair<GetOrganizationMemberResponse>
function createMessage(pairs: Pair<GetOrganizationMemberResponse>[]): string {
  // Need to update mapper function to get the login property
  const mapper = (p: Pair<GetOrganizationMemberResponse>): string =>
    `${p.first.login} meets with ${p.second.login}${
      p.third ? ` and ${p.third.login}` : ""
    }`;

  return pairs.map(mapper).join("\n");
}

With this last change, we run the script and verify that we're getting the correct output, huzzah!

Current Status

Congratulations! We now have a script that is dynamically pulling in heroes from the Justice League organization instead of always needing to see if Green Lantern is somewhere else or if another member of Flash's SpeedForce is here for the moment.

A working version of the code can be found on GitHub.

Coaching Corner Volume 3

Welcome to Cameron's Coaching Corner, where we answer questions from readers about leadership, career, and software engineering.

In this week's post, we look at how Alan can help their engineer figure out what they want to be when they grow up.

Hey Cameron!

I have a front-end engineer who's sharp, but they're not sure what their career growth looks like. I get the sense that they're interested in other roles outside of software development. How do you navigate this and help them grow?

Having Coffee with Deno - Inspiration

In a previous post, I mentioned my strategy of building relationships through one-on-ones. One approach in the post was leveraging a Slack plugin, Random Coffee, to automate scheduling these impromptu conversations.

Dinosaur sitting in a coffee cup

I wanted to leverage the same idea at my current company; however, we don't use Slack, so I can't just use that bot.

High-Level Breakdown

Thinking more about it, the system wouldn't be too complicated as it has three moving parts:

  • Get list of people
  • Create random pairs
  • Post message

To make it even easier, I could hardcode the list of people, and instead of posting the message to our messaging application, I could print it to the screen.

With these two choices made, I would need to build something that can shuffle a list and create pairs.

Technology Choices

Even though we're hardcoding the list of names and printing a message to the screen, I know that the future state is to get the list of names dynamically, most likely through an API call. In addition, most messaging systems support using webhooks to create a message, so that would be the future state as well.

With these restrictions in mind, I know I need to use a language that is good at making HTTP calls. I also want this automation to be something that other people can maintain, so using a language that we typically use makes this more approachable.

In my case, TypeScript fit the bill as we heavily use it in my space, the docs are solid, and it's straightforward to make HTTP calls. I'm also a fan of functional programming, which TypeScript supports nicely.

My major hurdle at this point was that I wanted to execute a single file of TypeScript, and the only way I knew how to do that was by spinning up a Node application and using something like ts-node to execute the file.

When I talked to a colleague, they recommended I check out Deno as a possible solution. The more I learned about it, the more I thought it would fit perfectly. It supports TypeScript out of the box (no configuration needed), and a file can be run with deno run, no other tools needed.

This project is simple enough that if Deno wasn't a good fit, I could always go back to Node.

With this figured out, we're going to create a Deno application using TypeScript as our language of choice.

Getting Started With Deno

  1. Install Deno via these instructions
  2. Setup your dev environment - I use VS Code, so adding the recommended extension was all I needed.

Trying Deno Out

Once Deno has been installed and configured, you can run the following script to verify everything is working correctly. It creates a new directory called deno-coffee, writes a new file, and executes it via Deno.

mkdir deno-coffee
cd deno-coffee
echo 'console.log("Hello World");' >> coffee.ts
deno run coffee.ts

We've got something working, so let's start building out the random coffee script.

Let's Get Percolating

As mentioned before, we're going to hardcode a list of names and print to the screen, so let's build out the rough shape of the script:

const names = [
  "Batman",
  "Superman",
  "Green Lantern",
  "Wonder Woman",
  "Static Shock", // one of my favorite DC heroes!
  "The Flash",
  "Aquaman",
  "Martian Manhunter",
];
const pairs = createPairsFrom(shuffle(names));
const message = createMessage(pairs);
console.log(message);

This code won't compile yet, as we haven't defined what shuffle, createPairsFrom, or createMessage do, but we can tackle these one at a time.

Let's Get Random

Since we don't want the same people meeting up every time, we need a way to shuffle the list of names. We could import a library to do this, but what's the fun in that?

In this case, we're going to implement the Fisher-Yates Shuffle (sounds like a dance move).

function shuffle(items: string[]): string[] {
  // create a copy so we don't mutate the original
  const result = [...items];
  for (let i = result.length - 1; i > 0; i--) {
    // create an integer between 0 and i (inclusive)
    const j = Math.floor(Math.random() * (i + 1));
    // short-hand for swapping two elements around
    [result[i], result[j]] = [result[j], result[i]];
  }
  return result;
}

const words = ["apples", "bananas", "cantaloupes"];
console.log(shuffle(words)); // [ "bananas", "cantaloupes", "apples" ]

Excellent, we have a way to shuffle. One refactor we can make is to have shuffle be generic as we don't care what array element types are, as long as we have an array.

Making this refactor gives us the following:

function shuffle<T>(items: T[]): T[] {
  const result = [...items];
  for (let i = result.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [result[i], result[j]] = [result[j], result[i]];
  }
  return result;
}

Now, we can shuffle an array of anything. Nice!

Two of a Kind

Let's take a look at the next function, createPairsFrom. We know its type signature is going from string[] to something, but what?

In the ideal world, our total list of names is even, so we always have equal pairs.

{first: 'Batman', second: 'Superman'},
{first: 'Green Lantern', second: 'Wonder Woman'},
{first: 'Static Shock', second: 'The Flash'},
{first: 'Aquaman', second: 'Martian Manhunter'}

But what happens if Martian Manhunter is called away and isn't available? That would leave Aquaman without a pair to have coffee with (sad trombone noise).

In the case that we have an odd number of heroes, the last pair should instead be a triple which would look like the following:

{first: 'Batman', second: 'Superman'},
{first: 'Green Lantern', second: 'Wonder Woman'},
{first: 'Static Shock', second: 'The Flash', third: 'Martian Manhunter'}

Given that we've been using the word Pair to represent this grouping, we have a domain term we can use. This also means that createPairsFrom has the following type signature.

function createPairsFrom(names: string[]): Pair[] {
  // logic
}

But what does the Pair type look like? We can model either using an optional property or by using a discriminated union.

// Using an optional property
type Pair = {
  first: string,
  second: string,
  third?: string
}

// Using Discriminated Unions
type Pair = {kind: 'double', first:string, second: string}
          | {kind: 'triple', first:string, second: string; third: string}

For now, I'm thinking of going with the optional property and if we need to tweak it later, we can.

Let's go ahead and implement createPairsFrom.

function createPairsFrom(names: string[]): Pair[] {
  // if we don't have at least two names, then there are no pairs
  if (names.length < 2) {
    return [];
  }
  const results: Pair[] = [];
  for (let i = 0; i <= names.length - 2; i += 2) {
    const pair: Pair = { first: names[i], second: names[i + 1] };
    results.push(pair);
  }
  if (names.length % 2 === 1) {
    // we have an odd length
    // Assign the left-over name to the third of the triple
    results[results.length - 1].third = names[names.length - 1];
  }
  return results;
}

// Example execution
console.log(createPairsFrom(["apples", "bananas", "cantaloupes", "dates"])); // [{first:"apples", second:"bananas"}, {first:"cantaloupes", second:"dates"}]
console.log(createPairsFrom(["ants", "birds", "cats"])); // [{first:"ants", second:"birds", third:"cats"}]

Similarly to shuffle, we can make this function generic as it doesn't matter what the array element types are, as long as we have an array to work with.

Refactoring to generics gives us the following:

type Pair<T> = { first: T; second: T; third?: T };

function createPairsFrom<T>(items: T[]): Pair<T>[] {
  // same function as before
}

To the Presses!

For the last part, we need to implement createMessage. We know it has to have the following type signature:

function createMessage(pairs: Pair<string>[]): string {}

We know the following rules.

  • When it's a double, we want the message to say, X meets with Y
  • When it's a triple, we want the message to say, X meets with Y and Z

Based on this, we need a way to map from Pair to the above string. So let's write that logic.

function createMessage(pairs: Pair<string>[]): string {
  const mapper = (p: Pair<string>) =>
    `${p.first} meets with ${p.second}${p.third ? ` and ${p.third}` : ""}`;

  pairs.map(mapper); // not returning anything yet; we'll join these lines next
}

From here, we can join the strings together using the \n (newline) character.

function createMessage(pairs: Pair<string>[]): string {
  const mapper = (p: Pair<string>) =>
    `${p.first} meets with ${p.second}${p.third ? ` and ${p.third}` : ""}`;

  return pairs.map(mapper).join("\n");
}

All Coming Together

With the implementation of createMessage, we can execute our script by running deno run coffee.ts

deno run coffee.ts

"Superman meets with Wonder Woman
Batman meets with The Flash
Martian Manhunter meets with Aquaman
Static Shock meets with Green Lantern"

From here, we have a working proof of concept of our idea. We could run this manually on Mondays and then post this to our messaging channel (though you might want to switch the names out). If you wanted to be super fancy, you could have this scheduled as a cron job or through Windows Task Scheduler.

The main takeaway is that we've built something we didn't have before and can continue to refine and improve. If no one likes the idea, guess what? We only had a little time invested. If it takes off, then that's great; we can spend more time making it better.

Wrapping Up

In this post, we built the first version of our Random Coffee script using TypeScript and Deno. We focused on getting our tooling working and building out the business rules for shuffling and creating the pairs.

In the next post, we'll look at making this script smarter by having it retrieve a list of names dynamically from GitHub's API!

As always, you can find a full working version of this bot on my GitHub.

Cameron's Coaching Corner Volume 2

Welcome to Cameron's Coaching Corner, where we answer questions from readers about leadership, career, and software engineering.

In this week's post, we look at how Chase can balance writing the perfect code and shipping something.

My question: As a young developer, I notice that sometimes I get paralyzed by options. I want to write the perfect piece of code. This helps me in writing good code, but usually at the cost of efficiency, especially when I am faced with multiple good options. Sometimes I want to KNOW I'm gonna write the right thing before I'm writing it, when I may be better off with some trial and error.

  1. Are these common problems that you see people face?
  2. What rules of thumb or other pieces of advice do you have to avoid writing nothing instead of something as a result of seeking the ideal?
  3. How important is planning vs trial and error ("failing fast" as they say) to good software development flow?

Five Minutes at Five Guys - When Metrics Conflict with UX

In a recent post, I spoke about the flaw of using a single metric to tell the story and how Goodhart's Law tells us that once a measure becomes a target, it ceases to be a good measure.

Let's look at a real-world example with the popular fast food chain, Five Guys.

All I Wanted Was a Burger

Five Guys is known for making good burgers and delivering a mountain of piping hot fries as part of your order. Seriously, an order of small fries is a mountain of spuds. Five Guys makes their fries to order, so they're not sitting around under a heat lamp.

Yo Dawg, I heard you wanted fries, so I put fries in your fries

This approach works great when ordering in person, but what happens if you order online? The process is essentially the same: the crew works on the burger, but they won't start the fries until you're at the restaurant, guaranteeing that you always get fresh-made fries.

At this point, it's clear that receiving a mountain of hot, cooked-to-order fries is part of the experience and what customers expect, right?

Cameron's Coaching Corner - Mentoring an Intern

Welcome to Cameron's Coaching Corner, where we answer questions from readers about leadership, career, and software engineering.

In this week's post, we look at how test123 can improve the mentoring experience for their new intern.

I recently had an intern join my team and I'm going to be his mentor. I've had interns in the past, but this one doesn't understand any fundamentals and struggles with everything.

My question to you is this, how can I help him? He doesn’t know HTML/CSS/JS, so I’m trying to teach him those, but it’s taking away a lot of time. I suggested for him to watch some videos and then we can sync twice a day to go over the topics and discuss them further.

My issue: I don’t want to just say “go watch videos.” Bc, that’s not the best way to learn - I want him to dive into the code and try things and break, that’s how I learned at least.

How do you think I should handle this? I wanna be a good mentor and I want him to learn and grow. I don’t wanna fail the kid bc I don’t know the proper way to mentor.

The Humble Function - Foundation to Functional Programming

When learning about functional programming, you won't go far before you run into the concept of a function. But we're not talking about syntax or keywords in a language; we're talking about functions in the mathematical sense.

A function is a mapping between two sets such that for every element in the first set, it's mapped to a single element in the second set.

Since a set is a collection of elements, this is similar to a type where values are part of that type. With this understanding, we can modify our definition of a function to be:

A function is a mapping between two types such that for every value in the first type, it's mapped to a single value in the second type.
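
Since the code examples later in this post are in TypeScript, here's a minimal sketch of that definition in code (the isEven name is just for illustration): a function's type annotation spells out exactly which type it maps from and which type it maps to.

// A mapping from the type number to the type boolean:
// every number value maps to exactly one boolean value.
const isEven: (input: number) => boolean = (input) => input % 2 === 0;

isEven(4); // true
isEven(7); // false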

So What Does a Function Look Like?

Before diving into code, let's build up our understanding of functions more. When drawing out the function, we can model it like this.

Generic mapping from A to B

The first type (A) is a circle where each possible value is listed, using ... to denote a pattern. The arrows map from each value of A to a value in B.

With this example, we know we have a function if the mapping satisfies the following rule:

Every element on the left maps to a single element on the right.

This rule seems easy enough to follow, but let's look at a mapping where this rule doesn't hold.

Functional Heartbreak

Let's say that we needed to write some code that could take a given month and return the number of days it has. Given this, here's what the mapping would look like.

Mapping from month name to days in month

To check if we have a function, we need to see if there's any element on the left with two or more arrows coming out.

In this case, February is breaking our rule because it could map to 28 or 29, depending on if it's a leap year. Since there isn't a parameter to denote if it's a leap year, our mapping isn't consistent and can't be a function.

One way to fix this would be to change our type on the left from MonthName to MonthName and year. Making this change gives us this new mapping.

Month and year mapping to days in month
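
To make the fix concrete, here's a rough TypeScript sketch of the corrected mapping (the MonthName type and isLeapYear helper are illustrative names, not something from the diagrams):

type MonthName =
  | "January" | "February" | "March" | "April"
  | "May" | "June" | "July" | "August"
  | "September" | "October" | "November" | "December";

function isLeapYear(year: number): boolean {
  return (year % 4 === 0 && year % 100 !== 0) || year % 400 === 0;
}

// Every (month, year) pair maps to exactly one number,
// so this mapping is a function.
function daysInMonth(month: MonthName, year: number): number {
  switch (month) {
    case "February":
      return isLeapYear(year) ? 29 : 28;
    case "April":
    case "June":
    case "September":
    case "November":
      return 30;
    default:
      return 31;
  }
}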

Hip to Be Square

Let's look at a mapping that is a function, the square function.

Square mapping from number to number

Does every value on the left map to a single value on the right?

Yep, every value does. In fact, there are some values on the left that map to the same value on the right, which isn't a problem.

If we wanted to, we could restrict the type on the right from number to non-negative number, but there's no harm in having it be wider than needed.

Kinds of Functions

With this understanding of functions, let's talk about the two kinds of functions we can write and how they interact with each other.

Pure Functions

First, we have the pure function. These functions depend wholly on their inputs and they do not interact with outside state. For example, pure functions won't interact with databases, file systems, random generation, or time.

Pure functions are great because they're easy to test, compose well with other functions, and don't modify state. Another advantage is that a call to a pure function can be replaced with its result (in other words, you could replace a call to square(3) with its result, 9, and the program behaves the same). This is known as referential transparency, and it's a great tool for troubleshooting an application.

The main downside to pure functions is that, since they don't talk to other systems (including input/output), it's impossible to write a useful program with them alone.

Impure Functions

Impure functions, on the other hand, focus on interacting with outside state. These functions will call to the database or other systems, get the time, and generate random data as needed.

They allow us to write useful programs because they interact with input/output; however, the trade-off is that they can be harder to test, they don't compose, and since they modify state, you may not be able to run them multiple times and get the same result.

One way to identify an impure function is by looking at its type signature. For example, a function that takes inputs but returns void has to be modifying state or talking to another system; otherwise, why would you call it? Another telltale signature is a function that takes no inputs but returns a value (like readLine() from Node.js): where did that value come from? Short of returning a constant, it had to get it from somewhere outside the function.
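
As a rough illustration, here are those two telltale signatures in TypeScript (both functions are hypothetical, declared without bodies just to show the types):

// Takes input but returns void: the only reason to call it is a side effect
// (writing to a screen, a database, etc.), so it must be impure.
declare function saveUser(user: { name: string }): void;

// Takes no input yet returns a value: that value has to come from somewhere
// outside the function (console, clock, randomness), so it's impure too.
declare function readLine(): string;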

Building an Application

Building an application requires both pure and impure functions, but how do we leverage the best of both worlds?

When I think about software, I think about how data flows through the system. For example, if someone is using a console application, they're inputting their commands to the terminal, which in turn converts them to commands to run, where the output is displayed on the screen.

As such, an application is made of three kinds of functions.

  • Boundary functions - These are responsible for getting input and output. They should have zero business rules (or as few as possible), as they are impure functions.
  • Business functions - These implement the business-specific rules that need to be run on the given inputs. As these are typically the most important part of an application, they're designed as pure functions.
  • Workflow functions - These combine boundary functions and business functions to build something useful. Since they use impure functions, workflows are also impure, but I often use them as the composition root for the application.

Effervescent Applications with Fizz Buzz

To demonstrate, let's build a version of FizzBuzz that uses this mindset.

For the problem, we have the following requirements.

  • If the number is divisible by 3, print "Fizz".
  • If the number is divisible by 5, print "Buzz".
  • If the number is divisible by both 3 and 5, print "FizzBuzz".
  • If the number isn't divisible by any of these, then print the number.

Given that we're building a console application, we will need to support getting input via the console and printing to the console.

Let's go ahead and build up our boundary functions.

// Impure function that allows us to get number from user
function getInput(): number {
  // using prompt-sync https://github.com/heapwolf/prompt-sync
  const prompt = require("prompt-sync")({ sigint: true });
  const response = prompt("What number to calculate FizzBuzz to?");
  if (!+response || +response <= 1) {
    console.log(
      "Invalid response, please enter a positive number greater than 1"
    );
    return getInput();
  }
  return +response;
}

// Function that wraps console.log for printing
function printOutput(content: string): void {
  console.log(content);
}

At this point, we have a way of getting a number via getInput and a way to print a string via printOutput. printOutput is a tiny function with no business rules whatsoever. getInput, however, has some business rules around validation, but we'll see later how to refactor this.

For now, let's leave these two and look into creating our business rule functions.

// Business rules for FizzBuzz
function calculateFizzBuzz(input: number): string {
  if (input % 3 == 0 && input % 5 == 0) {
    return "FizzBuzz";
  }
  if (input % 3 == 0) {
    return "Fizz";
  }
  if (input % 5 == 0) {
    return "Buzz";
  }
  return `${input}`;
}

// Helper function to create a range of numbers from [1..end]
function createRangeFromOneTo(end: number): number[] {
  if (end < 1) {
    return [];
  }
  return Array.from(Array(end).keys()).map((x) => x + 1);
}

With calculateFizzBuzz defined, we could write unit tests to ensure the correctness of the behavior. We could also create a mapping to double-check that we have a function.

Now, let's revisit our getInput function. We've got some business rules that deal with validation (e.g. the input must be a number and greater than 1). Given that this is a light business rule, we could leave it here; however, testing this becomes harder because we don't have a way to ensure that the validation works as expected.

To solve this problem, we could extract the validation logic to its own pure function and update getInput to use the new function.

function isInputValid(input: string): boolean {
  if (!+input) {
    return false;
  }
  return +input > 1;
}

function getInput(): number {
  // using prompt-sync https://github.com/heapwolf/prompt-sync
  const prompt = require("prompt-sync")({ sigint: true });
  const response = prompt("What number to calculate FizzBuzz to?");
  if (!isInputValid(response)) {
    console.log(
      "Invalid response, please enter a positive number greater than 1"
    );
    return getInput();
  }
  return +response;
}
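
Now that the validation logic is a pure function, we can check it in isolation. Here's a quick sketch using Node's built-in assert module (any test runner would work just as well):

import assert from "node:assert";

assert.strictEqual(isInputValid("10"), true); // a number greater than 1
assert.strictEqual(isInputValid("1"), false); // must be greater than 1
assert.strictEqual(isInputValid("abc"), false); // not a number at all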

Nice! With this in place, we can go ahead and implement our last function, the FizzBuzz workflow.

function runFizzBuzzWorkflow(): void {
  // Data coming in
  const maximumNumber = getInput();

  // Calculating results
  const results = createRangeFromOneTo(maximumNumber).map((x) =>
    calculateFizzBuzz(x)
  );

  // Print Results
  results.forEach((x) => printOutput(x));
}

// example invocation
runFizzBuzzWorkflow();

This is a straightforward implementation: we get the maximumNumber to calculate, create an array of numbers from 1 to maximumNumber, map each of those to its FizzBuzz representation, and then print them to the screen.

Let's go one step further. In our example, we assumed that the input and output were coming from the console, but what if we needed to read from and write to a file instead?

We could move the boundary functions to be parameters of runFizzBuzzWorkflow. Let's take a look at what that would give us.

function runFizzBuzzWorkflow(
  readInput: () => number,
  writeOutput: (output: string) => void
): void {
  // Data coming in
  const maximumNumber = readInput();

  // Calculating results
  const results = createRangeFromOneTo(maximumNumber).map((x) =>
    calculateFizzBuzz(x)
  );

  // Print Results
  results.forEach((x) => writeOutput(x));
}

// example invocations
runFizzBuzzWorkflow(getInput, printOutput); // using console read/write
runFizzBuzzWorkflow(() => 42, printOutput); // using hardcoded input with console log

With this change, we can now swap out how we read input or write output by creating new functions with the right type signatures. This also makes testing workflow functions easy because you can stub in your own implementations (no need for mocking frameworks).
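
For example, here's a rough sketch of such a test: stub the input boundary with a fixed value, capture everything the workflow writes, and assert on the result (plain assertions, no test framework assumed):

import assert from "node:assert";

// Stub the input boundary and capture the output boundary.
const captured: string[] = [];
runFizzBuzzWorkflow(
  () => 5,
  (output: string) => {
    captured.push(output);
  }
);

assert.deepStrictEqual(captured, ["1", "2", "Fizz", "4", "Buzz"]);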

If you understand the power of swapping out your boundaries, then you also understand other modern architectures, like Ports and Adapters, as they follow a similar principle.

Wrapping Up

In this post, we looked at what a function is, how it relates to types, and how to tell if a mapping is a function. From there, we covered the differences between pure and impure functions and how you need both to build any useful application. Finally, we wrapped up by implementing the FizzBuzz problem using this approach.

Five Tips to Improve Your Coaching Conversations

As a leader, you're responsible for coaching and growing your team, helping them be successful. To do this, you need to set the tone and example of the behaviors you want the team to have.

No matter how good of a team you have or how good of a leader you are, you will have to have a conversation about performance. Whether it's delivery, professional skills, or technology skills, you will have a moment where you need someone to change their behaviors.

Having these types of conversations can be scary, no matter how much experience in leadership you might have. However, when done correctly, these moments can greatly impact the other person, helping them grow tremendously.

On the other hand, poor coaching will do the absolute opposite. The other person can become confused or angry. They could even shut down and disengage altogether, making coaching them that much harder.

So what does good coaching look like? I can't guarantee that these steps will solve all your woes; however, I do guarantee that following these tips will increase the odds of the other person listening and at least considering your feedback. Remember, you want to do this coaching because some behavior has caught your attention, and you want to correct it. If the other person doesn't listen or doesn't want to engage, then you simply can't make that happen.

Step 1: Be Timely

The sooner you can have this conversation, the more effective it will be. Remember, the point of feedback is to let the other person know how they're doing so they can correct course if needed. They can't do this if the behavior happened three weeks ago because, by then, the feedback is disconnected from the moment.

Imagine if you had a test suite that only told you about failing tests a week after the build started. There's no way you could make the right decisions, so why would we think that's the case for behavior?

If I've noticed a pattern and feel that it's time to coach, I will get that feedback to them that week, if not the next day.

Step 2: Be Specific

When giving this feedback, the behavior may be obvious to you but not even a thought for the other person. Because we can't control what the other person is thinking, we need to set the context for the feedback so they know what you're talking about.

Let's take my son, for example. He's particular about his food, so when he says that "dinner was awesome," that makes me feel great as I'm happy he enjoyed dinner. But I have no clue what he actually liked or why he thought that. Was it the food? The way it was served? The fact that we had a picnic? No idea, so I'd respond with, "What made it stand out to you?". When he mentions that he liked the pizza, I go "Ah! He enjoyed the food, nice!"

Providing this specific context is crucial for the other person because it lets them know what caught your attention and drastically reduces the confusion in the conversation. For those who like more concrete details, sharing links to chats, emails, or other artifacts showing the behavior can be helpful because you can use them as the foundation for the conversation.

Step 3: Explain Why

There's a reason that you're having this conversation. There's something that's important to you, and, in your opinion, it wasn't important to the other person. We've got to explain why it's important and why you're commenting on it.

I never want to remove someone's autonomy, as I like to set the direction and let the team blaze a path, with me guiding to make sure we don't get lost in the wilderness. However, for someone to have autonomy, they need to understand the goals and the reasoning behind them.

If they don't have this knowledge, then it's that much harder for them to make the right decisions. Ensuring they know the why is a leader's responsibility.

Step 4: Seek To Understand

You're working with a team of professionals. A professional makes the right decisions based on their knowledge and experience. If the person is making mistakes, we need to understand why they made the choice that they did.

For example, let's say I'm coaching someone who's consistently missing meetings. I'm frustrated that they're unresponsive and that they don't care. The issue here is that it's okay for me to feel frustrated, but I can't make the judgment that they don't care. I don't know that, and it causes more problems than it solves. I won't vent my feelings to the other person because even though it'd make me feel better, it doesn't help the situation.

A better approach would be to understand why they're missing meetings. Is it something outside of work? Could it be that they don't know why they need to attend? What if they didn't receive an invite? In any of the above cases, there was a solid reason why they didn't attend, and I wouldn't have known that if I had not opened the conversation.

Don't assume malice or apathy when something happens. We are humans first, which means we're going to make mistakes.

Step 5: Work Together

The entire point of coaching is to help the person improve, and we also don't want to take away their autonomy. To make this happen, we need to work with the other person to come up with ideas that can help improve the situation. It's not any one person's responsibility, but it's your responsibility to brainstorm with them and help guide them down the correct path.

The key here is to have an open mind and really consider all ideas. One of my favorite leadership books, First, Break All The Rules, talks about how great leaders work with their people to have their strengths shine and to make their weaknesses a non-issue.

In the missing meeting example, I found out that the issue was that they didn't know why they needed to attend the meeting, so they skipped it in order to focus on their development work. Working together, I changed invites to include the reason for attending and encouraged them to chat with me whenever they didn't know why they needed to be there, so we could figure it out.

Case Study - Bringing It Together

In this example, let's explore a scenario where we would need to do some coaching.

While reviewing a pull request from Bruce, you see a comment from Alvin, a member of your team, where they were particularly critical of the work. Reading through the pull request, you see Alvin has left more harsh comments about Bruce's work.

Talking with Bruce, they mention that they don't work well with Alvin as it seems like he's always critical of Bruce.

Based on this scenario, we know that Alvin has left some harsh words for Bruce, which makes them less likely to work together. If Alvin keeps this behavior up with other people, this will impact others wanting to work with him, reducing his effectiveness.

After collecting your thoughts, you reach out to Alvin to see if he's got a few minutes to chat about Bruce's pull request.

"Hey Alvin, I noticed you left some pretty harsh comments that in Bruce's pull request. For example, saying that 'this code is convoluted, rewrite it'. Even if that was the case, it's not clear why you think that. I'm more concerned with how the messaging came across because we work with others to accomplish our tasks, and that communication style can make people not want to work with us.

I don't believe you intend to alienate others, so can you walk me through your thought process here and why you thought this was the right approach?"

In this example, we've already hit four out of the five tips. Our feedback was timely and specific to the problem. We included why it caught our attention and started with an open-ended question for the conversation about the behavior.

In the follow-up conversation, Alvin mentions that he was having a rough day, particularly outside of work, and that he wasn't entirely focused on his tone. Given that this is the first time Alvin has done this, we want to focus on fixing the issue before it becomes a pattern.

"I understand that it can be hard to focus on your tone when you're having a rough time, however, we can't speak to others this way. I don't want this to become a pattern, so what are some things that we could do instead when we're not in the right mental head space for code reviews?"

At this point, we've acknowledged what was said and reaffirmed expectations. Using another open-ended question, we can start brainstorming things that we could do to help improve Alvin's tone. Since we're opening the conversation, Alvin is also giving feedback on what might work for him and what wouldn't work.

Wrapping Up

Giving critical feedback to someone is not the easiest thing to do; however, it can have the most impact for them. To help frame the conversation, our coaching should:

  • Be Timely
  • Be Specific
  • Explain the Why
  • Seek to Understand
  • Be Collaborative

Building Relationships Through One-on-Ones

As a leader, one of your goals is to build a strong, high-performing team. To do this, you'll need to establish and develop relationships within the team. One approach I've found helpful for building and sustaining these relationships is the one-on-one.

When most people think of one-on-ones, they typically think of some scheduled time, every so often, where they talk about work concerns or whatever is on the leader's mind. Some might find one-on-ones a waste of time and skip them.

Let's face it, we've all had bad one-on-ones where the conversation was forced and stiff. Or it felt like the other person wasn't listening or didn't care about what was being discussed. If enough one-on-ones go down this route, it's no surprise that people don't want to have these conversations.

So, how do we improve the situation? In this post, I'm going to show you three things you can do to improve the one-on-ones you're having with the team by making sure that they're being heard, relationships are being built, and, just maybe, you might even get to know them better.

It's Not a Status Update

A common mistake is treating one-on-ones as status updates for work or projects. Even though it might be tempting to get an update (especially on important projects), remember this time is for the other person to talk with you about what's on their mind. They can't do this if you're asking for updates on work. If you're leveraging stand-ups or a task board, then you should be able to get updates from there. If this isn't sufficient, ask for updates outside this conversation. The one-on-one is where you give the other person your full attention.

Despite your best intent, this can be a tough habit to break. To help get out of this mindset, start time-boxing the updates to be a set period of time (e.g., ten minutes) with the goal of making this period shorter in subsequent one-on-ones.

Another technique I use is resetting, where I remind my teammate of the purpose and goal of the conversation. This is a gentle nudge in the right direction and reinforces that this time is for them, not for whatever is on my mind.

Don't Get Distracted

In the world of remote work, it's easy to get distracted by chat notifications or emails. You're on the call, you see a notification at the bottom of your screen, and before you realize it, you've read it and are already thinking about a response. This may be great for getting things done; however, for the other person, it's clear that you've stopped paying attention. So what's more important, the notification or the other person?

pedestrians looking distracted in downtown
Credit to Matt Quinn via Unsplash

To help reinforce the focus, I put myself in Do Not Disturb mode, which will hide notifications from me until the meeting ends. If the issue is truly important, then someone would give me a call, in which case, I can excuse myself from the one-on-one to see what's up. In this rare case, I will reschedule our conversation as it's essential that we meet.

Allow Them to Set the Agenda

Another mistake I see leaders make is that they'll come to the one-on-one with a list of topics they want to discuss for the day. This can be particularly true if you're coaching this person and you want to provide concrete feedback. However, remember, the goal is to build and strengthen the relationship, and you can't do that if you're always setting the agenda for the conversations.

I find it amazing that you can learn quite a bit about the other person based on what they bring up. For example, if they're speaking about a conflict with another person, that lets me know that they're aware of relationships and how they're being perceived. On the other hand, if they're talking about a project and the concerns they have, then they're thinking outside of a task and are thinking at a higher level.

In one case, I was in a one-on-one with an engineer where they brought up their concern about supporting an API as they didn't know much about it. My first impression was that they didn't know how to read the code, but digging in, it turned out that they didn't see how the API fit into the bigger picture of the system. Learning this was a great fact check because I thought it was a technical issue, but in reality, it was a system question, which changed my perspective on them.

(Bonus) - Seeding a Conversation

I mentioned that the other person should own the agenda, but sometimes, they may not have much on their mind or a topic that sticks out to them. For these cases, I come prepared with a list of questions to help start the conversation.

Credit to Markus Spiske via Unsplash

In Warren Berger's The Book of Beautiful Questions, he talks about the power of open-ended questions and how they can help people be more open. So instead of asking, "How's the project coming along?", which can be answered in a binary fashion, we could instead ask, "What's one thing about the project that stands out to you?", which could give us a richer response and allow you to learn more.

For those looking for questions, here's an excerpt of questions I've used to help seed a conversation:

  • What's one thing about the current project that turned out to be harder than you thought?
  • What was your biggest win last week, and why does it stand out?
  • What's something that happened in the past week that you want to learn more about?
  • I know you've been working more with this past week, talk to me about that experience
  • What's something that you're looking forward to?

Wrapping Up

To build great teams, you will need to foster relationships with the team and its individual members. One-on-ones can help build these relationships, but only if you treat them that way. By allowing your teammate to set the agenda, giving them your full attention, and cutting out project updates, you can improve the quality of your conversations and get to know your people better.

Building Relationships with Intent

As a leader, one of your superpowers is how you can connect your team with someone else. For example, if your team is struggling to work with an API and you know someone who's made recent changes or is a Subject Matter Expert, you can get your team unblocked and moving faster.

I've worked with leaders like this, and it's amazing how fast and helpful it is to get unstuck quickly. Don't get me wrong, sometimes it's the right strategy to "burn the time" to learn, but it's helpful to know someone who can get you unstuck if you need it.

With one leader, it seemed like they always knew a guy, no matter the topic, and I was amazed at how they did it. So, like a lifelong learner, I asked, and their answer was networking.

large group of people networking
Credit to ProductSchool via Unsplash

Ugh

If you're like me, you hear networking and think about dozens (if not hundreds) of people milling around, introducing themselves, and sharing business cards. Don't get me wrong, that approach can work for some people, but to me, that sounds exhausting. Instead of a hummingbird, going from flower to flower, I'm more like a bear. I just want to sit down and eat my jar of honey.

So what am I supposed to do? How am I supposed to network if I don't like large groups?

Given time, you can work on becoming more comfortable in large groups. However, why fight your natural tendencies and what you're good at?

For me, it's small groups and one-on-one conversations. As such, that's my approach to networking. Though it takes a bit longer, I find that I build stronger relationships with those people, and in turn, can be just as successful. I like to think about one-on-ones as lazy river conversations as I never know where it will take us.

man sitting in inner tube
Credit to Kiara Kulikova via Unsplash

One Cup of Coffee

At a previous company, we used Slack and, as such, had an integration called Random Coffee that would pair the members of a channel up to get together for the week. Such a simple idea, but so powerful when you now have a built-in excuse to chat with someone.

Coffee Chat
Credit to Priscilla Du Preez via Unsplash

After a couple of weeks of getting to know people, I started learning what others did, what interested them, and who to go to about specific issues. Combined with asking, "How did you know that?", I found that I could quickly fill gaps in my knowledge.

But something else happened. Once I knew the person, I didn't see them as just a name in the chat anymore; I saw them as a person. I'd find myself saying, "Oh, it's Chris, and Chris is cool, so I'll help him out," instead of thinking, "Ugh, another thing to do." In a way, these conversations humanized those I worked with, and I found myself caring more about them.

Caring By Knowing

To me, this is the most important thing about relationship building and networking. Deep down, I want to care about those I work with because I want them to be successful. I can't help them be successful if I don't know them both as a colleague and as a person.

Having one-on-ones is how I know my people and how I continue building care. It might be as simple as knowing what types of things they like to work on or what they did over the weekend. However, having these conversations helps both of us open up, and I get to know them so much better. Once I know them, I can guide and direct them better, looking for opportunities I wouldn't have thought of before.

How Do I Start?

If you want to start this for your own company, you don't have to have Slack to make this happen. The important thing is getting buy-in from others and explaining the why behind the exercise.

Once you have buy-in, you can start low-tech by using an Excel sheet to randomize the list of names. This isn't the most robust solution, but it's a start, and you can iterate as you figure out the timing, the frequency, and the other logistics of the setup.

Once you've got something in motion, you can always work on automating the process later. Don't let a perfect solution stop you from starting with a good solution.

What I've found successful is having either a weekly or fortnightly scheduled message in our main channel that assigns the groups. From there, participants are encouraged to share something they've learned about their counterparts during their conversation. To help make sure that people meet, scheduling a set time during the week for all the groups can be helpful as it removes another barrier (e.g., if you know that you'll have coffee at 10:30 am on Tuesdays, you learn to expect it).

If you have a group that is just starting, it might be helpful to provide some starting questions to help jump-start the conversation. A good list of questions can be found in one of my gists.

Wrapping Up

To be a successful leader, you must cultivate and grow relationships with those you work with. Not only does it help your team be successful, but it allows you to have a richer experience with your work and helps solidify that we're all working together.

Better Domain Modeling with Discriminated Unions

When I think about software, I like designing it so that doing the right thing is easy and doing the wrong thing is impossible (or at least very hard). This approach is typically called falling into the pit of success.

Having a well-defined domain model can prevent many mistakes from happening just because the code literally won't let it happen (either through a compilation error or other mechanisms).

I'm a proponent of functional programming as it allows us to model software in a better way that can reduce the number of errors we make.

Let's look at one of my favorite techniques: discriminated unions.

Motivation

In the GitHub API, there's an endpoint that allows you to get the events that have occurred for a pull request.

Let's take a look at the example response in the docs.

[
  {
    "id": 6430295168,
    "url": "https://api.github.com/repos/github/roadmap/issues/events/6430295168",
    "event": "locked",
    "commit_id": null,
    "commit_url": null,
    "created_at": "2022-04-13T20:49:13Z",
    "lock_reason": null
  },
  {
    "id": 6430296748,
    "url": "https://api.github.com/repos/github/roadmap/issues/events/6430296748",
    "event": "labeled",
    "commit_id": null,
    "commit_url": null,
    "created_at": "2022-04-13T20:49:34Z",
    "label": {
      "name": "beta",
      "color": "99dd88"
    }
  },
  {
    "id": 6635165802,
    "url": "https://api.github.com/repos/github/roadmap/issues/events/6635165802",
    "event": "renamed",
    "commit_id": null,
    "commit_url": null,
    "created_at": "2022-05-18T19:29:01Z",
    "rename": {
      "from": "Secret scanning: dry-runs for enterprise-level custom patterns (cloud)",
      "to": "Secret scanning: dry-runs for enterprise-level custom patterns"
    }
  }
]

Based on the docs, it seems like we'd expect to get back an array of events; let's call this type TimelineEvent[].

Let's go ahead and define the TimelineEvent type. One approach is to start copying the fields from the events in the array. By doing this, we would get the following.

type TimelineEvent = {
  id: number;
  url: string;
  event: string;
  commit_id?: string;
  commit_url?: string;
  created_at: string;
  lock_reason?: string;
  label?: {
    name: string;
    color: string;
  };
  rename?: {
    from: string;
    to: string;
  };
};

The Problem

This definition will work, as it covers all the data. However, the problem with this approach is that lock_reason, label, and rename had to be defined as optional since they're only sometimes present (for example, lock_reason isn't specified for a labeled event).

Let's say that we wanted to write a function that printed data about a TimelineEvent. We would have to write something like the following:

function printData(event: TimelineEvent) {
  if (event.event === "labeled") {
    console.log(event.label!.name); // note the ! here, to tell TypeScript that I know it'll have a value
  } else if (event.event == "locked") {
    console.log(event.lock_reason);
  } else {
    console.log(event.rename!.from); // note the ! here, to tell Typescript that I know it'll have a value
  }
}

The main problem is that we have to remember that the labeled event has a label property but not the lock_reason property. It might not be a big deal right now, but given that the GitHub API has over 40 event types, remembering which properties belong to which event becomes a real challenge.

The pattern here is that we have a type TimelineEvent that can have different, separate shapes, and we need a type that can represent all the shapes.

The Solution

One of the cool things about TypeScript is that it has a union operator (|) that allows you to define a type as being one of several other types.

Let's refactor our TimelineEvent model to use the union operator.

First, we need to define the different events as their own types.

type LockedEvent = {
  id: number;
  url: string;
  event: "locked"; // note the hardcoded value for event
  commit_id?: string;
  commit_url?: string;
  created_at: string;
  lock_reason?: string;
};

type LabeledEvent = {
  id: number;
  url: string;
  event: "labeled"; // note the hardcoded value for event
  commit_id?: string;
  commit_url?: string;
  created_at: string;
  label: {
    name: string;
    color: string;
  };
};

type RenamedEvent = {
  id: number;
  url: string;
  event: "renamed"; // note the hardcoded value for event
  commit_id?: string;
  commit_url?: string;
  created_at: string;
  rename: {
    from: string;
    to: string;
  };
};

At this point, we have three types, one for each specific event. A LockedEvent has no knowledge of a label property and a RenamedEvent has no knowledge of a lock_reason property.

Next, we can update our definition of TimelineEvent to use the union operator as so.

type TimelineEvent = LockedEvent | LabeledEvent | RenamedEvent;

This would be read as A TimelineEvent can either be a LockedEvent or a LabeledEvent or a RenamedEvent.

With this new definition, let's rewrite the printData function.

function printData(event: TimelineEvent) {
  if (event.event == "labeled") {
    console.log(event.label.name); // note that we no longer need !
  } else if (event.event == "locked") {
    console.log(event.lock_reason);
  } else {
    console.log(event.rename.to); // note that we no longer need !
  }
}

Not only do we no longer have to use the ! operator to bypass type safety, but we also get better autocomplete (note that lock_reason and rename don't appear when working with a labeled event).

Better autocomplete
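
The same narrowing works with a switch statement, and as a bonus, we can ask the compiler to flag any event type we forget to handle. Here's a sketch of that pattern (the never-typed default case is a common TypeScript idiom, not something the union requires):

function printEventData(event: TimelineEvent) {
  switch (event.event) {
    case "labeled":
      console.log(event.label.name);
      break;
    case "locked":
      console.log(event.lock_reason);
      break;
    case "renamed":
      console.log(event.rename.to);
      break;
    default: {
      // If a new event type is added to TimelineEvent and not handled above,
      // this assignment stops compiling.
      const unhandled: never = event;
      throw new Error(`Unhandled event: ${unhandled}`);
    }
  }
}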

Deeper Dive

At a general level, what we've modeled is a sum type, which is great for when you have a type that can take on a finite number of differing shapes.

Sum types are implemented as either tagged unions or untagged unions. TypeScript has untagged unions; other languages, like Haskell and F#, use tagged unions. Let's see what the same implementation would look like in F#.

// specific type definitions omitted since they're
// similar to typescript definition
// ....
type TimelineEvent = Locked of LockedEvent | Labeled of LabeledEvent | Renamed of RenamedEvent

let printData e =
    match e with
    | Locked l -> printf "%s" l.lock_reason
    | Labeled l -> printf "%s" l.label.name
    | Renamed r -> printf "%s" r.rename.``to`` // the `` is needed here as to is a reserved word in F#

A tagged union is when each shape has a specific constructor. So in the F# version, Locked is the tag for the LockedEvent, Labeled is the tag for the LabeledEvent, and so on. In the TypeScript example, we got away without explicit tags because the event property exists on every TimelineEvent and has a different literal value for each shape.

If that weren't true, then we would have had to add a field to TimelineEvent (typically called kind or tag) to help us differentiate between the various shapes.
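
For example, if our shapes had no shared, distinguishing property, a hand-rolled tag would look something like this (the kind values here are hypothetical):

type Circle = { kind: "circle"; radius: number };
type Square = { kind: "square"; sideLength: number };

// The explicit kind field is what lets TypeScript tell the shapes apart.
type Shape = Circle | Square;

function area(shape: Shape): number {
  return shape.kind === "circle"
    ? Math.PI * shape.radius ** 2
    : shape.sideLength ** 2;
}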

Wrapping Up

When defining domain models where the model can have different shapes, you can use a sum type to define the model.

Keeping Track - My Task Tracking Approach

When it comes to keeping track of things to do, I recall an ill-fated attempt at using a planner. My middle school introduced these planners for the students that you had to use to keep track of dates (and, weirdly enough, as a hall pass to go to the bathroom).

Looking back, the intent was to have the students be more organized, but that wasn't what I learned. I found it cumbersome and a pain to keep track of. Also, you had to pay to replace it if it was lost or stolen.

What I learned to do instead was to keep track of everything I needed to do in my memory, and if I forgot, well, I had to pay the penalty.

I recall seeing my peers in high school and college be much more organized, and they made it so simple. Just color code these things, add these other things to a book and highlight these things.

I didn't realize that my peers had developed a system for studying and keeping track of what they needed to do. Since I didn't know what it was called and felt awkward admitting I didn't know what it was, I would continue relying on my memory to get things done. However, this approach doesn't scale and is prone to having tasks drop from the list.

When I started working at my second professional job, I found my boss to be organized and meticulous, and he never let anything slip. I learned a ton from him about process improvement and was introduced to a Kanban board for the first time.

As an engineer, I would use a version of his approach for years, but when I got into leadership, I felt that I needed a better system. As an individual contributor, I could rely on the task board for what I needed to do, but that approach doesn't work for a leader because not all of your tasks are timely or fit in a neat Jira ticket.

Why a System?

Why do we need a system at all? Isn't memory good enough? The problem is that the human mind is fantastic at problem-solving but isn't great when it comes to recollection. In fact, multiple studies (like this one or this one) have shown that the more stressed you are, the worse your memory can become.

With this context, you need to have some system to get the tasks out of your head and stored elsewhere. Whether that's physical sticky notes in your office, a notebook that you use, or some other tooling, I don't particularly care, but you do need something.

My Approach

I'm loosely inspired by the Getting Things Done approach to task completion, which I've implemented as a Trello board. Having an online tool works for me because I can access it anywhere on my phone (no need to carry a notebook or other materials).

Another side effect of having an online tool is that at any point I have an idea or a task that I need to do, I can add it to my Trello board in two clicks. No more worries about remembering to add the task when I'm back home or in the office, which allows me to not stress about it.

Work Intake Process

On my Trello board (which you can copy a template from here), all tasks end up in the first column, called Inbox. The inbox is the landing spot for anything and everything. Throughout the day, I will process the list and move it to the appropriate column.

  • Is it a task that I can knock out in 5 minutes or less? Just do it!
  • Is it a task that will take more than 5 minutes? Then I move it into the To Do column
  • Is it a task that I might be interested in? Is it a bigger task that I need to think more about? Then that goes into the Some Day column
  • If the task is no longer needed, then it gets deleted.

Deciding What To Do Next

Once the inbox is emptied, I look at the items in the To Do column and pick the most important one. However, determining the most important one is not always easy.

For this, I leverage the Eisenhower Matrix approach.

Named after Dwight D. Eisenhower, the idea is that we have two axes, one labeled Important and the other labeled Urgent. With these labels, tasks fall into four buckets:

  • Urgent and Important - (e.g., production broken, everything is on fire)
  • Urgent and Not Important - (e.g., last minute request, something that needs to be done, but not necessarily by you)
  • Not Urgent and Important - (e.g., strategic work, things that need to get done, but not necessarily this moment)
  • Not Urgent and Not Important - (e.g., time wasters, delete these tasks)
Eisenhower Matrix with four quadrants: Urgent & Important, Urgent & Not Important, Not Urgent & Important, and Not Urgent & Not Important
(2023, March 7). In Wikipedia. https://en.wikipedia.org/wiki/Time_management

Dealing with Roadblocks

In an ideal world, you could take an item and run it to completion, but things aren't always that easy. You might need help from another person or are waiting for someone to do their part.

When this happens, I'll move the item to the Waiting column and pick up a new task as I don't like to be stalled.

However, I keep an eye on the number of items in flight as I've found that if I have more than three items in flight, I struggle with making progress and spend my time context-switching between the items instead of completing work. It can be challenging if the tasks are wholly unrelated (development tasks, writing, and reviewing pull requests) as the cost of regaining the context feels higher than if the tasks are related (e.g., reviewing multiple pull requests for the same repository).

Getting Things Done

As items get completed, I add them to the Done column for the week. To help keep track of what I got done for the week, I typically call my Done column the week it spans (e.g., Apr 17-23, 2023). Once the week ends, I can refer back to the column, see where I spent my time, and reflect if I made the right choices for the week.

Finally, I'll archive the list, create a new column for next week and repeat.

Telling the Story: The Pitfall of a Single Data Point

Let's say that you're sitting down to read a new book, and you come across the following:

The King's Knave Inn was but a short distance from the Alverston train depot, just outside the town proper. (excerpt from The Infernal Machine by John Lutz)

After reading this, a friend interrupts your reading and asks your thoughts on the book so far. What would you say?

Most likely, you'd respond that you need to read more, and it's still too early to decide if the book is good or not.

To honestly answer this question, you would need to read more of the book (ideally all of it) to get a full picture of the story.

When measuring an engineer's performance and effectiveness, why don't we take the same approach?

My experience has been that leaders look for one or more metrics to quantify a person. On the face of it, I understand why, as it's hard to compare people without numbers.

However, the mistake I see leaders make is what they're trying to measure. For example, do you measure the number of pull requests? What about the number of stories completed in a sprint? How about the number of bugs shipped to production? Something else entirely?

The problem is that even if you use all of the above (please don't do this), you're still not seeing the whole picture, but only bits and pieces. This would be like reading five chapters at random from a book and then giving an opinion.

The other problem with using metrics is that the measurement will cease to be effective as people will start gaming the system (see Goodhart's Law).

For example, if we measure effectiveness by the number of completed pull requests, then what stops someone from creating hundreds of single-line pull requests that don't accomplish anything?

On the other side, what about the engineer who reduced scope and time because they knew how to simplify the approach or came up with a more straightforward solution? This insight won't show up as a pull request or a completed story; however, this should be rewarded just the same.

To really determine how effective someone is, we need to look at things holistically, which can be done by examining how well someone does in these three areas:

  • Understanding the problem (e.g., why are we doing this?)
  • Understanding the system (e.g., how are we doing this?)
  • Understanding the people (e.g., whom are we doing this with?)

By looking into these areas, you will see what your team is good at and where they could use coaching, helping you be more effective. You might also realize that your team is doing things that aren't so obvious.

You can't write a report to generate these metrics. To understand this, you have to understand your team and how they work together. This involves paying attention, taking notes, and being engaged. Passive leaders will struggle if they use this approach.

Understanding the Problem (The Why)

To be successful, we first have to understand the problem being solved. Without this base knowledge, it's impossible to build the right solution or even ask the right questions about the problem at hand.

How comfortable are they within the problem domain? Do they know certain terminologies, our customers, the users, workflows, and expected behaviors?

Besides quizzing, there may not be an obvious way to measure this; however, here's my approach.

First, you can look at the questions that are being asked. Are they surface level or are they deep? You can see these questions through chats and meetings, comments on the stories or pull requests, and interactions with others.

Second, look at the solution they came up with. Did they design it with domain knowledge in mind? For example, are things named correctly? Did their solution take care of the main workflows? What about the edge cases?

Third, how are they handling support issues? Being on support is a quick way of learning a problem domain and system. As such, I'm looking at how much help they need and how they communicate with others.

By using this approach, you can get a good sense of how knowledgeable someone is in the problem domain without quizzing them.

Understanding the System (The How)

There's always a push to deliver more things, and in order to do that, we have to understand the current system, its limitations, what's easy vs. what's hard, and from these constraints, determine the correct path to take.

In addition, once the system is live, we need to support it. If we don't know the moving parts, what it interacts with, and how it's used, we're going to have a bad time.

Like understanding the problem, we can measure system knowledge without quizzing them. In particular, I've found pull request comments and code reviews to be insightful on someone's knowledge of the system.

For example, do they call out that there's already something in the system that does this new piece of functionality? Do they suggest taking a simpler approach with what we have? Do they propose a different solution altogether because the system has a limitation? All of these are indicators of someone's system knowledge.

Another way to gauge system knowledge is by looking at how the person handles support requests. If you can understand the problem, find the cause, and create a fix, then by definition, you have to have a solid understanding of the system.

Understanding the People (The Who)

When it comes to the third part of being effective, we have to measure how they work with those around them. Most people think engineering is a solitary line of work, and that can be true when it comes to the development phase.

However, in reality, engineers collaborate with others to design, develop, and iterate on a solution. As such, building these relationships is paramount to being successful.

If you want to go fast, go alone. If you want to go far, go together.

Measuring team cohesion can be difficult (it could be its own post); however, we can start simply by getting peer feedback on the person. We can also look at the communication between them and others through their comments, messages, or meetings.

Another way to measure this is through your company's recognition system. Whether it's an email or some other tool your company uses, you need to keep tabs on these recognitions, as you can use them as a talking point during 1:1s and review time.

Wrapping Up

So, how do we measure how effective someone is? We know that a single data point isn't sufficient and that if we limit ourselves to metrics, we can get a skewed sense of the person. To know, we have to take a holistic approach.

To accomplish this goal, I recommend measuring the following areas:

  • Understanding the problem (e.g., why are we doing this?)
  • Understanding the system (e.g., how are we doing this?)
  • Understanding the people (e.g., whom are we doing this with?)

In each of these areas, we can get a sense by observing the interactions they have, the questions they ask, the approaches they take, and how much others want to work with them.

Three Steps to Better Interviews

At some point in your career, you're going to be conducting interviews. Regardless of the role, you have the opportunity to shape the future of the company as your recommendation controls whether this person is going to be a colleague or not.

What a lot of people don't realize is that an interview can be the first experience someone has with your company. As such, you want this experience to be fantastic, even if they're not hired, as they could be a future customer of yours.

With interviewing being so important (there are whole books on the subject), it's confounding to me when companies don't invest in training or resources to help grow their leaders into better interviewers, especially when making a bad hire can cause so much damage and is so expensive to resolve in the long run.

Over my career, I've seen my share of good and bad interviews and have some tips and tricks to improve your interviewing skills. In this post, I'm going to share three tips that help me have better conversations with candidates.

1. Build Better Conversations Using Scenarios

The first mistake I've seen interviewers make is having a set of questions that they pepper the candidate with, in an effort to figure out whether they're going to be a good fit.

An ideal interview should flow more like a conversation, where the candidate is getting to know you and the company and you are learning about the candidate. A never-ending list of questions makes the candidate feel like they're being interrogated, and it doesn't allow for a natural conversation. A great interview should feel like tennis, with each player receiving and returning the ball.

For example, let's say that I want to gauge a candidate's familiarity with REST APIs. I could ask questions like:

What's the difference between GET and POST?

What's the difference between a 404 and 400 response code?

Even though I'll get answers, this is not much of a conversation; it's more of a quiz. Instead, I ask the following:

In this scenario, I'm a newer engineer sitting down to make some changes to one of your APIs and I seem to be running into some issues.

For example, when I invoke the endpoint via GET, I'm getting back a 404 (Not Found). Doing some digging, it seems like it's related to the resource not being there, but I'm not sure how to troubleshoot. What would you recommend?

With the above, the candidate has a clear problem (they can't communicate with the API) and plenty of space to talk through what they're thinking (incorrect route, API not running, etc.). As the candidate talks things through, I get more insight into what they know and how they think. For example, if they mention that a firewall could be blocking the request, I can dig into that a bit more and learn that they have knowledge of networking or cloud technologies.

Another advantage of this approach is that we can add more steps. For example, here's one of the scenarios I ask to measure understanding of REST APIs.

In this scenario, I'm a newer engineer sitting down to make some changes to one of your APIs and I seem to be running into some issues.

For example, when I invoke the endpoint via GET, I'm getting back a 404 (Not Found). Doing some digging, it seems like it's related to the resource not being there, but I'm not sure how to troubleshoot. What would you recommend?

I've fixed the typo, made another request, and I'm now getting a 401 (Unauthorized). Looking up the response code, this implies that I don't have permissions, but I'm stuck on next steps. What would you recommend for troubleshooting?

Oh right, Bearer Token, I remember reading that in the README, but I didn't understand at the time. After generating the token and making another request, I'm now getting a 400 (Bad Request). Looking up the status code, it seems like it's something related to the payload or route. How would you troubleshoot?

Finally! After fixing that issue, I was able to get a 200 (Ok) response back, thanks for the help!

By using the above scenario, I can learn quite a bit about what systems an engineer has worked with, what gaps they might have, and how they troubleshoot issues. This is a lot more effective than knowing if an engineer can tell the difference between GET and POST.

2. Build Better Conversations Using Open-Ended Questions

Another common mistake I see is asking closed-ended questions to gauge knowledge. These are binary in nature (Yes/No) or have a specific answer (What's the capital of North Carolina?), so they come off as interrogative rather than conversational. In addition, these types of questions are informational and could easily be looked up, whereas open-ended questions are opinion-based and draw on experience.

For example, if we were to ask:

What's the difference between an Observable and a Promise?

We would know if the candidate knows the difference or not and that's about it. Even though this knowledge is helpful, we could learn this (and more) by rephrasing it to be open-ended instead.

For example, if we were to ask:

When would you use an Observable over a Promise?

With this question, not only do we learn whether the candidate can talk about Observables vs. Promises, but we also learn whether they know when to use one over the other.

For an even more effective question, we could turn this into a scenario by asking the following:

Let's say that we're working on a web component that has to call an API to get some data. It looks like we could call the API and have the value returned be either an Observable or a Promise. What would you recommend and why?

In this scenario, we get to learn whether the candidate knows the differences between Observable and Promise, can reason about why one approach would be better than another, and can explain that reasoning to another engineer. No matter their choice, we could follow up by asking why they wouldn't pick the other option.

3. Build Better Conversations By Asking For Examples

For the final mistake, I see interviewers ask some form of leading question, where the phrasing of the question pressures or coerces the candidate into answering a particular way.

For example

This position involves mentoring interns to be associate engineers. Is that something you're comfortable with?

This is a leading question because if the candidate were to say "No", they would believe that they wouldn't get the job. So they will always answer yes, regardless of how they feel, which makes this question useless as it doesn't tell us anything about the candidate. Most leading questions also tend to be closed-ended questions, so that's a double strike for this style of interviewing.

But Cameron! I need to know this information as this person would be responsible for coaching up our engineers! Cool, then let's tell the candidate, but let's also provide some context and allow them to tell us their experience.

For example, we could phrase the question this way

One of the responsibilities of the role is to help grow interns into associate engineers so we can develop terrific engineers internally. With this context, can you walk us through a time when you had to coach someone up? What was your approach? What would you do differently?

With this question, you've still mentioned the skill you're looking for; however, you've added context on the "why" behind the question, and you've set the candidate up to talk about their experience, which in turn gives you more context about the person.