How Using Vertical Slicing Can Minimize Dependencies and Deliver Value Faster

How do we break down this work?

It's a good question, and it can help set the tone for the project. Assuming the work is more than a bug fix, it's natural to look at a big project and break it down into smaller, more approachable pieces.

Depending on how you break down the work, you can dramatically shorten the timeline for getting feedback from your users, which means finding issues much sooner.

In this post, let's look at a team breaking down a new feature for their popular application, TakeItEasy.

A New Day - A New Feature

It's a new sprint and your team is tackling a highly requested feature for TakeItEasy: the ability to set up a User Profile. Everyone is clear on the business requirements: we need the ability to save and retrieve the following information so that we can personalize the application experience for the logged-in user:

  • Display Name
  • Name
  • Email Address
  • Profile Picture

Going over the high level design with the engineers, it's discovered that there's not a way to save this data right now. In addition, we don't have a way to display this data for the user to enter or modify.

Breaking Work Down as Horizontal Layers

Working with the team, the feature gets broken down as the following stories:

  • Create the data storage
  • Create the data access layer
  • Create the User Profile screen

Once these stories are done, this feature is done, and that seems easy enough. As you talk with the team though, a few things stand out to you.

  1. None of these stories are fully independent of the others. You can build out the User Profile screen, but without the data access layer, it's incomplete. Same with the data access layer: it can't be fully complete until the data storage story is done.

  2. It's difficult to demo the majority of the stories. Stakeholders don't care about data storage or the data access layer, but they do care about the user setting up their profile. With the current approach, it's not possible to demo any work until all three stories are done.

As you dig into each story, it seems quite large:

  1. For the Data Storage work, it's an upgrade script to add nullable columns to the Users table.
  2. For the data access story, it's updating logic to retrieve each of the additional fields and making sure to handle missing values from the database.
  3. For the User Profile screen, it's creating a new page, updating the routing, and adding multiple controls with validation logic for each of the new fields.

Is there a different way we can approach this work such that we can deliver something useful sooner?

Breaking Down the Work as Vertical Slices

The main issue with the above approach is that there's a story for each layer of the application (data, business rules, and user interface), and each of these layers depends on the others. However, no one cares about any single layer; they care about all of it working together.

Two People Eating Nachos
Seriously, could you imagine enjoying a plate of nachos by first eating all the chips, then the beans, then the salsa?
Photo by Herson Rodriguez on Unsplash

One way to solve this problem would be to have a single story, Implement User Profile, that has all this work, but that sounds like more than a sprint's worth of work. We know that the more work in a story, the harder it is to give a fair estimate for what's needed.

Another approach is to change the way we slice the work by taking a bit of each layer into a story. This means that we'll have a little bit of database, a little bit of data access, and a little bit of user interface.

If we take this approach, we would have the following stories for our User Profile feature.

Feature: Implement User Profile

  • Story: Implement Display Name
  • Story: Implement Name
  • Story: Implement Email
  • Story: Implement Profile Picture

Each story would have the following tasks:

  • Add storage for field
  • Update data access to store/retrieve field
  • Update interface with control and validation logic

There are quite a few advantages with this approach.

First, instead of waiting for all the stories to get done before you can demo any functionality, you can demo after completing a single story. This is a huge advantage because if things are looking good, you could potentially go live with one story instead of waiting for all three stories from before.

Second, these stories are independent of each other, as the work to Implement Display Name doesn't depend on anything from Implement Email. This increases the autonomy of the team and allows us to shift priorities more easily: at the end of any one story, we can pick up any of the remaining stories.

For example, let's say that after talking more with customers, we need a way for them to add their favorite dessert. Instead of the business bringing in the new requirement and pushing back the timeline, engineering can work on that functionality next and get that shipped sooner.

Third, it's much easier to explain to engineers and stakeholders when a certain piece of functionality will be available. Going back to horizontal layering, it's not clear when a user would be able to set up their email address. Now, it's clear when that work is coming up.

Why the Horizontal Slicing?

I'm going to let you in on a little secret. Most engineers are technically strong but can be ignorant of the business domain that they're working in. Unless you're taking time to coach them on the business (or they've been working in the domain for a long period of time), engineers just don't know the business.

As such, it's difficult for engineers to speak in the ubiquitous language of the business; it's much easier to speak in technical details. This, in turn, leads to user stories that are technical in nature (modify table, build service, update pipeline) instead of user focused (can set display name, can set email address).

If you're an Engineer, you need to learn the business domain that you're working in. This will help you prevent problems from happening because your software literally won't allow them. In addition, it will help you see the bigger picture and build empathy with your users as you understand them better.

If you're in Product or Business, you need to work with your team to level up their business domain knowledge. This can be done by having them use the product like a user, giving them example tasks, and spending time talking about the domain. If you can get the engineers to be hands-on, every hour you invest here is going to pay huge dividends when it comes time to pick up the next feature.

Wrapping Up

The next time you and the team have a feature, try experimenting with vertically slicing your stories and see how that changes the dynamics of the team.

To get started, remember: focus on user outcomes and make sure that each story can stand independently of the others.

If this post resonated with you, I'd like to hear from you! Feel free to drop me a line at CoachingCorner@TheSoftwareMentor.com!

Today I Learned – The Difference Between Bubble and Capture for Events

I've recently been spending some time learning about Svelte and have been going through the tutorials.

When I made it to the event modifiers section, I saw that there's a modifier for capture where it mentions firing the handler during the capture phase instead of the bubbling phase.

I'm not an expert on front-end development, so I wasn't familiar with either of these concepts. Thankfully, the Svelte docs refer out to MDN for a better explanation of the two.

What is Event Bubbling?

Long story short, by default, when an event happens, the handler on the element that was interacted with fires first, and then each parent element receives the event in turn.

So if we have the following HTML structure, where there's a body that has a div that has a button:

<body>
  <div id="container">
    <button>Click me!</button>
  </div>
  <pre id="output"></pre>
</body>

With an event listener at each level:

// Setting up adding string to the pre element
const output = document.querySelector("#output");
const handleClick = (e) => output.textContent += `You clicked on a ${e.currentTarget.tagName} element\n`;

const container = document.querySelector("#container");
const button = document.querySelector("button");

// Wiring up the event listeners
document.body.addEventListener("click", handleClick);
container.addEventListener("click", handleClick);
button.addEventListener("click", handleClick);

When we click the button, our <pre> element will have:

You clicked on a BUTTON element
You clicked on a DIV element
You clicked on a BODY element

What is Event Capturing?

Event Capturing is the opposite of Event Bubbling: the root parent receives the event first, then each inner element receives it, finally making it to the element that started the event.

Let's see what happens with our example when we use the capture approach.

// Wiring up the event listeners
document.body.addEventListener("click", handleClick, {capture:true});
container.addEventListener("click", handleClick, {capture:true});
button.addEventListener("click", handleClick, {capture:true});

After clicking the button, we'll see the following messages:

You clicked on a BODY element
You clicked on a DIV element
You clicked on a BUTTON element

Why Would You Use Capture?

By default, events will work in a bubbling fashion and this intuitively makes sense to me since the element that was interacted with is most likely the right element to handle the event.

One case that comes to mind is if you find yourself attaching the same event listener to every child element. Instead, we could move that listener up to a common parent.

For example, let's say that we had the following layout

<div>
  <ul style="list-style-type: none; padding: 0px; margin: 0px; float: left">
    <li><a id="one">Click on 1</a></li>
    <li><a id="two">Click on 2</a></li>
    <li><a id="three">Click on 3</a></li>
  </ul>
</div>

And the following CSS:

li {
  list-style-type: none;
  padding: 0px;
  margin: 0px;
  float: left;
}

li a {
  color: black;
  background: #eee;
  border: 1px solid #ccc;
  padding: 10px 15px;
  display: block;
}
Which gives us the following layout

With this layout, let's say that we need to run some business rules whenever any of those links are clicked. If we wired up a listener on each element, we would have the following code:

// Stand-in for real business rules
function handleClick(e) {
  console.log(`You clicked on ${e.target.id}`);
}
// Get all the a elements
const elements = document.querySelectorAll("a");
// Wire up the click handler
for (const e of elements) {
  e.addEventListener("click", handleClick);
}

This isn't a big deal with three elements, but let's pretend that you had a list with tens of items, or a hundred items. You may run into a performance hit because of the overhead of wiring up that many event listeners.

Instead, we can use one event listener, bound to the common parent. This accomplishes the same effect without as much complexity.

Let's revisit our JavaScript and make the necessary changes.

// Stand-in for real business rules
function handleClick(e) {
  // NEW: Ignore clicks that land on the ul itself (e.g., the space between items)
  // by returning early when the currentTarget (the ul) is also the target
  if (e.currentTarget === e.target) {
    return;
  }
  console.log(`You clicked on ${e.target.id}`);
}
// NEW: Getting the common parent
const parent = document.querySelector("ul");
// NEW setting the eventListener to be capture based
parent.addEventListener("click", handleClick, {capture:true});

With this change, we're now wiring up a single listener instead of one per element.
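
As an aside, this delegation pattern isn't unique to capture: clicks on the anchors also bubble up through the ul, so the same single-listener approach would work in the default bubbling phase as well. A minimal sketch:

// Same delegation, relying on the default bubbling phase instead of capture
parent.addEventListener("click", handleClick);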

Wrapping Up

In this post, we looked at two different event propagation models, bubble and capture, the differences between the two and when you might want to use capture.

Five Tips to Have More Effective Meetings

As a leader, it's inevitable that you will have to organize a meeting. Whether it's for updates, 1-1s, or making decisions, the team is looking to you to lead the conversation and make it a good use of time.

But how do you have a good meeting? That's not something that's covered in leadership training. Is it the perfect invite? A well honed pitch? Throw something out there and see if it sticks?

Like anything else, a good meeting needs some preparation; however, if you follow these five tips, I guarantee your meetings will be better than before.

Step 1: Does It Even Need to Be a Meeting?

The best kind of meeting is the one that didn't have to happen. Have you ever sat through a meeting where everyone did a bunch of talking, you halfway listened and thought to yourself, "this could have been an email?"

man looking out window
Photo by BRUNO CERVERA on Unsplash

Been there, done that, and have the t-shirt.

When I think about why we need meetings, it's because we're trying to accomplish something that one person alone couldn't get done. With this assumption in mind, I find that meetings take one of two shapes: sharing information (e.g., stand-ups, retrospectives, all-hands) or making a decision (e.g., technical approach, ironing out the business rules).

Depending on what you're trying to accomplish, the next thought is to determine whether the communication needs to be synchronous (get everyone together) or asynchronous (let people get involved at their own pace).

For example, if the team has been struggling to get work done, then it makes sense to have a meeting to figure out what's happening and ensure that everyone hears the exact wording/tone of the messaging.

On the other hand, if your intent is to let the team know that Friday is a holiday, then that can be done through email or a message in your chat tool.

One way to figure out if the meeting could have been an email is to pretend you canceled it. Is there anything that can't proceed? If not, then maybe you don't need that meeting.

Step 2: How Do We Even Know If We're Successful?

Have you ever attended a meeting and didn't know what it was about or why you met? These types of meetings typically suffer from not having a goal or purpose.

Recall from Step #1, we're meeting because there's something that we need from the group that we couldn't do as individuals. So what is it?

When scheduling the meeting, include the purpose (here's why we're meeting) and the goal (here's how we know we're done) in the description. Not only is this a great way to focus the meeting, it can also serve as a way for people to know whether they need to attend.

dartboard with darts in it
Photo by Afif Ramdhasuma on Unsplash

This is also a good litmus test to see if you know why there should be a meeting, as it forces you to think about the problem being solved and how it should be addressed. If you're struggling to determine the purpose and the goal, then your attendees will also struggle.

Step 3: Do You Have The Right People?

A common mistake I see people make is inviting everyone who has a stake or passing interest in the topic, which can make for a large (10+ people) meeting.

Even though the intent is good (give everyone visibility), this is a waste because the more people you have in a meeting, the less effective it will be. A meeting with four people will have a better conversation and get more done than a meeting with nine.

Let's pretend that you're at a large party and you see a group that you know, so you walk up to the group, hoping to break into the conversation.

As more people join the group, they'll naturally split into smaller groups, each with their own conversation. The main reason is that the larger the group, the less likely you have a chance to participate and get involved. So you might start a conversation with one or two people, split off, and then form a new group.

Meetings have the same problem. The larger the group, the more likely that side conversations will happen, making it harder for you to facilitate and keep everyone on track.

To keep meetings effective, be sure to only include the necessary people. For example, instead of inviting an entire team, invite only 1 or 2 people.

At a high level, you need these three roles filled to have a successful meeting:

  1. The Shot Caller - This is the main stakeholder, the person who can approve decisions. Without their buy-in, no real decision can be made.
  2. The Brain Trust - These are the people who have the details and can drive the conversation. You want to keep this group as tightly focused as possible.
  3. The Facilitator - Generally the organizer, this is the person who ensures that the goal is achieved and keeps the meeting running.

One way to narrow down the invite list is to apply this test:

If this person can't make the meeting, then we can't meet.

If you can't accomplish the goal without them, then they need to be there. I'm such a believer in this advice that if it's the day of the meeting and we don't have the Shot Caller or the Brain Trust, then I'll reschedule the meeting, as I'd rather move it than waste everyone's time.

Woman presenting task board in front of team
Photo by Jason Goodman on Unsplash

Step 4: Running the Meeting

It's the big day and you've got everyone in the room, now what?

In Step #2, we talked about having a purpose and goal for the meeting. Now is when we vocalize these two things to kick the meeting off. From there, we can seed the conversation with one of these strategies:

  • Asking an opening question to prime the Brain Trust.
  • Throwing to the Shot Caller to frame any restrictions the attendees need to be aware of.
  • Starting with a specific person to kick the conversation off.

Once the conversation starts flowing, your job is to keep the meeting on track. For those who've played games like Dungeons and Dragons, you're acting like a Game Master: you know the direction the meeting needs to go (The Goal), but the attendees are responsible for getting there.

It can be challenging to keep the meeting on track if you're also driving the conversation, so pace yourself, take notes, and get others involved to keep the conversation going.

When leading longer meetings (more than 60 minutes), make sure to take a 10-minute break.

For attendees, this allows them to stretch their legs, take a bathroom break, and stew on the conversation that's happened so far. For those who are more "thinkers" than "reactors", this gives them time to compose their thoughts and have better conversations after the break.

As a facilitator, this gives you a way to think about the meeting so far, identify areas that the group needs to dig into, and if needed, it can break the conversation out of a rut.

Step 5: Wrapping Up - How Do Things Get Done?

As the meeting comes to a close, we need to make sure that action follows. A meeting with no follow-up is a lot like a rocking chair: plenty of motion, but no progress.

To make sure that next steps happen, define action items with attendees who own getting them done. Action items don't have to be complex; they could be as simple as:

  • Defining stories for the team
  • Sending summary notes to other stakeholders
  • Following up with Person about X.

When defining action items, be wary of items that are scheduling another meeting (e.g., let's schedule a meeting with Team Y to get their perspective). This implies that you didn't have the right people in the room (see Step 3). Also, remember, meetings are for getting alignment or coming up with a solution, so what purpose would this follow-up meeting have?

As the meeting wraps up, take a few moments to summarize the outcome, verbally confirm that action items have been assigned, and thank everyone for their attention and time.

Congratulations, You're an Expert With Meetings Now, Right?

Running effective meetings becomes easier if you take the time to do the necessary preparation. Even though these steps may seem heavy on the documentation, you'll find that they help you focus on the core problem at hand, which helps focus the group, which makes everyone that much better.

By following these five steps, you'll increase your chances of having a great meeting and as you gain more experience, you'll become more comfortable running them.

Scaling Effectiveness with Docs - Finding Stale Docs

In a previous post, I argued that to help your team be effective, you need to have up-to-date docs, and to have this happen, you need some way of flagging stale documentation.

In this series, I show you how you can automate this process by creating a seed script, a check script, and then automating the check script. In today's post, let's develop the check script.

Breaking Down the Check Script

At a high level, our script will need to perform the following steps:

  1. Specify the location to search.
  2. Find all the markdown files in the directory.
  3. Get the "Last Reviewed" line of text.
  4. Check if the date is more than 90 days in the past.
  5. If so, print the file to the screen.

Specifying Location

Our script is going to search over our repository; however, I don't want the script to be responsible for cloning and cleaning up those files. Since the long-term plan is for the script to run through GitHub Actions, we can have the pipeline be responsible for cloning the repo.

This means our script has to be told where to search, and since it can't take manual input, we're going to use an environment variable.

First, let's create a .env file that will store the path of the repository:

.env
REPO_DIRECTORY="ABSOLUTE PATH GOES HERE"

From there, we can start working on our script to have it use this environment variable.

index.ts
import { load } from "https://deno.land/std@0.195.0/dotenv/mod.ts";

await load({ export: true }); // this loads the env file into our environment

const directory = Deno.env.get("REPO_DIRECTORY");

if (!directory) {
  console.log("Couldn't retrieve the REPO_DIRECTORY value from environment.");
  Deno.exit();
}
console.log(directory);

If we run our Deno script with deno run --allow-read --allow-env ./index.ts, we should see the environment variable logged.

Finding all the Markdown Files

Now that we have a directory, we need a way to get all the markdown files from that location.

Doing some digging, I didn't find a built-in library for doing this, but building our own isn't too terrible.

By using Deno.readDir/Sync, we can get all the entries in the specified directory.

From here, we can then recurse into the other folders and get their markdown files as well.

Let's create a new file, utility.ts, and add a new function, getMarkdownFilesFromDirectory.

utility.ts
export function getMarkdownFilesFromDirectory(directory: string): string[] {
  // let's get all the files from the directory
  const allEntries: Deno.DirEntry[] = Array.from(Deno.readDirSync(directory));

  // Get all the markdown files in the current directory
  const markdownFiles = allEntries.filter(
    (x) => x.isFile && x.name.endsWith(".md")
  );
  // Find all the non-hidden folders in the directory
  const folders = allEntries.filter(
    (x) => x.isDirectory && !x.name.startsWith(".")
  );
  // Recurse into each folder and get their markdown files
  const subFiles = folders.flatMap((x) =>
    getMarkdownFilesFromDirectory(`${directory}/${x.name}`)
  );
  // Return the markdown files in the current directory and the markdown files in the children directories
  return markdownFiles.map((x) => `${directory}/${x.name}`).concat(subFiles);
}
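
As an aside, the standard library's fs/walk module can also handle this recursion. Here's a sketch of an alternative, assuming std@0.195.0 (the version used elsewhere in this post) and a skip pattern that mimics the dot-folder exclusion above; we'll stick with the hand-rolled version for the rest of the series:

// Alternative sketch: collect markdown files with std's walk(), skipping hidden folders
import { walk } from "https://deno.land/std@0.195.0/fs/walk.ts";

export async function getMarkdownFilesViaWalk(directory: string): Promise<string[]> {
  const files: string[] = [];
  for await (const entry of walk(directory, { exts: [".md"], skip: [/\/\./] })) {
    if (entry.isFile) files.push(entry.path);
  }
  return files;
}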

With this function in place, we can update our index.ts script to be the following:

index.ts
import { load } from "https://deno.land/std@0.195.0/dotenv/mod.ts";
import { getMarkdownFilesFromDirectory } from "./utility.ts";

await load({ export: true }); // this loads the env file into our environment

const directory = Deno.env.get("REPO_DIRECTORY");

if (!directory) {
  console.log("Couldn't retrieve the REPO_DIRECTORY value from environment.");
  Deno.exit();
}

const files = getMarkdownFilesFromDirectory(directory);
console.log(files);

Running the script with deno run --allow-read --allow-env ./index.ts should print a list of all the markdown files to the screen.

Getting the Last Reviewed Text

Now that we have each file, we need a way to get its "Last Reviewed On" line of text.

Using Deno.readTextFile/Sync, we can get the file contents. From there, we can split the contents into lines and find the last occurrence of Last Reviewed On.

Let's add a new function, getLastReviewedLine, to the utility.ts file.

utility.ts
export function getLastReviewedLine(fullPath: string): string {
  // Get the contents of the file, removing extra whitespace and blank lines
  const fileContent = Deno.readTextFileSync(fullPath).trim();

  // Convert block of text to a array of strings
  const lines = fileContent.split("\n");

  // Find the last line that starts with Last Reviewed On
  const lastReviewed = lines.findLast((x) => x.startsWith("Last Reviewed On"));

  // If we found it, return the line, otherwise, return an empty string
  return lastReviewed ?? "";
}

Let's try this function out by modifying our index.ts file to display files that don't have a Last Reviewed On line.

index.ts
import { load } from "https://deno.land/std@0.195.0/dotenv/mod.ts";
import {
  getMarkdownFilesFromDirectory,
  getLastReviewedLine,
} from "./utility.ts";

await load({ export: true }); // this loads the env file into our environment

const directory = Deno.env.get("REPO_DIRECTORY");

if (!directory) {
  console.log("Couldn't retrieve the REPO_DIRECTORY value from environment.");
  Deno.exit();
}

const files = getMarkdownFilesFromDirectory(directory);
files
  .filter((x) => getLastReviewedLine(x) === "") // keep only files missing the line
  .forEach((s) => console.log(s)); // print them to the screen

Determining If A Page Is Stale

At this point, we can get the "Last Reviewed On" line from a file, but we've got some more business rules to implement.

  • If there's no Last Reviewed On line at all, then the file needs to be reviewed
  • If there's a Last Reviewed On line, but there's no date, then the file needs to be reviewed
  • If there's a Last Reviewed On line, but the date is invalid, then the file needs to be reviewed
  • If there's a Last Reviewed On line, and the date is more than 90 days old, then the file needs to be reviewed
  • Otherwise, the file doesn't need review.

The function first checks that the line starts with "Last Reviewed On: "; anything else (including the empty string returned for a missing line) gets flagged. From there, we need to extract the date.

Since we assume the line format is Last Reviewed On: followed by the date, we can strip off that prefix to get the rest of the line. We're also going to assume that the date will be in YYYY/MM/DD format.

utility.ts
export function doesFileNeedReview(line: string): boolean {
  if (!line.startsWith("Last Reviewed On: ")) {
    return true;
  }
  const date = line.replace("Last Reviewed On: ", "").trim();
  const parsedDate = new Date(Date.parse(date));
  // An invalid date is still a truthy Date object, so check its timestamp for NaN
  if (isNaN(parsedDate.getTime())) {
    return true;
  }

  // We could use something like DayJS, but to keep libraries to a minimum, we can do the following
  const cutOffDate = new Date(new Date().setDate(new Date().getDate() - 90));

  return parsedDate < cutOffDate;
}
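
To sanity-check those rules, here are a few hypothetical inputs and what we'd expect back (all of these should be flagged):

// Hypothetical sanity checks for doesFileNeedReview
console.log(doesFileNeedReview(""));                             // true - no Last Reviewed On line
console.log(doesFileNeedReview("Last Reviewed On: "));           // true - line present, date missing
console.log(doesFileNeedReview("Last Reviewed On: not-a-date")); // true - invalid date
console.log(doesFileNeedReview("Last Reviewed On: 2020/01/01")); // true - more than 90 days old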

Let's update our index.ts file to use the new function.

index.ts
import { load } from "https://deno.land/std@0.195.0/dotenv/mod.ts";
import {
  getMarkdownFilesFromDirectory,
  getLastReviewedLine,
  doesFileNeedReview,
} from "./utility.ts";

await load({ export: true }); // this loads the env file into our environment

const directory = Deno.env.get("REPO_DIRECTORY");

if (!directory) {
  console.log("Couldn't retrieve the REPO_DIRECTORY value from environment.");
  Deno.exit();
}

getMarkdownFilesFromDirectory(directory)
  .filter((file) => doesFileNeedReview(getLastReviewedLine(file)))
  .forEach((file) => console.log(file)); // print them to the screen

And just like that, we're able to print stale docs to the screen. At this point, you could create a scheduled batch job and start using this script.
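
For example, here's a minimal sketch of a cron entry, assuming a Linux box with deno on the PATH and the script living in a hypothetical /path/to/doc-checker folder, that runs the check every Monday morning:

# Hypothetical crontab entry: run the check script Mondays at 9am
0 9 * * 1 cd /path/to/doc-checker && deno run --allow-read --allow-env ./index.ts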

However, if you wanted to share this with others (and have this run not on your box), then stay tuned for the final post in this series where we put this into a GitHub Action and post a message to Slack!

Scaling Effectiveness with Docs - Seeding Dates

In a previous article, I argued that to help your team be effective, you need to have up-to-date docs, and to have this happen, you need some way of flagging stale documentation.

This process lends itself to being easily automated, so in this series of posts, we'll build out the necessary scripts to check for docs that haven't been reviewed in the last 90 days.

All code used in this post can be found on my GitHub.

Approach

To make this happen, we'll need to create the following:

  1. A seed script that will add a Last Reviewed Date to all of our pages.
  2. A check script that will check files for the Last Reviewed Date, returning which ones are either missing a date or are older than 90 days.
  3. Create a scheduled job using GitHub Actions to run our check script and post a message to our Slack channel.

For this post, we'll be creating the seed script.

Breaking Down the Seed Script

For this script to work, we need to be able to do three things:

  1. Determine the last commit date for a file.
  2. Add text to the end of the file.
  3. Get a list of files in a directory.

To determine the last commit date for a file, we can leverage git and its log command (more on this in a moment). Since we're mainly doing file manipulation, we could use Deno here, but it makes much more sense to me to use something like bash or PowerShell.

Determining the Last Commit Date For a File

To make this automation work, we need a date for the Last Reviewed On footer. You don't want to set all the files to the same date, because then all the files will come up for review in one big batch.

So you're going to want to stagger the dates. You could do this by generating random dates, but honestly, using the last commit date should be good enough.

To do this, we can take advantage of git's log command with the --pretty flag.

We can test this out by using the following script.

file=YourFileHere.md
commitDate=$(git log -n 1 --pretty=format:%aI -- $file)
# formatting date to YYYY/MM/DD
formattedDate=$(date -d "$commitDate" "+%Y/%m/%d")
echo $formattedDate

Assuming the file has been checked into Git, we should get the date back in YYYY/MM/DD format. Success! (Note that date -d is GNU date syntax; on macOS, you'd need GNU coreutils or the BSD-style -j -f flags instead.)

Adding Text to End of File

Now that we have a way to get the date, we need to add some text to the end of the file. Since we're working in markdown, we can use --- to denote a footer and then place our text.

Since we're going to be appending multiple lines of text, we can use the cat command with here-docs.

file=YourFileHere.md
# Note the blank lines, this is to make sure that the footer is separated from the text in the file
# Note: The closing EOF has to be on its own line with no whitespace or other characters in front of it.
cat << EOF >> $file


---
Last Reviewed On: 2023/08/12
EOF

After running this script, we'll see the blank lines and our new footer appended to the file.

Combining Into a New Script

Now that we have both of these steps figured out, we can combine them into a single script like the following:

file=YourFileHere.md
commitDate=$(git log -n 1 --pretty=format:%aI -- $file)
# formatting date to YYYY/MM/DD
formattedDate=$(date -d "$commitDate" "+%Y/%m/%d")
# Note the blank lines, this is to make sure that the footer is separated from the text in the file
# Note: The closing EOF has to be on its own line with no whitespace or other characters in front of it.
cat << EOF >> $file


---
Last Reviewed On: $formattedDate
EOF

Nice! Given a file, we can figure out its last commit date and append it to the file. Let's make this more powerful by not having to hardcode a file name.

Finding Files In a Directory

At this point, we can update a file, but the file is hardcoded. But we're going to have a lot of docs to review, and we don't want to do this manually, so let's figure out how we can get all the markdown files in a directory.

For this exercise, we can use the find command. In our case, we need to find all the files with a .md extension, no matter what directory they're in.

directory=DirectoryPathGoesHere
find $directory -name "*.md" -type f

We're going to need to process each of these files, so some type of iteration would be helpful. Doing some digging, I found that Bash supports a for loop, so let's use that.

directory=DirectoryPathGoesHere
for file in `find $directory -name "*.md" -type f`; do
  echo "printing $file"
done

If everything works, we should see each markdown file name being printed.

When a Plan Comes Together

We've got all the pieces, so let's bring this together:

directory=DirectoryPathGoesHere
for file in `find $directory -name "*.md" -type f`; do
  commitDate=$(git log -n 1 --pretty=format:%aI -- $file)
  # formatting date to YYYY/MM/DD
  formattedDate=$(date -d "$commitDate" "+%Y/%m/%d")
  # Note the blank lines, this is to make sure that the footer is separated from the text in the file
  # Note: The closing EOF has to be on its own line with no whitespace or other characters in front of it.
  cat << EOF >> $file


---
Last Reviewed On: $formattedDate
EOF
done

Bells and Whistles

This script works, and we could ship it; however, it's a bit rough.

For example, the script assumes that it's in the same directory as your git repository. It also assumes that your repository is up-to-date and that it's safe to make changes on the current branch.

Let's make our script a bit more durable by making the following tweaks:

  1. Clone the repo to a new temp directory.
  2. Create a new branch for making changes.
  3. Commit changes and publish the branch.

Getting the latest version of the repo

For this step, let's add logic for creating a new temp directory and adding a call to git clone.

# see https://unix.stackexchange.com/questions/30091/fix-or-alternative-for-mktemp-in-os-x#answer-84980
# for why tmpDir is being created this way
docRepo="RepoUrlGoesHere"
tmpDir=$(mktemp -d 2>/dev/null || mktemp -d -t 'docSeed')
cd $tmpDir
echo "Cloning from $docRepo"
# Note the . here, this allows us to clone to the temp folder and not to a new folder of repo name
git clone "$docRepo" . &> /dev/null

Making a new branch and pushing changes

Now that we've got the repo, we can add the steps for switching branches, committing changes, and publishing the branch.

# ... code to clone repository
git switch -c 'adding-seed-dates'
# ... code to make file changes
git add --all
git commit -m "Adding seed dates"
git push -u origin adding-seed-dates

Final Script

Let's take a look at our final script:

#!/bin/bash
docRepo="RepoUrlGoesHere"
tmpDir=$(mktemp -d 2>/dev/null || mktemp -d -t 'docSeed')
cd $tmpDir

echo "Cloning from $docRepo"
git clone "$docRepo" . &> /dev/null

git switch -c 'adding-dates-to-files'

for file in `find . -name "*.md" -type f`; do
  echo "updating $file"
  commitDate="$(git log -n 1 --pretty=format:%aI -- $file)"
  formattedDate=$(date -d "$commitDate" "+%Y/%m/%d")
  cat << EOT >> $file


---
Last Reviewed On: $formattedDate
EOT
done
git add --all
git commit -m "Adding initial dates"
git push -u origin adding-dates-to-files

Wrapping Up

In this post, we wrote a bash script to clone our docs and add a new footer to every page with the file's last commit date. In the next post, we'll build the script that checks for stale files.

Coaching Corner Volume 4

Welcome to Cameron's Coaching Corner, where we answer questions from readers about leadership, career, and software engineering.

In this post, we'll look at a question posed by Bastien in the Engineering Manager's Slack Group on how to praise your team.

Context: New to Engineering Manager, managing 5 people and working in a 5 person team. My managees are not 100% on my team.

Details: OK, so I've quickly learnt how to spot mistakes and follow up improvements to both teams (one I manage and one I work on). I'm confident taking actions and communicating on all of that. But there is the other side -> congratulation and following up on behavior/action.

Example: The current team has low velocity. They recently finished the specs and review. It didn't happen for months (always late on that), but it's their "normal" velocity. I congratulated them, but I'm wondering if I should have since they "just did their job".

How do you congratulate your coworkers? Specifically

  • Do you? Why or Why Not?
  • How?
  • On trivial/exceptional stuff?
  • When and Where?

Having Coffee with Deno - Automating All the Things

Welcome to the final installment of our Deno series, where we build a script that pairs up people for coffee.

In the last post, we added the ability to post messages into a Slack channel instead of copying from a console window.

The current major problem is that we have to remember to run the script. We could always set up a cron job or scheduled task; however, what happens when we change machines? What if our computer stops working? If someone else changes the script, how will we remember to get the latest version and run it?

Having Coffee with Deno - Sharing the News

Welcome to the third installment of our Deno series, where we build a script that pairs up people for coffee.

In the last post, we started dynamically pulling members of the Justice League from GitHub instead of using a hardcoded list.

Like any good project, this approach works, but now the major problem is that we have to run the script, copy the output, and post it into our chat tool so everyone knows the schedule.

It'd be better if we could update our script to post this message instead. In this example, we're going to use Slack and their incoming webhook, but you could easily tweak this approach to work with other tools like Microsoft Teams or Discord.

The Approach

In order for the script to post to Slack, we'll need to make the following changes:

  1. Follow these docs from Slack to create an application and enable incoming webhooks.
  2. Test that we can post a simple message to the channel.
  3. Add code to make a POST call to the webhook with a message.
  4. Tweak the formatting so it looks nicer.

Justice League Planning

Creating the Slack Application and the Webhook

For this step, we'll follow the instructions in the docs, ensuring that we're hooking it up to the right channel.

After following the steps, you should see something like the following:

Image of Slack App with Incoming Webhook

We can test that things are working correctly by running the curl command provided. If the message Hello World appears in the channel, congrats, you've got the incoming webhook created!
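
If you've lost track of that command, it looks roughly like this (the URL below is a placeholder; use the webhook URL Slack generated for your app):

# Placeholder URL - substitute your own incoming webhook URL
curl -X POST -H 'Content-type: application/json' \
  --data '{"text":"Hello World"}' \
  https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX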

Modifying the Script to POST to Webhook

We have the Slack app created and have verified that the incoming webhook is working, so now we need to add this integration to our script.

Since Slack recommends treating the incoming webhook URL as a secret, we'll need to add it to our .env file.

Keep your webhook URL safe image

.env
GITHUB_API_TOKEN="<yourTokenHere>"
SLACK_WEBHOOK="<yourWebHookHere>"

With this secret added, we can write a new function, sendMessage, that'll make the POST call to Slack. Since this is a new integration, we'll add a new file, slack.ts, to put it in.

slack.ts
// Using axiod for the web connection
import axiod from "https://deno.land/x/axiod@0.26.2/mod.ts";

// function to send a message to the webhook
async function sendMessage(message: string): Promise<void> {
  // Get the webhookUrl from our environment
  const webhookUrl = Deno.env.get("SLACK_WEBHOOK")!;

  try {
    // Send the POST request
    await axiod.post(webhookUrl, message, {
      headers: {
        "Content-Type": "application/json",
      },
    });
  } catch (error) {
    // Error handling
    if (error.response) {
      return Promise.reject(
        `Failed to post message: ${error.response.status}, ${error.response.statusText}`
      );
    }
    return Promise.reject(
      "Failed for non status reason " + JSON.stringify(error)
    );
  }
}

export { sendMessage };

With sendMessage done, let's update index.ts to use this new functionality.

index.ts
import { load } from "https://deno.land/std@0.195.0/dotenv/mod.ts";
import {
  GetOrganizationMemberResponse,
  getMembersOfOrganization,
} from "./github.ts";
import { sendMessage } from "./slack.ts";
import { Pair, createPairsFrom, shuffle } from "./utility.ts";

await load({ export: true });

// Replace this with your actual organization name
const organizationName = "JusticeLeague";
const names = await getMembersOfOrganization(organizationName);
const pairs = createPairsFrom(shuffle(names));
const message = createMessage(pairs);
// Slack expects the payload to be an object of text, so we're doing that here for now
await sendMessage(JSON.stringify({ text: message }));

function createMessage(pairs: Pair<GetOrganizationMemberResponse>[]): string {
  const mapper = (p: Pair<GetOrganizationMemberResponse>): string =>
    `${p.first.login} meets with ${p.second.login}${
      p.third ? ` and ${p.third.login}` : ""
    }`;
  return pairs.map(mapper).join("\n");
}

If we run the above, we'll see the following message get sent to Slack.

Message from Random Coffee

Nice! We could leave it here, but we could make the message prettier (having an unordered list and italicizing names), so let's work on that next.

Pretty Printing the Message

So far, we could leave the messaging as is; however, it's a bit muddled. To help it pop, let's make the following changes.

  • Italicize the names
  • Start each pair with a bullet point

Since Slack supports basic Markdown in messages, we can use _ for italics and - for bullet points. So let's modify the createMessage function to add this formatting.

index.ts
function createMessage(pairs: Pair<GetOrganizationMemberResponse>[]): string {
  // Let's wrap each name with the '_' character
  const formatName = (s: string) => `_${s}_`;

  const mapper = (p: Pair<GetOrganizationMemberResponse>): string =>
    // and start each pair with '-'
    `- ${formatName(p.first.login)} meets with ${formatName(p.second.login)}${
      p.third ? ` and ${formatName(p.third.login)}` : ""
    }`;
  return pairs.map(mapper).join("\n");
}

By making this small change, we now see the following message:

Formatted Slack Message with italics and bullets

The messaging is better, but we're still missing some clarity. For example, what date is this for? Or what's the purpose of the message? Looking through these docs, it seems like we could add different text blocks (like titles). So let's see what this could look like.

One design approach is to encapsulate the complex logic for dealing with Slack and only expose a "common-sense" API for consumers. In this regard, I think using a Facade pattern would make sense.

We want to expose the ability to set a title and to set a message through one or more lines of text. Here's what that code would look like:

slack.ts
// This class allows a user to set a title and lines and then use the
// 'build' method to create the payload to interact with Slack

class MessageFacade {
  // setting some default values
  private header: string;
  private lines: string[];
  constructor() {
    this.header = "";
    this.lines = [];
  }

  // I like making these types of classes fluent
  // so that it returns itself.
  public setTitle(title: string): MessageFacade {
    this.header = title;
    return this;
  }
  public addLineToMessage(line: string | string[]): MessageFacade {
    if (Array.isArray(line)) {
      this.lines.push(...line);
    } else {
      this.lines.push(line);
    }
    return this;
  }

  // Here's where we take the content that the user provided
  // and convert it to the JSON shape that Slack expects
  public build(): string {
    // create the header block if set, otherwise null
    const headerBlock = this.header
      ? {
          type: "header",
          text: { type: "plain_text", text: this.header, emoji: true },
        }
      : null;
    // convert each line to its own section
    const lineBlocks = this.lines.map((line) => ({
      type: "section",
      text: { type: "mrkdwn", text: line },
    }));
    return JSON.stringify({
      // take all blocks that have a value and set it here
      blocks: [headerBlock, ...lineBlocks].filter(Boolean),
    });
  }
}

With the facade in place, let's look at implementing this in index.ts

index.ts
// ... code to get the pairs and formatted lines

// using the facade with the fluent syntax
const message = new MessageFacade()
  .setTitle(`☕ Random Coffee ☕`)
  .addLineToMessage(formattedPairs)
  .build();

await sendMessage(message);

When we run the script now, we get the following message:

Random Coffee Message with Header and Icons

Wrapping Up

In this post, we changed our script from posting its Random Coffee message to the console window to instead posting it into a Slack channel using an Incoming Webhook. By making this change, we were able to remove a manual step (e.g., us copying the message into the channel), and we were able to use some cool emojis and better formatting.

In the final post, we'll take this one step further by automating the script using scheduled jobs via GitHub Actions.

As always, you can find a full working version of this bot on my GitHub.

Scaling Effectiveness with Docs

As a leader, I'm always looking for ways to help my team to be more efficient. To me, an efficient team is self-sufficient, able to find the information needed to solve their problems.

I've found that having up-to-date documentation is critical for a team because it scales out knowledge asynchronously, removing the need for manual knowledge transfers.

For example, my team has a wiki that contains information for onboarding into our space, how to complete certain processes (requesting time off, resetting a password), how to run our Agile activities, and our support guidebook. At any point, if someone on the team doesn't know how to do something, they can consult the wiki and find the necessary information.

Docs. Why Did It Have to Be Docs?

I enjoy up-to-date documentation, but the main problem with docs is that they capture the state of the world when they were written; they don't react to changes. If the process for resetting your password changes, the documentation doesn't auto-update. So unless you're spending time reviewing the docs, they'll grow stale and become worthless, or even worse, mislead others into doing the wrong things.

A good mental model for documentation is to think of them as a garden. When planted, it looks great, and everyone enjoys the environment. Over time, weeds will grow, and plants will become overgrown, causing the garden to be less attractive. Eventually, people will stop visiting, and the garden will go into disrepair. To prevent this, we must take care of the garden, removing the weeds and trimming the plants.

Outdoor green space

Photo by Robin Wersich via Unsplash.com

Alright, I get it, documentation is important, but my team has commitments, so how do we carve out time to review?

Cameron Learns About Document Control

I started my career in healthcare, and one of my first jobs was writing software for a medical diagnostic device. We were ISO 9001 certified, and the device was considered Class II by the FDA. Long story short, this meant that we had to provide documentation for our device and software and also show that we were keeping things up to date.

To comply, we would find docs that hadn't been updated in a specific time period (like 90 days) and review them. If everything checked out, we'd bump up the review date. Otherwise, we'd make the necessary changes and revalidate the document.

At the time, all of our files were in Word, so it wasn't the easiest to search them (I recall that we had Outlook reminders, but this was many moons ago).

By baking this review into our process, we made the work more visible, which in turn gave us a better idea of the team's capacity for that sprint.

Thankfully, we have better technology than Word for sharing information, so how can we take this approach and bring it up to the modern day?

Modern Take on an Old Classic

First, I think that having your docs in source control is a great idea. If you're using a tool like Git, you already have a way of leaving comments and keeping track of approvals through pull requests.

To make the most of Git, you should keep your docs in plaintext, as it's easy to see the differences. I enjoy using Markdown, and tools like MkDocs make this workflow possible.

With this figured out, our next step is to know when a file was last reviewed. We can do that by adding a new line to the bottom of each file: Last Reviewed On: YYYY/MM/DD. To come up with the initial date, we can use the last time the file was modified (thanks, git log!).
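
For example, the bottom of a reviewed doc would end up looking something like this (the date is just an illustration):

... the rest of the doc ...

---
Last Reviewed On: 2023/08/12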

At this point, we have a way to see when a file was last reviewed. The next step is to write a script that can find files that haven't been reviewed in the last 90 days. At a high level, we'd do the following:

  1. Get the latest for the doc repository.
  2. Get all the markdown files for the repository.
  3. Get the last line of the file.
  4. If the line doesn't start with Last Reviewed On:, we flag it for review as it's never been reviewed.
  5. If the line has a date, but it's older than 90 days, we flag it for review as it might be stale.
  6. Print all flagged files to the screen.

With the script created, we could manually run it on Mondays. But we're technical, right? Why not create a scheduled task to execute the script instead? This removes a manual task and gives us visibility into which docs need review.

Wrapping Up

When scaling your knowledge out, having great documentation is necessary as it allows your team to self-serve and work in a more asynchronous manner. The main problem with documentation is that it captures the state of the world when the docs were written, but they don't automatically update when the world changes.

Therefore, we need to have some process to flag and review stale docs. To ensure it gets done, we provide visibility by creating work items and committing to them during the sprint.

Having Coffee with Deno - Dynamic Names

Welcome to the second installment of our Deno series, where we build a script that pairs up people for coffee.

In the last post, we built a script that helped the Justice League meet up for coffee.

As of right now, our script looks like the following.

index.ts
const names = [
  "Batman",
  "Superman",
  "Green Lantern",
  "Wonder Woman",
  "Static Shock", // one of my favorite DC heroes!
  "The Flash",
  "Aquaman",
  "Martian Manhunter",
];
const pairs = createPairsFrom(shuffle(names));
const message = createMessage(pairs);
console.log(message);

function shuffle<T>(items: T[]): T[] {
  const result = [...items];
  for (let i = result.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [result[i], result[j]] = [result[j], result[i]];
  }
  return result;
}
type Pair<T> = { first: T; second: T; third?: T };
function createPairsFrom<T>(items: T[]): Pair<T>[] {
  if (items.length < 2) {
    return [];
  }
  const results = [];
  for (let i = 0; i <= items.length - 2; i += 2) {
    const pair: Pair<T> = { first: items[i], second: items[i + 1] };
    results.push(pair);
  }
  if (items.length % 2 === 1) {
    results[results.length - 1].third = items[items.length - 1];
  }
  return results;
}
function createMessage(pairs: Pair<string>[]): string {
  const mapper = (p: Pair<string>) =>
    `${p.first} meets with ${p.second}${p.third ? ` and ${p.third}` : ""}`;

  return pairs.map(mapper).join("\n");
}

Even though this approach works, the major problem is that every time there's a membership change in the Justice League (which seems to happen more often than not), we have to go back and update the list manually.

It'd be better if we could get this list dynamically instead. Given that the League members are great developers, they have their own GitHub organization. Let's work on integrating with GitHub's API to get the list of names.

The Approach

To get the list of names from GitHub, we'll need to do the following.

  1. First, figure out which GitHub endpoint will give us the members of the League. This, in turn, tells us what permissions our API token needs.
  2. Create that token and update our script to read it from an .env file.
  3. Once the secret is being read, create a function to retrieve the members of the League.
  4. Miscellaneous refactoring of the main script to handle a function returning complex types instead of strings.

Justice League Planning

Laying the Foundation

Before we start, we should refactor our current file. It works, but we have a mix of utility functions (shuffle and createPairsFrom) combined with presentation functions (createMessage). Let's go ahead and move shuffle and createPairsFrom to their own module.

utility.ts
type Pair<T> = { first: T; second: T; third?: T };

function shuffle<T>(items: T[]): T[] {
  const result = [...items];
  for (let i = result.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [result[i], result[j]] = [result[j], result[i]];
  }
  return result;
}

function createPairsFrom<T>(items: T[]): Pair<T>[] {
  if (items.length < 2) {
    return [];
  }
  const results: Pair<T>[] = [];
  for (let i = 0; i <= items.length - 2; i += 2) {
    const pair: Pair<T> = { first: items[i], second: items[i + 1] };
    results.push(pair);
  }
  if (items.length % 2 === 1) {
    results[results.length - 1].third = items[items.length - 1];
  }
  return results;
}

export { createPairsFrom, shuffle };
export type { Pair };

With these changes, we can update index.ts to be:

index.ts
import { Pair, createPairsFrom, shuffle } from "./utility.ts";

const names = [
  "Batman",
  "Superman",
  "Green Lantern",
  "Wonder Woman",
  "Static Shock", // one of my favorite DC heroes!
  "The Flash",
  "Aquaman",
  "Martian Manhunter",
];
const pairs = createPairsFrom(shuffle(names));
const message = createMessage(pairs);
console.log(message);

function createMessage(pairs: Pair<string>[]): string {
  const mapper = (p: Pair<string>) =>
    `${p.first} meets with ${p.second}${p.third ? ` and ${p.third}` : ""}`;

  return pairs.map(mapper).join("\n");
}

Getting GitHub

Now that our code is tidier, we can focus on figuring out which GitHub endpoint(s) will give us the members of the Justice League.

Taking a look at the docs, we see that there are two different options.

  1. Get members of an Organization
  2. Get members of a Team

What's the difference between the two? In GitHub parlance, an Organization is an overarching entity that consists of members which, in turn, can be part of multiple teams.

Using the Justice League as an example, it's an organization that contains Batman, and Batman can be part of the Justice League Founding Team and a member of the Batfamily Team.

Since we want to pair everyone up in the Justice League, we'll use the Get members of an Organization approach.

Working with Secrets

To interact with the endpoint, we'll need to create an API token for GitHub. Looking over the docs, our token needs to have the read:org scope. We can create this token by following the instructions here about creating a Personal Access Token (PAT).

Once we have the token, we can invoke the endpoint with cURL or Postman to verify that we can communicate with the endpoint correctly.
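
For a quick check, a curl call like the following should return a JSON array of members (the org name and token here are placeholders):

# Placeholder org and token - substitute your own values
curl -H "Accept: application/vnd.github+json" \
     -H "Authorization: Bearer YOUR_TOKEN_HERE" \
     -H "X-GitHub-Api-Version: 2022-11-28" \
     https://api.github.com/orgs/JusticeLeague/members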

After we've verified, we'll need a way to get this API token into our script. Given that this is sensitive information, we absolutely should NOT check this into the source code.

Creating an ENV File

A common way of dealing with that is to use an .env file which doesn't get checked in, but our application can use it during runtime to get secrets.

Let's go ahead and create the .env file and put our API token here.

.env
GITHUB_BEARER_TOKEN="INSERT_TOKEN_HERE"

Our problem now is that if we check git status, we'll see this file listed as a change. We don't want to check this in, so let's add a .gitignore file.

Adding a .gitignore File

With the .env file created, we need to create a .gitignore file, which tells Git not to check in certain files.

Let's go ahead and add the file. You can enter the below, or you can use the Node gitignore file (found here)

.gitignore
# ignores the .env file in the root directory
.env

We can validate that we've done this correctly if we run git status and don't see .env showing up anymore as a changed file.

Loading Our Env File

Now that we have the file created, we need to make sure that this file loads at runtime.

In our index.ts file, let's make the following changes.

index.ts
import { config as loadEnv } from "https://deno.land/x/dotenv@v3.2.2/mod.ts";
// other imports

// This loads the .env file and adds them to the environment variable list
await loadEnv({ export: true });
// Deno.env.get("name") retrieves the value from an environment variable named "name"
console.log(Deno.env.get("GITHUB_BEARER_TOKEN"));

// remaining code

When we run the script now with deno run, we get an interesting prompt:

Deno requests read access to ".env".
- Requested by `Deno.readFileSync()` API.
- Run again with --allow-read to bypass this prompt
- Allow?

This is one of the coolest parts about Deno; it has a security system that prevents scripts from doing something that you hadn't intended through its Permissions framework.

For example, if you weren't expecting your script to read from the env file, it'll prompt you to accept. Since packages can be taken over and updated to do nefarious things, this is a terrific idea.

The permissions can be tuned (e.g., you're only allowed to read from the .env file), or you can give blanket permissions. In our case, two resources are being used: the ability to read the .env file and the ability to read the GITHUB_BEARER_TOKEN environment variable.
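
As an aside, both flags accept an allowlist if you want the tighter version. A sketch (note that the dotenv module also probes for companion files like .env.defaults, so that's allowed here too):

# Scoped permissions: read only the dotenv files, expose only the one variable
deno run --allow-read=.env,.env.defaults --allow-env=GITHUB_BEARER_TOKEN ./index.ts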

Let's run the command with the allow-read and allow-env flags.

deno run --allow-read --allow-env ./index.ts

If the bearer token gets printed, we've got the .env file created correctly and can proceed to the next step.

Let's Get Dynamic

Now that we have the bearer token, we can work on calling the GitHub Organization endpoint to retrieve the members.

Since this is GitHub related, we should create a new file, github.ts, to host our functions and types.

Adding axiod

In the github.ts file, we're going to use axiod for communication. It's similar to axios in Node and is nicer to work with than the built-in fetch API.

Let's go ahead and bring in the import.

github.ts
import axiod from "https://deno.land/x/axiod@0.26.2/mod.ts";

Calling the Organization Endpoint

With axiod pulled in, let's write the function to interact with the GitHub API.

github.ts
// Bringing in the axiod library
import axiod from "https://deno.land/x/axiod@0.26.2/mod.ts";

async function getMembersOfOrganization(orgName: string): Promise<any[]> {
  const url = `https://api.github.com/orgs/${orgName}/members`;
  // Necessary headers are found on the API docs
  const headers = {
    Accept: "application/vnd.github+json",
    Authorization: `Bearer ${Deno.env.get("GITHUB_BEARER_TOKEN")}`,
    "X-GitHub-Api-Version": "2022-11-28",
  };

  try {
    const resp = await axiod.get<any[]>(url, {
      headers: headers,
    });
    return resp.data;
  } catch (error) {
    // Response was received, but non-2xx status code
    if (error.response) {
      return Promise.reject(
        `Failed to get members: ${error.response.status}, ${error.response.statusText}`
      );
    } else {
      // Response wasn't received
      return Promise.reject(
        "Failed for non status reason " + JSON.stringify(error)
      );
    }
  }
}

To prove this is working, we can call this function in the index.ts file and verify that we're getting a response.

index.ts
import { config as loadEnv } from "https://deno.land/x/dotenv@v3.2.2/mod.ts";
import { getMembersOfOrganization } from "./github.ts";
import { Pair, createPairsFrom, shuffle } from "./utility.ts";

await loadEnv({ export: true });

const membersOfOrganization = await getMembersOfOrganization("JusticeLeague");
console.log(JSON.stringify(membersOfOrganization));
// rest of the file

Now let's rerun the script.

deno run --allow-read --allow-env ./index.ts
Deno requests net access to "api.github.com"
- Requested by `fetch` API.
- Run again with --allow-net to bypass this prompt.

Ah! Our script is now doing something new (making network calls), so we'll need to allow that permission by using the --allow-net flag.

deno run --allow-read --allow-env --allow-net ./index.ts

If everything has worked, you should see a bunch of JSON printed to the screen. Success!

Creating the Response Type

At this point, we're making the call, but we're using a pesky any for the response, which works but doesn't tell us what properties we have to work with.

Looking at the response schema, it seems the main field we need is login. So let's go ahead and create a type that includes that field.

github.ts
type GetOrganizationMemberResponse = {
  login: string;
};

async function getMembersOfOrganization(
  orgName: string
): Promise<GetOrganizationMemberResponse[]> {
  //code
  const resp = await axiod.get<GetOrganizationMemberResponse[]>(url, {
    headers: headers,
  });
  // rest of the code
}

We can rerun our code and verify that everything is still working, but now with better type safety.

Cleaning Up

Now that we have this function written, we can work to integrate it with our index.ts script.

index.ts
import { config as loadEnv } from "https://deno.land/x/dotenv@v3.2.2/mod.ts";
import { getMembersOfOrganization } from "./github.ts";
import { Pair, createPairsFrom, shuffle } from "./utility.ts";

await loadEnv({ export: true });

const names = await getMembersOfOrganization("JusticeLeague");
const pairs = createPairsFrom(shuffle(names));
const message = createMessage(pairs);
console.log(message);

So far, so good. The only change we had to make was to replace the hardcoded array of names with the call to getMembersOfOrganization.

Not an issue, right?

Hmmm, what's up with this? createMessage has a type error

It looks like createMessage is expecting Pair<string>[], but is receiving Pair<GetOrganizationMemberResponse>[].

To solve this problem, we'll modify createMessage to work with GetOrganizationMemberResponse.

index.ts
// Need to update the input to be Pair<GetOrganizationMemberResponse>
function createMessage(pairs: Pair<GetOrganizationMemberResponse>[]): string {
  // Need to update mapper function to get the login property
  const mapper = (p: Pair<GetOrganizationMemberResponse>): string =>
    `${p.first.login} meets with ${p.second.login}${
      p.third ? ` and ${p.third.login}` : ""
    }`;

  return pairs.map(mapper).join("\n");
}

With this last change, we run the script and verify that we're getting the correct output, huzzah!

Current Status

Congratulations! We now have a script that dynamically pulls heroes from the Justice League organization, instead of us always needing to check whether Green Lantern is somewhere else or another member of Flash's Speed Force is here for the moment.

A working version of the code can be found on GitHub.