In a previous post, I argued that to help your team be effective, you need to have up-to-date docs, and to have this happen, you need some way of flagging stale documentation.
In this series, I show you how you can automate this process by creating a seed script, a check script, and then automating the check script. In today's post, let's develop the check script.
Our script is going to search over our repository; however, I don't want the script to be responsible for cloning the repo and cleaning up those files afterward. Since the long-term plan is for the script to run through GitHub Actions, we can have the pipeline be responsible for cloning the repo.
This means our script has to be told where to search, and since it can't take manual input, we'll use an environment variable to point it at the right directory.
First, let's create a .env file that will store the path of the repository:
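The contents are a single key/value pair; the path below is a placeholder, so point it at wherever you've cloned the docs repository:

REPO_DIRECTORY=/path/to/your/docs-repo

Next, let's create an index.ts script that loads the .env file and reads that value: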
import { load } from "https://deno.land/std@0.195.0/dotenv/mod.ts";

await load({ export: true }); // this loads the .env file into our environment

const directory = Deno.env.get("REPO_DIRECTORY");

if (!directory) {
  console.log("Couldn't retrieve the REPO_DIRECTORY value from environment.");
  Deno.exit();
}

console.log(directory);
If we run our Deno script with deno run --allow-read --allow-env ./index.ts, we should see the environment variable logged.
With the environment sorted, let's create a utility.ts file with a function that recursively finds all the markdown files in a directory:
export function getMarkdownFilesFromDirectory(directory: string): string[] {
  // let's get all the files from the directory
  const allEntries: Deno.DirEntry[] = Array.from(Deno.readDirSync(directory));

  // Get all the markdown files in the current directory
  const markdownFiles = allEntries.filter(
    (x) => x.isFile && x.name.endsWith(".md"),
  );

  // Find all the folders in the directory
  const folders = allEntries.filter(
    (x) => x.isDirectory && !x.name.startsWith("."),
  );

  // Recurse into each folder and get their markdown files
  const subFiles = folders.flatMap((x) =>
    getMarkdownFilesFromDirectory(`${directory}/${x.name}`),
  );

  // Return the markdown files in the current directory and the markdown files in the children directories
  return markdownFiles.map((x) => `${directory}/${x.name}`).concat(subFiles);
}
With this function in place, we can update our index.ts script to be the following:
import { load } from "https://deno.land/std@0.195.0/dotenv/mod.ts";
import { getMarkdownFilesFromDirectory } from "./utility.ts";

await load({ export: true });

const directory = Deno.env.get("REPO_DIRECTORY");

if (!directory) {
  console.log("Couldn't retrieve the REPO_DIRECTORY value from environment.");
  Deno.exit();
}

const files = getMarkdownFilesFromDirectory(directory);
console.log(files);

Running the script with deno run --allow-read --allow-env ./index.ts should print a list of all the markdown files to the screen.
Now that we have each file, we need a way to get their last line of text.
Using Deno.readTextFile (or its synchronous counterpart, Deno.readTextFileSync), we can get the file contents. From there, we can split the contents into lines and find the last occurrence of Last Reviewed On.
Let's add a new function, getLastReviewedLine, to the utility.ts file.
export function getLastReviewedLine(fullPath: string): string {
  // Get the contents of the file, removing extra whitespace and blank lines
  const fileContent = Deno.readTextFileSync(fullPath).trim();

  // Convert the block of text to an array of strings
  const lines = fileContent.split("\n");

  // Find the last line that starts with Last Reviewed On
  const lastReviewed = lines.findLast((x) => x.startsWith("Last Reviewed On"));

  // If we found it, return the line, otherwise, return an empty string
  return lastReviewed ?? "";
}
Let's try this function out by modifying our index.ts file to print the files that have a Last Reviewed On line.

import { load } from "https://deno.land/std@0.195.0/dotenv/mod.ts";
import {
  getLastReviewedLine,
  getMarkdownFilesFromDirectory,
} from "./utility.ts";

await load({ export: true });

const directory = Deno.env.get("REPO_DIRECTORY");

if (!directory) {
  console.log("Couldn't retrieve the REPO_DIRECTORY value from environment.");
  Deno.exit();
}

const files = getMarkdownFilesFromDirectory(directory);

files
  .filter((x) => getLastReviewedLine(x) !== "")
  .forEach((s) => console.log(s)); // print them to the screen
At this point, we can get the "Last Reviewed On" line from a file, but we've got some more business rules to implement.
If there's a Last Reviewed On line, but there's no date, then the file needs to be reviewed.
If there's a Last Reviewed On line, but the date is invalid, then the file needs to be reviewed.
If there's a Last Reviewed On line, and the date is more than 90 days old, then the file needs to be reviewed.
Otherwise, the file doesn't need review.
We know from our filter logic that we're only going to be looking at lines that start with "Last Reviewed On", so now we need to extract the date.
Since we assume the line starts with Last Reviewed On:, we can strip that prefix off to get the rest of the line. We're also going to assume that the date will be in YYYY/MM/DD format.

export function doesFileNeedReview(line: string): boolean {
  if (!line.startsWith("Last Reviewed On: ")) {
    return true;
  }

  const date = line.replace("Last Reviewed On: ", "").trim();
  const parsedDate = new Date(Date.parse(date));

  // an invalid date parses to NaN, so check for that instead of truthiness
  if (isNaN(parsedDate.getTime())) {
    return true;
  }

  // We could use something like DayJS, but to keep libraries to a minimum, we can do the following
  const cutOffDate = new Date(new Date().setDate(new Date().getDate() - 90));

  return parsedDate < cutOffDate;
}
Let's update our index.ts file to use the new function.
import { load } from "https://deno.land/std@0.195.0/dotenv/mod.ts";
import {
  doesFileNeedReview,
  getLastReviewedLine,
  getMarkdownFilesFromDirectory,
} from "./utility.ts";

await load({ export: true });

const directory = Deno.env.get("REPO_DIRECTORY");

if (!directory) {
  console.log("Couldn't retrieve the REPO_DIRECTORY value from environment.");
  Deno.exit();
}

getMarkdownFilesFromDirectory(directory)
  .filter((file) => getLastReviewedLine(file) !== "")
  .filter((file) => doesFileNeedReview(getLastReviewedLine(file)))
  .forEach((file) => console.log(file)); // print them to the screen
And just like that, we're able to print stale docs to the screen. At this point, you could create a scheduled batch job and start using this script.
However, if you wanted to share this with others (and have this run not on your box), then stay tuned for the final post in this series where we put this into a GitHub Action and post a message to Slack!
In a previous article, I argued that to help your team be effective, you need to have up-to-date docs, and to have this happen, you need some way of flagging stale documentation.
This process lends itself to being easily automated, so in this series of posts, we'll build out the necessary scripts to check for docs that haven't been reviewed in the last 90 days.
All code used in this post can be found on my GitHub.
For this script to work, we need to be able to do three things:
Determine the last commit date for a file.
Add text to the end of the file.
Get a list of files in a directory.
To determine the last commit date for a file, we can leverage git and its log command (more on this in a moment). Since we're mainly doing file manipulation, we could use Deno here, but it makes much more sense to me to use something like bash or PowerShell.
To make this automation work, we need to have a date for the Last Reviewed On footer. You don't want to set all the files to the same date because all the files will come up for review in one big batch.
So, you're going to want to stagger the dates. You can do this by generating random dates, but honestly, getting the last commit date should be "good" enough.
file=YourFileHere.md
commitDate=$(git log -n 1 --pretty=format:%aI -- $file)

## formatting date to YYYY/MM/DD
formattedDate=$(date -d "$commitDate" "+%Y/%m/%d")

echo $formattedDate
Assuming the file has been checked into Git, we should get the date back in a YYYY/MM/DD format. Success!
Now that we have a way to get the date, we need to add some text to the end of the file. Since we're working in markdown, we can use --- to denote a footer and then place our text.
Since we're going to be appending multiple lines of text, we can use the cat command with here-docs.
file=YourFileHere.md
## Note the blank lines, this is to make sure that the footer is separated from the text in the file
## Note: The closing EOF has to be on its own line with no whitespace or other characters in front of it.
cat << EOF >> $file


---
Last Reviewed On: 2023/08/12
EOF
After running this script, we'll see that the file now ends with the blank lines and our new footer.
file=YourFileHere.md
commitDate=$(git log -n 1 --pretty=format:%aI -- $file)

## formatting date to YYYY/MM/DD
formattedDate=$(date -d "$commitDate" "+%Y/%m/%d")

## Note the blank lines, this is to make sure that the footer is separated from the text in the file
## Note: The closing EOF has to be on its own line with no whitespace or other characters in front of it.
cat << EOF >> $file


---
Last Reviewed On: $formattedDate
EOF
Nice! Given a file, we can figure out its last commit date and append it to the file. Let's make this more powerful by not having to hardcode a file name.
At this point, we can update a file, but the file name is hardcoded. We're going to have a lot of docs to review, and we don't want to do this manually, so let's figure out how we can get all the markdown files in a directory.
For this exercise, we can use the find command. In our case, we need to find all the files with a .md extension, no matter what directory they're in.
We're going to need to process each of these files, so some type of iteration would be helpful. Doing some digging, Bash supports a for loop, so let's use that.
directory=DirectoryPathGoesHere
for file in `find $directory -name "*.md" -type f`; do
  commitDate=$(git log -n 1 --pretty=format:%aI -- $file)

  # formatting date to YYYY/MM/DD
  formattedDate=$(date -d "$commitDate" "+%Y/%m/%d")

  # Note the blank lines, this is to make sure that the footer is separated from the text in the file
  # Note: The closing EOF has to be on its own line with no whitespace or other characters in front of it.
  cat << EOF >> $file


---
Last Reviewed On: $formattedDate
EOF
done
This script works, and we could ship it; however, it's a bit rough.
For example, the script assumes that it's in the same directory as your git repository. It also assumes that your repository is up-to-date and that it's safe to make changes on the current branch.
Let's make our script a bit more durable by making the following tweaks:
## see https://unix.stackexchange.com/questions/30091/fix-or-alternative-for-mktemp-in-os-x#answer-84980
## for why tmpDir is being created this way
docRepo="RepoUrlGoesHere"
tmpDir=$(mktemp -d 2>/dev/null || mktemp -d -t 'docSeed')

cd $tmpDir

echo "Cloning from $docRepo"

## Note the . here, this allows us to clone to the temp folder and not to a new folder of repo name
git clone "$docRepo" . &> /dev/null
## ... code to clone repository

git switch -c 'adding-seed-dates'

## ... code to make file changes

git add --all
git commit -m "Adding seed dates"
git push -u origin adding-seed-dates
In this post, we wrote a bash script to clone our docs and add a new footer to every page with the file's last commit date. In the next post, we'll build the script that checks for stale files.
Context: New to Engineering Manager, managing 5 people and working in a 5 person team. My managees are not 100% on my team.
Details: OK, so I've quickly learnt how to spot mistakes and follow up on improvements for both teams (the one I manage and the one I work on). I'm confident taking action and communicating on all of that. But there is the other side -> congratulating and following up on behavior/actions.
Example: The current team has low velocity. They recently finished the specs and review. It didn't happen for months (always late on that), but it's their "normal" velocity. I congratulated them, but I'm wondering if I should have since they "just did their job".
How do you congratulate your coworkers? Specifically
Welcome to the final installment of our Deno series, where we build a script that pairs up people for coffee.
In the last post, we added the ability to post messages into a Slack channel instead of copying from a console window.
The current major problem is that we have to remember to run the script. We could always set up a cron job or scheduled task, however, what happens when we change machines? What if our computer stops working? What if someone else changes the script, how will we remember to get the latest and run it?
Welcome to the third installment of our Deno series, where we build a script that pairs up people for coffee.
In the last post, we updated the script to dynamically pull members of the Justice League from GitHub instead of using a hardcoded list.
Like any good project, this approach works, but now the major problem is that we have to run the script, copy the output, and post it into our chat tool so everyone knows the schedule.
It'd be better if we could update our script to post this message instead. In this example, we're going to use Slack and their incoming webhook, but you could easily tweak this approach to work with other tools like Microsoft Teams or Discord.
To create the incoming webhook, we'll follow the instructions in Slack's docs, ensuring that we're hooking it up to the right channel.
After following the steps, you should see something like the following:
We can test that things are working correctly by running the curl command provided. If the message Hello World appears in the channel, congrats, you've got the incoming webhook created!
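The webhook URL is effectively a secret, so let's store it in our .env file under the SLACK_WEBHOOK name that the code below reads (the URL here is a placeholder for whatever Slack generated for you):

SLACK_WEBHOOK=https://hooks.slack.com/services/your/webhook/path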
With this secret added, we can write a new function, sendMessage, that'll make the POST call to Slack. Since this is a new integration, we'll add a new file, slack.ts, to put it in.
// Using axiod for the web connection
import axiod from "https://deno.land/x/axiod@0.26.2/mod.ts";

// function to send a message to the webhook
async function sendMessage(message: string): Promise<void> {
  // Get the webhookUrl from our environment
  const webhookUrl = Deno.env.get("SLACK_WEBHOOK")!;

  try {
    // Send the POST request
    await axiod.post(webhookUrl, message, {
      headers: {
        "Content-Type": "application/json",
      },
    });
  } catch (error) {
    // Error handling
    if (error.response) {
      return Promise.reject(
        `Failed to post message: ${error.response.status}, ${error.response.statusText}`,
      );
    }
    return Promise.reject(
      "Failed for non status reason " + JSON.stringify(error),
    );
  }
}

export { sendMessage };
With sendMessage done, let's update index.ts to use this new functionality.
import { load } from "https://deno.land/std@0.195.0/dotenv/mod.ts";
import {
  GetOrganizationMemberResponse,
  getMembersOfOrganization,
} from "./github.ts";
import { sendMessage } from "./slack.ts";
import { Pair, createPairsFrom, shuffle } from "./utility.ts";

await load({ export: true });

// Replace this with your actual organization name
const organizationName = "JusticeLeague";

const names = await getMembersOfOrganization(organizationName);
const pairs = createPairsFrom(shuffle(names));
const message = createMessage(pairs);

// Slack expects the payload to be an object of text, so we're doing that here for now
await sendMessage(JSON.stringify({ text: message }));

function createMessage(pairs: Pair<GetOrganizationMemberResponse>[]): string {
  const mapper = (p: Pair<GetOrganizationMemberResponse>): string =>
    `${p.first.login} meets with ${p.second.login}${
      p.third ? ` and ${p.third.login}` : ""
    }`;
  return pairs.map(mapper).join("\n");
}
And if we run the above, we'll see the following message get sent to Slack.
Nice! We could leave it here, but we could make the message prettier (having an unordered list and italicizing names), so let's work on that next.
So far, we could leave the messaging as is; however, it's a bit muddled. To help it pop, let's make the following changes.
Italicize the names
Start each pair with a bullet point
Since Slack supports basic Markdown in the messages, we can use the _ for italicizing and - for the bullet points. So let's modify the createMessage function to add this formatting.
function createMessage(pairs: Pair<GetOrganizationMemberResponse>[]): string {
  // Let's wrap each name with the '_' character
  const formatName = (s: string) => `_${s}_`;

  const mapper = (p: Pair<GetOrganizationMemberResponse>): string =>
    // and start each pair with '-'
    `- ${formatName(p.first.login)} meets with ${formatName(p.second.login)}${
      p.third ? ` and ${formatName(p.third.login)}` : ""
    }`;

  return pairs.map(mapper).join("\n");
}
By making this small change, we now see the following message:
The messaging is better, but we're still missing some clarity. For example, what date is this for? Or what's the purpose of the message? Looking through these docs, it seems like we could add different text blocks (like titles). So let's see what this could look like.
One design approach is to encapsulate the complex logic for dealing with Slack and only expose a "common-sense" API for consumers. In this regard, I think using a Facade pattern would make sense.
We want to expose the ability to set a title and to set a message through one or more lines of text. Here's what that code would look like:
// This class allows a user to set a title and lines and then use the
// 'build' method to create the payload to interact with Slack
class MessageFacade {
  // setting some default values
  private header: string;
  private lines: string[];

  constructor() {
    this.header = "";
    this.lines = [];
  }

  // I like making these types of classes fluent
  // so that it returns itself.
  public setTitle(title: string): MessageFacade {
    this.header = title;
    return this;
  }

  public addLineToMessage(line: string | string[]): MessageFacade {
    if (Array.isArray(line)) {
      this.lines.push(...line);
    } else {
      this.lines.push(line);
    }
    return this;
  }

  // Here's where we take the content that the user provided
  // and convert it to the JSON shape that Slack expects
  public build(): string {
    // create the header block if set, otherwise null
    const headerBlock = this.header
      ? {
          type: "header",
          text: { type: "plain_text", text: this.header, emoji: true },
        }
      : null;

    // convert each line to its own section
    const lineBlocks = this.lines.map((line) => ({
      type: "section",
      text: { type: "mrkdwn", text: line },
    }));

    return JSON.stringify({
      // take all blocks that have a value and set it here
      blocks: [headerBlock, ...lineBlocks].filter(Boolean),
    });
  }
}
With the facade in place, let's look at implementing this in index.ts
// ... code to get the pairs and formatted lines

// using the facade with the fluent syntax
const message = new MessageFacade()
  .setTitle(`☕ Random Coffee ☕`)
  .addLineToMessage(formattedPairs)
  .build();

await sendMessage(message);
When we run the script now, we get the following message:
In this post, we changed our script from posting its Random Coffee message to the console window to instead posting it into a Slack channel using an Incoming Webhook. By making this change, we were able to remove a manual step (e.g., us copying the message into the channel), and we were able to use some cool emojis and better formatting.
In the final post, we'll take this one step further by automating the script using scheduled jobs via GitHub Actions.
As always, you can find a full working version of this bot on my GitHub.
As a leader, I'm always looking for ways to help my team to be more efficient. To me, an efficient team is self-sufficient, able to find the information needed to solve their problems.
I've found that having up-to-date documentation is critical for a team because it scales out knowledge asynchronously, removing the need for manual knowledge transfers.
For example, my team has a wiki that contains information for onboarding into our space, how to complete certain processes (requesting time off, resetting a password), how to run our Agile activities, and our support guidebook. At any point, if someone on the team doesn't know how to do something, they can consult the wiki and find the necessary information.
I enjoy up-to-date documentation, but the main problem with docs is that they capture the state of the world when they were written; they don't react to changes. If the process for resetting your password changes, the documentation doesn't auto-update. So unless you're spending time reviewing the docs, they'll grow stale and become worthless, or even worse, mislead others into doing the wrong things.
A good mental model for documentation is to think of them as a garden. When planted, it looks great, and everyone enjoys the environment. Over time, weeds will grow, and plants will become overgrown, causing the garden to be less attractive. Eventually, people will stop visiting, and the garden will go into disrepair. To prevent this, we must take care of the garden, removing the weeds and trimming the plants.
I started my career in healthcare, and one of my first jobs was writing software for a medical diagnostic device. We were ISO 9001 certified, and the device was considered a Class II by the FDA. Long story short, this meant that we had to provide documentation for our device and software and also show that we were keeping things up to date.
To comply, we would find docs that hadn't been updated in a specific time period (like 90 days) and review them. If everything checked out, we'd bump up the review date. Otherwise, we'd make the necessary changes and revalidate the document.
At the time, all of our files were in Word, so it wasn't the easiest to search them (I recall that we had Outlook reminders, but this was many moons ago).
By baking these reviews into our process as sprint work items, we made the work more visible, which in turn gave us a better idea of the team's capacity for that sprint.
Thankfully, we have better technology than Word for sharing information, so how can we take this approach and bring it up to the modern day?
First, I think that having your docs in source control is a great idea. If you're using tools like Git, you already have a way of leaving comments and keeping track of approvals through pull requests.
To make the most of Git, you should keep your docs in plaintext since it's easy to see the differences. I enjoy using Markdown, and tools like MkDocs make this workflow possible.
With this figured out, our next step is to know when the file was last reviewed. We can do that by adding a new line to the bottom of each file, Last Reviewed On: YYYY/MM/DD. To come up with the initial date, we could use the last time the file was modified (thanks git log!).
At this point, we have a way to see the last time a file was reviewed. The next step is to write a script that can find files that haven't been reviewed in the last 90 days. At a high level, we'd do the following:
Get the latest for the doc repository.
Get all the markdown files for the repository.
Get the last line of the file.
If the line doesn't start with Last Reviewed On:, we flag it for review as it's never been reviewed.
If the line has a date, but it's older than 90 days, we flag it for review as it might be stale.
Print all flagged files to the screen.
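As a rough sketch, the core of that check could look something like the following. The docsDirectory variable and the helpers for listing markdown files and reading the footer line are assumptions here; the real script gets built out in its own post.

// assumed: docsDirectory points at the local clone of the docs repo,
// and getMarkdownFilesFromDirectory / getLastReviewedLine are helpers we'd write
const cutOffDate = new Date(new Date().setDate(new Date().getDate() - 90));

for (const file of getMarkdownFilesFromDirectory(docsDirectory)) {
  const lastLine = getLastReviewedLine(file); // e.g. "Last Reviewed On: 2023/08/12"

  if (!lastLine.startsWith("Last Reviewed On:")) {
    console.log(`${file} has never been reviewed`);
    continue;
  }

  const reviewedOn = new Date(
    Date.parse(lastLine.replace("Last Reviewed On:", "").trim()),
  );

  if (isNaN(reviewedOn.getTime()) || reviewedOn < cutOffDate) {
    console.log(`${file} needs review`);
  }
}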
With the script created, we could manually run it on Mondays. But we're technical, right? Why not create a scheduled task to execute the script instead? This removes a manual step and gives us visibility into which docs need review.
When scaling your knowledge out, having great documentation is necessary as it allows your team to self-serve and work in a more asynchronous manner. The main problem with documentation is that it captures the state of the world when the docs were written, but they don't automatically update when the world changes.
Therefore, we need to have some process to flag and review stale docs. To ensure it gets done, we provide visibility by creating work items and committing to them during the sprint.
As a quick refresher, here's where the script left off at the end of the last post:

const names = [
  "Batman",
  "Superman",
  "Green Lantern",
  "Wonder Woman",
  "Static Shock", // one of my favorite DC heroes!
  "The Flash",
  "Aquaman",
  "Martian Manhunter",
];

const pairs = createPairsFrom(shuffle(names));
const message = createMessage(pairs);
console.log(message);

function shuffle<T>(items: T[]): T[] {
  const result = [...items];
  for (let i = result.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [result[i], result[j]] = [result[j], result[i]];
  }
  return result;
}

type Pair<T> = { first: T; second: T; third?: T };

function createPairsFrom<T>(items: T[]): Pair<T>[] {
  if (items.length < 2) {
    return [];
  }

  const results: Pair<T>[] = [];
  for (let i = 0; i <= items.length - 2; i += 2) {
    const pair: Pair<T> = { first: items[i], second: items[i + 1] };
    results.push(pair);
  }

  if (items.length % 2 === 1) {
    results[results.length - 1].third = items[items.length - 1];
  }

  return results;
}

function createMessage(pairs: Pair<string>[]): string {
  const mapper = (p: Pair<string>) =>
    `${p.first} meets with ${p.second}${p.third ? ` and ${p.third}` : ""}`;
  return pairs.map(mapper).join("\n");
}
Even though this approach works, the major problem is that every time there's a member change in the Justice League (which seems to happen more often than not), we have to go back and update the list manually.
It'd be better if we could get this list dynamically instead. Given that the League are great developers, they have their own GitHub organization. Let's work on integrating with GitHub's API to get the list of names.
To get the list of names from GitHub, we'll need to do the following.
Figure out which GitHub endpoint will give us the members of the League. This, in turn, also tells us what permissions we need for our API token.
Create that token and store it as a secret our script can read from a .env file.
Once we have the secret being read, create a function to retrieve the members of the League.
Do some miscellaneous refactoring of the main script to handle a function returning complex types instead of strings.
Before we start, we should refactor our current file. It works, but we have a mix of utility functions (shuffle and createPairsFrom) combined with presentation functions (createMessage). Let's go ahead and move shuffle and createPairsFrom to their own module.
import { Pair, createPairsFrom, shuffle } from "./utility.ts";

const names = [
  "Batman",
  "Superman",
  "Green Lantern",
  "Wonder Woman",
  "Static Shock", // one of my favorite DC heroes!
  "The Flash",
  "Aquaman",
  "Martian Manhunter",
];

const pairs = createPairsFrom(shuffle(names));
const message = createMessage(pairs);
console.log(message);

function createMessage(pairs: Pair<string>[]): string {
  const mapper = (p: Pair<string>) =>
    `${p.first} meets with ${p.second}${p.third ? ` and ${p.third}` : ""}`;
  return pairs.map(mapper).join("\n");
}
Looking at GitHub's API, there are two likely candidates: getting the members of an Organization or getting the members of a Team. What's the difference between the two? In GitHub parlance, an Organization is an overarching entity that consists of members who, in turn, can be part of multiple teams.
Using the Justice League as an example, it's an organization that contains Batman, and Batman can be part of the Justice League Founding Team and a member of the Batfamily Team.
Since we want to pair everyone up in the Justice League, we'll use the Get members of an Organization approach.
To interact with the endpoint, we'll need to create an API token for GitHub. Looking over the docs, our token needs to have the read:org scope. We can create this token by following the instructions here about creating a Personal Access Token (PAT).
Once we have the token, we can invoke the endpoint with cURL or Postman to verify that we can communicate with the endpoint correctly.
After we've verified, we'll need a way to get this API token into our script. Given that this is sensitive information, we absolutely should NOT check this into the source code.
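Instead, we can put the token in a .env file. The variable name below matches what the loading code reads (GITHUB_BEARER_TOKEN); the value is a placeholder for your actual PAT:

GITHUB_BEARER_TOKEN=<your-pat-goes-here>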
Our problem now is that if we check git status, we'll see this file listed as a change. We don't want to check this in, so let's add a .gitignore file.
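For this project, the .gitignore only needs a single entry:

.env

With the secret safely ignored, we can update our script to load the .env file and read the token: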
import { config as loadEnv } from "https://deno.land/x/dotenv@v3.2.2/mod.ts";
// other imports

// This loads the .env file and adds them to the environment variable list
await loadEnv({ export: true });

// Deno.env.get("name") retrieves the value from an environment variable named "name"
console.log(Deno.env.get("GITHUB_BEARER_TOKEN"));

// remaining code
When we run the script now with deno run, we get an interesting prompt:
Deno requests read access to ".env".
- Requested by `Deno.readFileSync()` API.
- Run again with --allow-read to bypass this prompt
- Allow?
This is one of the coolest parts about Deno; it has a security system that prevents scripts from doing something that you hadn't intended through its Permissions framework.
The permissions can be tuned (e.g., you're only allowed to read from the .env file), or you can give blanket permissions. In our case, two resources are being used: the ability to read the .env file and the ability to read the GITHUB_BEARER_TOKEN environment variable.
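For example, environment access could be scoped down to just the variable we need; this is a sketch (Deno also supports restricting --allow-read to specific paths):

deno run --allow-read --allow-env=GITHUB_BEARER_TOKEN ./index.ts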
Let's run the command with the allow-read and allow-env flags.
deno run --allow-read --allow-env ./index.ts
If the bearer token gets printed, we've got the .env file created correctly and can proceed to the next step.
// Bringing in the axiod library
import axiod from "https://deno.land/x/axiod@0.26.2/mod.ts";

async function getMembersOfOrganization(orgName: string): Promise<any[]> {
  const url = `https://api.github.com/orgs/${orgName}/members`;

  // Necessary headers are found on the API docs
  const headers = {
    Accept: "application/vnd.github+json",
    Authorization: `Bearer ${Deno.env.get("GITHUB_BEARER_TOKEN")}`,
    "X-GitHub-Api-Version": "2022-11-28",
  };

  try {
    const resp = await axiod.get<any[]>(url, {
      headers: headers,
    });
    return resp.data;
  } catch (error) {
    // Response was received, but non-2xx status code
    if (error.response) {
      return Promise.reject(
        `Failed to get members: ${error.response.status}, ${error.response.statusText}`,
      );
    } else {
      // Response wasn't received
      return Promise.reject(
        "Failed for non status reason " + JSON.stringify(error),
      );
    }
  }
}

export { getMembersOfOrganization };
To prove this is working, we can call this function in the index.ts file and verify that we're getting a response.
import { config as loadEnv } from "https://deno.land/x/dotenv@v3.2.2/mod.ts";
import { getMembersOfOrganization } from "./github.ts";
import { Pair, createPairsFrom, shuffle } from "./utility.ts";

await loadEnv({ export: true });

const membersOfOrganization = await getMembersOfOrganization("JusticeLeague");
console.log(JSON.stringify(membersOfOrganization));

// rest of the file
At this point, we're making the call, but we're using a pesky any for the response, which works, but it doesn't help us with what properties we have to work with.
Looking at the response schema, it seems the main field we need is login. So let's go ahead and create a type that includes that field.
type GetOrganizationMemberResponse = {
  login: string;
};

async function getMembersOfOrganization(
  orgName: string,
): Promise<GetOrganizationMemberResponse[]> {
  // code

  const resp = await axiod.get<GetOrganizationMemberResponse[]>(url, {
    headers: headers,
  });

  // rest of the code
}
We can rerun our code and verify that everything is still working, but now with better type safety.
The last piece is createMessage in index.ts, which now receives GetOrganizationMemberResponse values instead of strings, so we need to update the types and read the login property.

// Need to update the input to be Pair<GetOrganizationMemberResponse>
function createMessage(pairs: Pair<GetOrganizationMemberResponse>[]): string {
  // Need to update the mapper function to get the login property
  const mapper = (p: Pair<GetOrganizationMemberResponse>): string =>
    `${p.first.login} meets with ${p.second.login}${
      p.third ? ` and ${p.third.login}` : ""
    }`;
  return pairs.map(mapper).join("\n");
}
With this last change, we run the script and verify that we're getting the correct output, huzzah!
Congratulations! We now have a script that is dynamically pulling in heroes from the Justice League organization instead of always needing to see if Green Lantern is somewhere else or if another member of Flash's SpeedForce is here for the moment.
A working version of the code can be found on GitHub.
Welcome to Cameron's Coaching Corner, where we answer questions from readers about leadership, career, and software engineering.
In this week's post, we look at how Alan can help their engineer figure out what they want to be when they grow up.
Hey Cameron!
I have a front-end engineer who's sharp, but they're not sure what their career growth looks like. I get the sense that they're interested in other roles outside of software development. How do you navigate this and help them grow?
In a previous post, I mentioned my strategy of building relationships through one-on-ones. One approach from that post was leveraging a Slack plugin, Random Coffee, to automate scheduling these impromptu conversations.
I wanted to leverage the same idea at my current company; however, we don't use Slack, so I can't just use that bot.
Thinking more about it, the system wouldn't be too complicated as it has three moving parts:
Get list of people
Create random pairs
Post message
To make it even easier, I could hardcode the list of people, and instead of posting the message to our messaging application, I could print it to the screen.
With these two choices made, I would need to build something that can shuffle a list and create pairs.
Even though we're hardcoding the list of names and printing a message to the screen, I know that the future state is to get the list of names dynamically, most likely through an API call. In addition, most messaging systems support using webhooks to create a message, so that would be the future state as well.
With these restrictions in mind, I know I need to use a language that is good at making HTTP calls. I also want this automation to be something that other people outside of me can maintain, so if I can use a language that we typically use, that makes this more approachable.
In my case, TypeScript fit the bill as we heavily use it in my space, the docs are solid, and it's straightforward to make HTTP calls. I'm also a fan of functional programming, which TypeScript supports nicely.
My major hurdle at this point is that I'd like to execute this single file of TypeScript, and the only way I knew how to do that was by spinning up a Node application and using something like ts-node to execute the file.
Talking to a colleague, they recommended I check out Deno as a possible solution. The more I learned about it, the more I thought it would fit perfectly. It supports TypeScript out of the box (no configuration needed), and a file can be run with deno run, no other tools needed.
This project is simple enough that if Deno wasn't a good fit, I could always go back to Node.
With this figured out, we're going to create a Deno application using TypeScript as our language of choice.
Once Deno has been installed and configured, we can create a new directory called deno-coffee, add a coffee.ts file, and start sketching out the script:
const names = [
  "Batman",
  "Superman",
  "Green Lantern",
  "Wonder Woman",
  "Static Shock", // one of my favorite DC heroes!
  "The Flash",
  "Aquaman",
  "Martian Manhunter",
];

const pairs = createPairsFrom(shuffle(names));
const message = createMessage(pairs);
console.log(message);
This code won't compile as we haven't defined what shuffle, createPairsFrom, or createMessage does, but we can tackle these one at a time.
Since we don't want the same people meeting up every time, we need a way to shuffle the list of names. We could import a library to do this, but what's the fun in that?
In this case, we're going to implement the Fisher-Yates Shuffle (sounds like a dance move).
function shuffle(items: string[]): string[] {
  // create a copy so we don't mutate the original
  const result = [...items];
  for (let i = result.length - 1; i > 0; i--) {
    // pick an integer between 0 and i (inclusive)
    const j = Math.floor(Math.random() * (i + 1));
    // short-hand for swapping two elements around
    [result[i], result[j]] = [result[j], result[i]];
  }
  return result;
}

const words = ["apples", "bananas", "cantaloupes"];
console.log(shuffle(words)); // [ "bananas", "cantaloupes", "apples" ]
Excellent, we have a way to shuffle. One refactor we can make is to have shuffle be generic as we don't care what array element types are, as long as we have an array.
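Here's what that generic version could look like (this is the same shape the finished script ends up using):

function shuffle<T>(items: T[]): T[] {
  // create a copy so we don't mutate the original
  const result = [...items];
  for (let i = result.length - 1; i > 0; i--) {
    // pick a random index between 0 and i (inclusive)
    const j = Math.floor(Math.random() * (i + 1));
    // swap the two elements
    [result[i], result[j]] = [result[j], result[i]];
  }
  return result;
}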
But what happens if Martian Manhunter is called away and isn't available? That would leave Aquaman without a pair to have coffee with (sad trombone noise).
In the case that we have an odd number of heroes, the last pair should instead be a triple. For example, Static Shock, Aquaman, and Martian Manhunter might all meet together.
Given that we've been using the word Pair to represent this grouping, we have a domain term we can use. This also means that createPairsFrom has the following type signature.
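Based on the implementation we'll write in a moment, that signature is:

function createPairsFrom(names: string[]): Pair[]

As for what a Pair looks like, there are a couple of ways we could model it: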
// Using an optional property
type Pair = {
  first: string;
  second: string;
  third?: string;
};

// Using Discriminated Unions
type Pair =
  | { kind: "double"; first: string; second: string }
  | { kind: "triple"; first: string; second: string; third: string };
For now, I'm thinking of going with the optional property and if we need to tweak it later, we can.
function createPairsFrom(names: string[]): Pair[] {
  // if we don't have at least two names, then there are no pairs
  if (names.length < 2) {
    return [];
  }

  const results: Pair[] = [];
  for (let i = 0; i <= names.length - 2; i += 2) {
    const pair: Pair = { first: names[i], second: names[i + 1] };
    results.push(pair);
  }

  if (names.length % 2 === 1) {
    // we have an odd length
    // Assign the left-over name to the third of the triple
    results[results.length - 1].third = names[names.length - 1];
  }

  return results;
}

// Example execution
console.log(createPairsFrom(["apples", "bananas", "cantaloupes", "dates"]));
// [{ first: "apples", second: "bananas" }, { first: "cantaloupes", second: "dates" }]
console.log(createPairsFrom(["ants", "birds", "cats"]));
// [{ first: "ants", second: "birds", third: "cats" }]
Similarly to shuffle, we can make this function generic as it doesn't matter what the array element types are, as long as we have an array to work with.
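Here's a sketch of that generic version; Pair picks up a type parameter as well, matching where the script ultimately lands:

type Pair<T> = { first: T; second: T; third?: T };

function createPairsFrom<T>(items: T[]): Pair<T>[] {
  if (items.length < 2) {
    return [];
  }

  const results: Pair<T>[] = [];
  for (let i = 0; i <= items.length - 2; i += 2) {
    results.push({ first: items[i], second: items[i + 1] });
  }

  if (items.length % 2 === 1) {
    // assign the left-over item as the third member of the last grouping
    results[results.length - 1].third = items[items.length - 1];
  }

  return results;
}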
The last function to write is createMessage, which maps each pair to a line of text.

function createMessage(pairs: Pair<string>[]): string {
  const mapper = (p: Pair<string>) =>
    `${p.first} meets with ${p.second}${p.third ? ` and ${p.third}` : ""}`;

  pairs.map(mapper); // this gives us one string per pair, but we still need to combine them
}
From here, we can join the strings together using the \n (newline) character.
function createMessage(pairs: Pair<string>[]): string {
  const mapper = (p: Pair<string>) =>
    `${p.first} meets with ${p.second}${p.third ? ` and ${p.third}` : ""}`;

  return pairs.map(mapper).join("\n");
}
deno run coffee.ts
"Superman meets with Wonder WomanBatman meets with The FlashMartian Manhunter meets with AquamanStatic Shock meets with Green Lantern"
From here, we have a working proof of concept of our idea. We could run this manually on Mondays and then post this to our messaging channel (though you might want to switch the names out). If you wanted to be super fancy, you could have this scheduled as a cron job or through Windows Task Scheduler.
The main takeaway is that we've built something we didn't have before and can continue to refine and improve. If no one likes the idea, guess what? We only had a little time invested. If it takes off, then that's great; we can spend more time making it better.
In this post, we built the first version of our Random Coffee script using TypeScript and Deno. We focused on getting our tooling working and building out the business rules for shuffling and creating the pairs.
In the next post, we'll look at making this script smarter by having it retrieve a list of names dynamically from GitHub's API!
As always, you can find a full working version of this bot on my GitHub.
Welcome to Cameron's Coaching Corner, where we answer questions from readers about leadership, career, and software engineering.
In this week's post, we look at how Chase can balance writing the perfect code and shipping something.
My question: As a young developer, I notice that sometimes I get paralyzed by options. I want to write the perfect piece of code. This helps me write good code, but usually at the cost of efficiency, especially when I'm faced with multiple good options. Sometimes I want to KNOW I'm going to write the right thing before I write it, when I may be better off with some trial and error.
Are these common problems that you see people face?
What rules of thumb or other pieces of advice do you have to avoid writing nothing instead of something as a result of seeking the ideal?
How important is planning vs trial and error ("failing fast" as they say) to good software development flow?