In a previous post, we covered how using union types in TypeScript is a great approach for domain modeling because it limits the possible values that a type can have.
For example, let's say that we're modeling a card game with a standard deck of playing cards. We could model the domain as such.
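A sketch of that model might look like this (the exact literal values are an assumption; the post keeps the author's Rank/Suite naming):

```typescript
// The exact literal values are assumptions; the point is that
// Rank and Suite can only ever be one of these values
type Rank =
  | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9" | "10"
  | "Jack" | "Queen" | "King" | "Ace";

type Suite = "Hearts" | "Diamonds" | "Clubs" | "Spades";

type Card = { rank: Rank; suite: Suite };

const aceOfSpades: Card = { rank: "Ace", suite: "Spades" };
```

Any attempt to assign a value outside these literals (say, `rank: "Joker"`) fails at compile time.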
This code works; however, I don't like that I had to formally list the options for both Rank and Suite. This means that I have two different representations for Rank and Suite, which implies that if we needed to add a new Rank or Suite, we'd need to add it in two places (a violation of DRY).
Doing some digging, I found this StackOverflow post that gave a different way of defining our Rank and Suite types. Let's try that new definition.
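Based on that post, the definition might look like this (the literal values are an assumption):

```typescript
// Define the values once...
const ranks = [
  "2", "3", "4", "5", "6", "7", "8", "9", "10",
  "Jack", "Queen", "King", "Ace",
] as const;
const suites = ["Hearts", "Diamonds", "Clubs", "Spades"] as const;

// ...then derive the types from the arrays
type Rank = (typeof ranks)[number];
type Suite = (typeof suites)[number];
```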
In the above code, we're saying that ranks cannot change (either by assignment or by operations like push). With that definition, we can say that Rank is some entry in the ranks array. We take a similar approach for our suites array and Suite type.
I prefer this approach much more because we have our ranks and suites defined in one place and our code reads cleaner: this says "Here are the possible ranks, and Rank can only be one of those choices."
The main limitation is that it only works for "enum" style unions. Let's change examples and say that we want to model a series of shapes with the following.
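The shape model could be sketched as a union like the one below (the field names are assumptions for illustration):

```typescript
// Field names here are assumptions for illustration
type Circle = { kind: "circle"; radius: number };
type Square = { kind: "square"; sideLength: number };
type Rectangle = { kind: "rectangle"; width: number; height: number };
type Shape = Circle | Square | Rectangle;

const shape: Shape = { kind: "circle", radius: 2 };
```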
To use the same trick, we would need to have an array of constant values. However, we can't have a constant value for any of the Shapes because there are an infinite number of valid Circles, Squares, and Rectangles.
When thinking about software, it's natural to think about the things that it can do (its features like generating reports or adding an item to a cart).
But what about the properties that those actions have? Those things that are always true?
In this post, let's take a look at a fundamental tool of functional programming, the map function.
All the code examples in this post will be using TypeScript, but the lessons hold for other languages with Map (or Select if you're coming from .NET).
Map is a cool function because it has a lot of properties that we get for free.
Maintains Length - If you call map on an array of 3 elements, then you'll get a new array with 3 elements. If you call map on an empty array, you'll get an empty array.
Maintains Type - If you call map on an array of type T with a function that goes from T to U, then every element in the new array is of type U.
Maintains Order - If you call map on an array with one function, then call map with a function that "undoes" the original map, you end up with the original array.
To prove these properties, we can write a set of unit tests. However, it would be hard to write a single example-based test that covers an entire property.
Most tests are example based in the sense that for a specific input, we get a specific output. Property-based tests, on the other hand, use random data and ensure that a property holds for all inputs. If the tool finds an input where the property fails, the test fails and you know which input caused the issue.
Most languages have a tool for writing property-based tests; we'll be using fast-check for writing property-based tests and jest for our test runner.
```typescript
import fc from "fast-check";

describe("map", () => {
  it("maintains length", () => {
    // This is known as the identity function
    // as it returns whatever input it received
    const identity = <T>(a: T): T => a;

    // Fast-Check asserts that the following holds for all arrays of integers
    fc.assert(
      // data is the array of numbers
      fc.property(fc.array(fc.integer()), (data): void => {
        // We call the map function with the identity function
        const result = data.map(identity);

        // We make sure that our result has the same length
        expect(result).toHaveLength(data.length);
      })
    );
  });
});
```
If we run this test, we'll end up passing. But what is the value of data?
By adding a console.log in the test, we'll see the following values printed when we run the test (there are quite a few, so we'll examine the first few).
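The arrays are randomly generated, so they'll differ on every run; an illustrative sample might look like:

```
[ -1484283441, 28, -1292684 ]
[]
[ 5, 1822258917, -2023468190, 11 ]
```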
```typescript
// An additional test in the describe block
it("maintains type", () => {
  const getLength = (s: string) => s.length;

  fc.assert(
    // asserting with an array of strings
    fc.property(fc.array(fc.string()), (data): void => {
      // mapping to lengths of strings
      const result = data.map(getLength);

      // checking that all values are numbers
      const isAllValid = result.every((x) => typeof x === "number");
      expect(isAllValid).toBeTruthy();
    })
  );
});
```
Like before, we can add a console.log to the test to see what strings are being generated
```
['ptpJTR`G4', 's >xmpXI', 'H++%;a3Y', 'OFD|+X8', 'gp']
['Rq', '', 'V&+)Zy2VD8']
['o%}', '$o', 'w7C', 'O+!e', 'NS$:4\\9aq', 'xPbb}=F7h', 'z']
['']
['apply', '']
[]
// And many more entries...
```
For our third property, we need to ensure that the order of the array is being maintained.
To make this happen, we can use our identity function from before and check that our result is the same as the input. If so, then we know that the order is being maintained.
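Another way to exercise the same idea is with a function and its "undo": in fast-check this would follow the same describe/it pattern as the earlier tests, but the core round-trip check, written standalone so it's easy to follow, might look like this (double/halve is my stand-in pair; it's an assumption, not from the post):

```typescript
// A stand-in "undo" pair: halving undoes doubling
const double = (x: number): number => x * 2;
const halve = (x: number): number => x / 2;

// The property: mapping with a function and then its inverse
// returns the original array, in the original order
export function maintainsOrder(data: number[]): boolean {
  const result = data.map(double).map(halve);
  return (
    result.length === data.length &&
    result.every((value, index) => value === data[index])
  );
}
```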
When I think about the code I write, I'm thinking about the way it works, the way it should work, and the ways it shouldn't work. I find example-based tests help me understand a business flow because of their concrete values, while property-based tests help me understand the general guarantees of the code.
I find that once I started thinking in properties, my code became cleaner because there's logic that I no longer had to write. In our map example, we don't have to write checks for null or undefined because map always returns an array (empty in the worst case). There's also no need to write error handling because, as long as the mapping function is pure, map will always return an array.
For those looking to learn more about functional programming, you'll find that properties help describe the higher level constructs (functors, monoids, and monads) and what to look for.
Finding properties can be a challenge, however, Scott Wlaschin (of FSharpForFunAndProfit) has a great post talking about design patterns that I've found to be immensely helpful.
It's a good question and it can help set the tone for the project. Assuming the work is more than a bug fix, it's natural to look at a big project and break it down to smaller, more approachable pieces.
Depending on how you break down the work, you can dramatically change the timeline from when you can get feedback from your users and find issues much sooner.
In this post, let's look at a team breaking down a new feature for their popular application, TakeItEasy.
It's a new sprint and your team is tackling a highly requested feature for TakeItEasy: the ability to set up a User Profile. Everyone is clear on the business requirements as we need the ability to save and retrieve the following information so that we can personalize the application experience for the logged-in user:
Display Name
Name
Email Address
Profile Picture
Going over the high-level design with the engineers, it's discovered that there's currently no way to save this data. In addition, we don't have a way to display this data for the user to enter or modify.
Working with the team, the feature gets broken down as the following stories:
Create the data storage
Create the data access layer
Create the User Profile screen
Once these stories are done, this feature is done and that seems easy enough. As you talk with the team though, a few things stand out to you.
None of these stories are fully independent of each other. You can build out the User Profile screen, but without the Data Access Layer, it's incomplete. Same thing with the data access layer, it can't be fully complete until the data storage story is done.
It's difficult to demo the majority of the stories. Stakeholders don't care about data storage or the data access layer, but they do care about the user setting up their profile. With the current approach, it's not possible to demo any work until all three stories are done.
As you approach each story, they seem to be quite large:
For the Data Storage work, it's an upgrade script to modify the Users table with nullable columns.
For the data access story, it's updating logic to retrieve each of the additional fields and making sure to handle missing values from the database.
For the User Profile screen, it's creating a new page, updating the routing, and adding multiple controls with validation logic for each of the new fields.
Is there a different way we can approach this work such that we can deliver something useful sooner?
The main issue with the above approach is that there's a story for each layer of the application (data, business rules, and user interface) and each of these layers is dependent upon the others. However, no one cares about any single layer; they care about all of it working together.
Seriously, could you imagine enjoying a plate of nachos by first eating all the chips, then the beans, then the salsa?
Photo by Herson Rodriguez on Unsplash
One way to solve this problem would be to have a single story, Implement User Profile, that has all this work, but that sounds like more than a sprint's worth of work. We know that the more work in a story, the harder it is to give a fair estimate for what's needed.
Another approach is to change the way we slice the work by taking a bit of each layer into a story. This means that we'll have a little bit of database, a little bit of data access, and a little bit of the user interface.
If we take this approach, we would have the following stories for our User Profile feature.
Feature: Implement User Profile
Story: Implement Display Name
Story: Implement Name
Story: Implement Email
Story: Implement Profile Picture
Each story would have the following tasks:
Add storage for field
Update data access to store/retrieve field
Update interface with control and validation logic
There are quite a few advantages with this approach.
First, instead of waiting for all the stories to get done before you can demo any functionality, you can demo after completing a single story. This is a huge advantage because if things are looking good, you could potentially go live with one story instead of waiting for all three stories from before.
Second, these stories are independent of each other as the work to Implement Display Name doesn't depend on anything from Implement Email. This increases the autonomy of the team and allows us to shift priorities more easily since, at the end of any one story, we can pick any of the remaining stories.
For example, let's say that after talking more with customers, we need a way for them to add their favorite dessert. Instead of the business bringing in the new requirement and pushing back the timeline, engineering can work on that functionality next and get that shipped sooner.
Third, it's much easier to explain to engineers and stakeholders when a certain piece of functionality will be available. Going back to horizontal layering, it's not clear when a user would be able to set up their email address. Now, it's clear when that work is coming up.
I'm going to let you in on a little secret. Most engineers are technically strong but can be ignorant of the business domain they're working in. Unless you're taking time to coach them on the business (or they've been working in the domain for a long period of time), engineers just don't know the business.
As such, it's difficult for engineers to speak in the ubiquitous language of the business; it's much easier to speak in technical details. This, in turn, leads to user stories that are more technical in nature (modify table, build service, update pipeline) instead of user focused (can set display name, can set email address).
If you're an Engineer, you need to learn the business domain that you're working in. This will help you prevent problems from happening in your software because it literally can't do that. In addition, this will help you see the bigger picture and build empathy with your users as you understand them better.
If you're in Product or Business, you need to work with your team to level up their business domain. This can be done by having them use the product like a user, giving them example tasks, and spending time to talk about the domain. If you can get the engineers to be hands-on, every hour you invest here is going to pay huge dividends when it comes time to pick up the next feature.
The next time you and the team have a feature, try experimenting with vertically slicing your stories and see how that changes the dynamics of the team.
To get started, remember, focus on the user outcomes and make sure that each story can stand independently of one another.
I've recently been spending some time learning about Svelte and have been going through the tutorials.
When I made it to the event modifiers section, I saw that there's a modifier for capture where it mentions firing the handler during the capture phase instead of the bubbling phase.
I'm not an expert on front-end development, but I'm not familiar with either of these concepts. Thankfully, the Svelte docs refer out to MDN for a better explanation of the two.
Long story short, by default, when an event happens, the element that's been interacted with will fire first and then each parent will receive the event.
So if we have the following HTML structure where there's a body that has a div that has a button
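The markup might look like this (the id values match the script below; the exact structure is an assumption):

```html
<body>
  <div id="container">
    <button>Click me</button>
  </div>
  <pre id="output"></pre>
</body>
```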
```javascript
// Setting up adding string to the pre element
const output = document.querySelector("#output");
const handleClick = (e) =>
  (output.textContent += `You clicked on a ${e.currentTarget.tagName} element\n`);

const container = document.querySelector("#container");
const button = document.querySelector("button");

// Wiring up the event listeners
document.body.addEventListener("click", handleClick);
container.addEventListener("click", handleClick);
button.addEventListener("click", handleClick);
```
And we click the button, our <pre> element will have:
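Since the button's own listener fires first and the event then bubbles up through the div and the body:

```
You clicked on a BUTTON element
You clicked on a DIV element
You clicked on a BODY element
```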
Event Capturing is the opposite of Event Bubbling where the root parent receives the event and then each inner parent will receive the event, finally making it to the innermost child of the element that started the event.
Let's see what happens with our example when we use the capture approach.
```javascript
// Wiring up the event listeners
document.body.addEventListener("click", handleClick, { capture: true });
container.addEventListener("click", handleClick, { capture: true });
button.addEventListener("click", handleClick, { capture: true });
```
After clicking the button, we'll see the following messages:
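This time the order is reversed, since the event travels from the outermost parent down to the button:

```
You clicked on a BODY element
You clicked on a DIV element
You clicked on a BUTTON element
```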
By default, events will work in a bubbling fashion and this intuitively makes sense to me since the element that was interacted with is most likely the right element to handle the event.
One case that comes to mind is if you find yourself attaching the same event listener to every child element. Instead, we could move that listener up to a common parent.
For example, let's say that we had the following layout
```html
<div>
  <ul style="list-style-type: none; padding: 0px; margin: 0px; float: left">
    <li><a id="one">Click on 1</a></li>
    <li><a id="two">Click on 2</a></li>
    <li><a id="three">Click on 3</a></li>
  </ul>
</div>
```
With this layout, let's say that we need to run some business rules when any of those links are clicked. If we wired up a listener on each element, we would have the following code:
```javascript
// Stand-in for real business rules
function handleClick(e) {
  console.log(`You clicked on ${e.target.id}`);
}

// Get all the a elements
const elements = document.querySelectorAll("a");

// Wire up the click handler
for (const e of elements) {
  e.addEventListener("click", handleClick);
}
```
This isn't a big deal with three elements, but let's pretend that you had a list with tens of items, or a hundred items. You may run into a performance hit because of the overhead of wiring up that many event listeners.
Instead, we can use one event listener, bound to the common parent. This accomplishes the same effect without as much complexity.
Let's revisit our JavaScript and make the necessary changes.
```javascript
// Stand-in for real business rules
function handleClick(e) {
  // NEW: To handle the space of the unordered list, we'll return early
  // if the currentTarget is the same as the original target
  if (e.currentTarget === e.target) {
    return;
  }
  console.log(`You clicked on ${e.target.id}`);
}

// NEW: Getting the common parent
const parent = document.querySelector("ul");

// NEW: Setting the eventListener to be capture based
parent.addEventListener("click", handleClick, { capture: true });
```
With this change, we're now only wiring up a single listener instead of having multiple wirings.
In this post, we looked at two different event propagation models, bubble and capture, the differences between the two and when you might want to use capture.
As a leader it's inevitable that you will have to organize a meeting. Whether it's for updates, 1-1s, or making decisions, the team is looking towards you to lead the conversation and have it be a good use of time.
But how do you have a good meeting? That's not something that's covered in leadership training. Is it the perfect invite? A well honed pitch? Throw something out there and see if it sticks?
Like anything else, a good meeting needs some preparation, however, if you follow these five tips, I guarantee your meetings will be better than before.
The best kind of meeting is the one that didn't have to happen. Have you ever sat through a meeting where everyone did a bunch of talking, you halfway listened and thought to yourself, "this could have been an email?"
When I think about why we need meetings, it's because we're trying to accomplish something that one person alone couldn't get done. With this assumption in mind, I find that meetings take one of two shapes: sharing information (e.g., stand-ups, retrospectives, all-hands) or to make a decision (e.g., technical approach, ironing the business rules).
Depending on what you're trying to accomplish, the next thought is to determine whether the communication needs to be synchronous (get everyone together) or asynchronous (let people get involved at their own pace).
For example, if the team has been struggling in getting work done, then it makes sense to have a meeting to figure out what's happening and ensure that everyone is hearing the exact wording/tone of the messaging.
On the other hand, if your intent is to let the team know that Friday is a holiday, then that can be done through email or message in your chat tool.
One way to figure out if the meeting could have been an email is to pretend you canceled it. Is there anything that can't proceed? If not, then maybe you don't need that meeting.
Have you ever attended a meeting and didn't know what it was about or why you met? These types of meetings typically suffer from not having a goal or purpose behind the meeting.
Recall from Step #1, we're meeting because there's something that we need from the group that we couldn't do as individuals. So what is it?
When scheduling the meeting, include the purpose (here's why we're meeting) and the goal (here's how we know if we're done) to the description. Not only is this a great way to focus the meeting, it can also serve as a way for people to know if they need to attend or not.
This is also a good litmus test to see if you know why there should be a meeting, as this forces you to think about the problem being solved and how it should happen. If you're struggling to determine the purpose and the goal, then your attendees will also struggle.
A common mistake I see people make is that they invite everyone who has a stake or passing interest in the topic which can make for a large (10+ people) meeting.
Even though the intent is good (give everyone visibility), this is a waste because the more people you have in a meeting, the less effective it will be. A meeting with four people will have a better conversation and get things done more than a meeting with nine people.
Let's pretend that you're at a large party and you see a group that you know, so you walk up to the group, hoping to break into the conversation.
As more people join the group, they'll naturally split up into smaller groups, each with their own conversations. The main reason is that the larger the group, the less likely you have a chance to participate and get involved. So you might start a conversation with 1 or 2 people, split off, and then start a new group.
Meetings have the same problem. The larger the group, the more likely that side conversations will happen, and the harder it is for you to facilitate and keep everyone on track.
To keep meetings effective, be sure to only include the necessary people. For example, instead of inviting an entire team, invite only 1 or 2 people.
At a high level, you need these three roles filled to have a successful meeting:
The Shot Caller - This is the main stakeholder and can approve our decisions. Without their buy-in, no real decision can be made.
The Brain Trust - These are the people who have the details and can drive the conversation. You want to keep this group as tightly focused as possible.
The Facilitator - Generally the organizer, this is the person who ensures that the goal is achieved and keeps the meeting running.
One way to narrow down the invite list is to apply this test:
If this person can't make the meeting, then we can't meet.
If you can't accomplish the goal without them, then they need to be there. I'm such a believer in this advice that if it's the day of the meeting and we don't have the Shot Caller or the Brain Trust, then I'll reschedule the meeting, as I'd rather move it than waste everyone's time.
It's the big day and you've got everyone in the room, now what?
In Step #2, we talked about having a purpose and goal for the meeting. Now is when we vocalize these two things to kick the meeting off. From there, we can seed the conversation with one of these strategies:
Asking an opening question to prime the Brain Trust.
Throwing to the Shot Caller to frame any restrictions the attendees need to be aware of.
Starting with a specific person to kick the conversation off.
Once the conversation starts flowing, your job is to keep the meeting on track. For those who've played games like Dungeons and Dragons, you're acting like a Game Master where you know the direction the meeting needs to go to (The Goal), but the attendees are responsible for getting there.
It can be challenging to keep the meeting on track if you're also driving the conversation, so pace yourself, take notes, and get others involved to keep the conversation going.
When leading longer meetings (more than 60 minutes), make sure to take a 10 minute break.
For attendees, this allows them to stretch their legs, take a bathroom break, and stew on the conversation that's happened so far. For those who are more "thinkers" than "reactors", this gives them time to compose their thoughts and have better conversations after the break.
As a facilitator, this gives you a way to think about the meeting so far, identify areas that the group needs to dig into, and if needed, it can break the conversation out of a rut.
As the meeting comes to a close, we need to make sure that action follows next. A meeting with no follow-up is a lot like a rocking chair. Plenty of motion, but no progress being made.
In order to make sure that next steps happen, define action items with attendees owning getting them done. Action items don't have to be complex; they could be as simple as:
Defining stories for the team
Sending summary notes to other stakeholders
Following up with Person about X.
When defining action items, be wary of items that are scheduling another meeting (e.g. let's schedule a meeting with Team Y to get their perspective). This implies that you didn't have the right people in the room (see Step 3). Also, remember, meetings are to get alignment or to come up with a solution, so what purpose does this follow up meeting have?
As the meeting wraps up, take a few moments to summarize the outcome, verbally ensure that actions items have been assigned and thank everyone for their attention and time.
Running effective meetings can be made easier if you take the time to do the necessary preparation. Even though these steps may seem heavy on the documentation, you'll find that they'll help you focus on the core problem at hand, which helps focus the group, which makes everyone that much better.
By following these five steps, you'll increase your chances of having a great meeting and as you gain more experience, you'll become more comfortable running them.
In a previous post, I argued that to help your team be effective, you need to have up-to-date docs, and to have this happen, you need some way of flagging stale documentation.
In this series, I show you how you can automate this process by creating a seed script, a check script, and then automating the check script. In today's post, let's develop the check script.
Our script is going to search over our repository, however, I don't want our script to be responsible for cloning and cleaning up those files. Since the long term plan is for our script to run through GitHub Actions, we can have the pipeline be responsible for cloning the repo.
This means that our script will have to be told where to search and since it can't take in manual input, we're going to use an environment variable to tell the script where to search.
First, let's create a .env file that will store the path of the repository:
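The file holds a single entry (the path shown is a placeholder):

```
REPO_DIRECTORY=/path/to/your/repo
```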
```typescript
import { load } from "https://deno.land/std@0.195.0/dotenv/mod.ts";

await load({ export: true }); // this loads the env file into our environment

const directory = Deno.env.get("REPO_DIRECTORY");

if (!directory) {
  console.log("Couldn't retrieve the REPO_DIRECTORY value from environment.");
  Deno.exit();
}

console.log(directory);
```
If we were to run our Deno script with the following command deno run --allow-read --allow-env ./index.ts, we should see the environment variable getting logged.
```typescript
export function getMarkdownFilesFromDirectory(directory: string): string[] {
  // let's get all the files from the directory
  const allEntries: Deno.DirEntry[] = Array.from(Deno.readDirSync(directory));

  // Get all the markdown files in the current directory
  const markdownFiles = allEntries.filter(
    (x) => x.isFile && x.name.endsWith(".md")
  );

  // Find all the folders in the directory
  const folders = allEntries.filter(
    (x) => x.isDirectory && !x.name.startsWith(".")
  );

  // Recurse into each folder and get their markdown files
  const subFiles = folders.flatMap((x) =>
    getMarkdownFilesFromDirectory(`${directory}/${x.name}`)
  );

  // Return the markdown files in the current directory and the markdown files
  // in the children directories
  return markdownFiles.map((x) => `${directory}/${x.name}`).concat(subFiles);
}
```
With this function in place, we can update our index.ts script to be the following:
```typescript
import { load } from "https://deno.land/std@0.195.0/dotenv/mod.ts";
import { getMarkdownFilesFromDirectory } from "./utility.ts";

const directory = Deno.env.get("REPO_DIRECTORY");

if (!directory) {
  console.log("Couldn't retrieve the REPO_DIRECTORY value from environment.");
  Deno.exit();
}

const files = getMarkdownFilesFromDirectory(directory);
console.log(files);
```
Running the script with deno run --allow-read --allow-env ./index.ts should print a list of all the markdown files to the screen.
Now that we have each file, we need a way to get their last line of text.
Using Deno.readTextFile (or its Sync variant), we can get the file contents. From there, we can convert them to lines and then find the latest occurrence of Last Reviewed On.
Let's add a new function, getLastReviewedLine to the utility.ts file.
```typescript
export function getLastReviewedLine(fullPath: string): string {
  // Get the contents of the file, removing extra whitespace and blank lines
  const fileContent = Deno.readTextFileSync(fullPath).trim();

  // Convert the block of text to an array of strings
  const lines = fileContent.split("\n");

  // Find the last line that starts with Last Reviewed On
  const lastReviewed = lines.findLast((x) => x.startsWith("Last Reviewed On"));

  // If we found it, return the line, otherwise, return an empty string
  return lastReviewed ?? "";
}
```
Let's try this function out by modifying our index.ts file to display the files that have a Last Reviewed On line.
```typescript
import { load } from "https://deno.land/std@0.195.0/dotenv/mod.ts";
import {
  getMarkdownFilesFromDirectory,
  getLastReviewedLine,
} from "./utility.ts";

const directory = Deno.env.get("REPO_DIRECTORY");

if (!directory) {
  console.log("Couldn't retrieve the REPO_DIRECTORY value from environment.");
  Deno.exit();
}

const files = getMarkdownFilesFromDirectory(directory);

files
  .filter((x) => getLastReviewedLine(x) !== "")
  .forEach((s) => console.log(s)); // print them to the screen
```
At this point, we can get the "Last Reviewed On" line from a file, but we've got some more business rules to implement.
If there's a Last Reviewed On line, but there's no date, then the file needs to be reviewed.
If there's a Last Reviewed On line, but the date is invalid, then the file needs to be reviewed
If there's a Last Reviewed On line, and the date is more than 90 days old, then the file needs to be reviewed.
Otherwise, the file doesn't need review.
We know from our filter logic that we're only going to be looking at lines that start with "Last Reviewed On", so now we need to extract the date.
Since we assume our format is Last Reviewed On, we can use substring to get the rest of the line. We're also going to assume that the date will be in YYYY/MM/DD format.
```typescript
export function doesFileNeedReview(line: string): boolean {
  if (!line.startsWith("Last Reviewed On: ")) {
    return true;
  }

  const date = line.replace("Last Reviewed On: ", "").trim();
  const parsedDate = new Date(Date.parse(date));

  // An unparseable date yields an Invalid Date, which is still truthy,
  // so we check its timestamp instead
  if (isNaN(parsedDate.getTime())) {
    return true;
  }

  // We could use something like DayJS but, trying to keep libraries to a
  // minimum, we can do the following
  const cutOffDate = new Date(new Date().setDate(new Date().getDate() - 90));

  return parsedDate < cutOffDate;
}
```
Let's update our index.ts file to use the new function.
```typescript
import { load } from "https://deno.land/std@0.195.0/dotenv/mod.ts";
import {
  getMarkdownFilesFromDirectory,
  getLastReviewedLine,
  doesFileNeedReview,
} from "./utility.ts";

const directory = Deno.env.get("REPO_DIRECTORY");

if (!directory) {
  console.log("Couldn't retrieve the REPO_DIRECTORY value from environment.");
  Deno.exit();
}

getMarkdownFilesFromDirectory(directory)
  .filter((x) => getLastReviewedLine(x) !== "")
  .filter((x) => doesFileNeedReview(getLastReviewedLine(x)))
  .forEach((s) => console.log(s)); // print them to the screen
```
And just like that, we're able to print stale docs to the screen. At this point, you could create a scheduled batch job and start using this script.
However, if you wanted to share this with others (and have this run not on your box), then stay tuned for the final post in this series where we put this into a GitHub Action and post a message to Slack!
In a previous article, I argued that to help your team be effective, you need to have up-to-date docs, and to have this happen, you need some way of flagging stale documentation.
This process lends itself to being easily automated, so in this series of posts, we'll build out the necessary scripts to check for docs that haven't been reviewed in the last 90 days.
All code used in this post can be found on my GitHub.
For this script to work, we need to be able to do two things:
Determine the last commit date for a file.
Add text to the end of the file.
Getting a list of files in a directory.
To determine the last commit date for a file, we can leverage git and its log command (more on this in a moment). Since we're mainly doing file manipulation, we could use Deno here, but it makes much more sense to me to use something like bash or PowerShell.
To make this automation work, we need to have a date for the Last Reviewed On footer. You don't want to set all the files to the same date because all the files will come up for review in one big batch.
So, you're going to want to stagger the dates. You can do this by generating random dates, but honestly, getting the last commit date should be "good" enough.
```shell
file=YourFileHere.md

commitDate=$(git log -n 1 --pretty=format:%aI -- $file)

# formatting date to YYYY/MM/DD
formattedDate=$(date -d "$commitDate" "+%Y/%m/%d")

echo $formattedDate
```
Assuming the file has been checked into Git, we should get the date back in a YYYY/MM/DD format. Success!
Now that we have a way to get the date, we need to add some text to the end of the file. Since we're working in markdown, we can use --- to denote a footer and then place our text.
Since we're going to be appending multiple lines of text, we can use the cat command with here-docs.
```shell
file=YourFileHere.md

# Note the blank lines, this is to make sure that the footer is separated from the text in the file
# Note: The closing EOF has to be on its own line with no whitespace or other characters in front of it.
cat << EOF >> $file


---
Last Reviewed On: 2023/08/12
EOF
```
After running this script, we'll see that the blank lines and our new footer have been appended to the file.
```shell
file=YourFileHere.md

commitDate=$(git log -n 1 --pretty=format:%aI -- $file)

# formatting date to YYYY/MM/DD
formattedDate=$(date -d "$commitDate" "+%Y/%m/%d")

# Note the blank lines, this is to make sure that the footer is separated from the text in the file
# Note: The closing EOF has to be on its own line with no whitespace or other characters in front of it.
cat << EOF >> $file


---
Last Reviewed On: $formattedDate
EOF
```
Nice! Given a file, we can figure out its last commit date and append it to the file. Let's make this more powerful by not having to hardcode a file name.
We're going to have a lot of docs to review, and we don't want to update each one manually, so let's figure out how to get all the markdown files in a directory.
For this exercise, we can use the find command. In our case, we need to find all the files with a .md extension, no matter what directory they're in.
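Here's a quick sketch of the `find` invocation on its own. The directory and file names below are made up for illustration; we create a throwaway directory with sample files so the command has something to match:

```shell
# Throwaway directory with a few sample files (names are hypothetical)
directory=$(mktemp -d)
mkdir -p "$directory/guides"
touch "$directory/readme.md" "$directory/guides/setup.md" "$directory/notes.txt"

# -name "*.md" matches markdown files in any subdirectory; -type f limits matches to files
find "$directory" -name "*.md" -type f
```

Note that `notes.txt` is skipped: only the two `.md` files are printed, one per line.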
We're going to need to process each of these files, so some type of iteration would be helpful. Doing some digging, I found that Bash supports a for loop, so let's use that.
```shell
directory=DirectoryPathGoesHere

for file in `find $directory -name "*.md" -type f`; do
  commitDate=$(git log -n 1 --pretty=format:%aI -- $file)

  # formatting date to YYYY/MM/DD
  formattedDate=$(date -d "$commitDate" "+%Y/%m/%d")

  # Note the blank lines, this is to make sure that the footer is separated from the text in the file
  # Note: The closing EOF has to be on its own line with no whitespace or other characters in front of it.
  cat << EOF >> $file


---
Last Reviewed On: $formattedDate
EOF
done
```
This script works, and we could ship it; however, it's a bit rough.
For example, the script assumes that it's in the same directory as your git repository. It also assumes that your repository is up-to-date and that it's safe to make changes on the current branch.
Let's make our script a bit more durable: we'll clone a fresh copy of the repository into a temporary directory, make our changes on a new branch, and push that branch up for review.
```shell
# see https://unix.stackexchange.com/questions/30091/fix-or-alternative-for-mktemp-in-os-x#answer-84980
# for why tmpDir is being created this way
docRepo="RepoUrlGoesHere"
tmpDir=$(mktemp -d 2>/dev/null || mktemp -d -t 'docSeed')

cd $tmpDir

echo "Cloning from $docRepo"

# Note the . here, this allows us to clone to the temp folder and not to a new folder of repo name
git clone "$docRepo" . &> /dev/null
```
```shell
# ... code to clone repository

git switch -c 'adding-seed-dates'

# ... code to make file changes

git add --all
git commit -m "Adding seed dates"
git push -u origin adding-seed-dates
```
In this post, we wrote a bash script to clone our docs and add a new footer to every page with the file's last commit date. In the next post, we'll build the script that checks for stale files.
Context: New to Engineering Manager, managing 5 people and working in a 5 person team. My managees are not 100% on my team.
Details: OK, so I've quickly learnt how to spot mistakes and follow up on improvements for both teams (the one I manage and the one I work on). I'm confident taking action and communicating on all of that. But there's the other side -> congratulating and following up on behavior/actions.
Example: The current team has low velocity. They recently finished the specs and review. It didn't happen for months (always late on that), but it's their "normal" velocity. I congratulated them, but I'm wondering if I should have since they "just did their job".
How do you congratulate your coworkers? Specifically
Welcome to the final installment of our Deno series, where we build a script that pairs up people for coffee.
In the last post, we added the ability to post messages into a Slack channel instead of copying from a console window.
The current major problem is that we have to remember to run the script. We could always set up a cron job or scheduled task; however, what happens when we change machines? What if our computer stops working? What if someone else changes the script: how will we remember to get the latest and run it?
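For reference, the local-cron approach we're moving away from might look something like the crontab entry below. This is only a sketch; the repository path and Deno permission flags are assumptions for illustration:

```shell
# Hypothetical crontab entry: run the pairing script every Monday at 9:00.
# The /path/to/coffee-bot path and the permission flags are placeholders.
0 9 * * 1 cd /path/to/coffee-bot && deno run --allow-env --allow-net index.ts
```

It works, but it's tied to one machine, which is exactly the fragility we want to avoid.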
Welcome to the third installment of our Deno series, where we build a script that pairs up people for coffee.
In the last post, we updated the script to dynamically pull members of the Justice League from GitHub instead of using a hardcoded list.
Like any good project, this approach works, but now the major problem is that we have to run the script, copy the output, and post it into our chat tool so everyone knows the schedule.
It'd be better if we could update our script to post this message instead. In this example, we're going to use Slack and their incoming webhook, but you could easily tweak this approach to work with other tools like Microsoft Teams or Discord.
For this step, we'll follow the instructions in the docs, ensuring that we're hooking it up to the right channel.
After following the steps, you should see something like the following:
We can test that things are working correctly by running the curl command provided. If the message Hello World appears in the channel, congrats, you've got the incoming webhook created!
With this secret added, we can write a new function, sendMessage, that'll make the POST call to Slack. Since this is a new integration, we'll add a new file, slack.ts, to put it in.
```typescript
// Using axiod for the web connection
import axiod from "https://deno.land/x/axiod@0.26.2/mod.ts";

// function to send a message to the webhook
async function sendMessage(message: string): Promise<void> {
  // Get the webhookUrl from our environment
  const webhookUrl = Deno.env.get("SLACK_WEBHOOK")!;
  try {
    // Send the POST request
    await axiod.post(webhookUrl, message, {
      headers: {
        "Content-Type": "application/json",
      },
    });
  } catch (error) {
    // Error handling
    if (error.response) {
      return Promise.reject(
        `Failed to post message: ${error.response.status}, ${error.response.statusText}`
      );
    }
    return Promise.reject("Failed for non status reason " + JSON.stringify(error));
  }
}

export { sendMessage };
```
With sendMessage done, let's update index.ts to use this new functionality.
```typescript
import { load } from "https://deno.land/std@0.195.0/dotenv/mod.ts";
import {
  GetOrganizationMemberResponse,
  getMembersOfOrganization,
} from "./github.ts";
import { sendMessage } from "./slack.ts";
import { Pair, createPairsFrom, shuffle } from "./utility.ts";

await load({ export: true });

// Replace this with your actual organization name
const organizationName = "JusticeLeague";

const names = await getMembersOfOrganization(organizationName);
const pairs = createPairsFrom(shuffle(names));
const message = createMessage(pairs);

// Slack expects the payload to be an object of text, so we're doing that here for now
await sendMessage(JSON.stringify({ text: message }));

function createMessage(pairs: Pair<GetOrganizationMemberResponse>[]): string {
  const mapper = (p: Pair<GetOrganizationMemberResponse>): string =>
    `${p.first.login} meets with ${p.second.login}${
      p.third ? ` and ${p.third.login}` : ""
    }`;
  return pairs.map(mapper).join("\n");
}
```
And if we run the above, we can see the following message get sent to Slack.
Nice! We could leave it here, but the message could be prettier (an unordered list with italicized names), so let's work on that next.
We could leave the messaging as is; however, it's a bit muddled. To help it pop, let's make the following changes:
Italicize the names
Start each pair with a bullet point
Since Slack supports basic Markdown in the messages, we can use the _ for italicizing and - for the bullet points. So let's modify the createMessage function to add this formatting.
```typescript
function createMessage(pairs: Pair<GetOrganizationMemberResponse>[]): string {
  // Let's wrap each name with the '_' character
  const formatName = (s: string) => `_${s}_`;
  const mapper = (p: Pair<GetOrganizationMemberResponse>): string =>
    // and start each pair with '-'
    `- ${formatName(p.first.login)} meets with ${formatName(p.second.login)}${
      p.third ? ` and ${formatName(p.third.login)}` : ""
    }`;
  return pairs.map(mapper).join("\n");
}
```
By making this small change, we now see the following message:
The messaging is better, but we're still missing some clarity. For example, what date is this for? Or what's the purpose of the message? Looking through these docs, it seems like we could add different text blocks (like titles). So let's see what this could look like.
One design approach is to encapsulate the complex logic for dealing with Slack and only expose a "common-sense" API for consumers. In this regard, I think using a Facade pattern would make sense.
We want to expose the ability to set a title and to set a message through one or more lines of text. Here's what that code would look like
```typescript
// This class allows a user to set a title and lines and then use the
// 'build' method to create the payload to interact with Slack
class MessageFacade {
  // setting some default values
  private header: string;
  private lines: string[];

  constructor() {
    this.header = "";
    this.lines = [];
  }

  // I like making these types of classes fluent
  // so that it returns itself.
  public setTitle(title: string): MessageFacade {
    this.header = title;
    return this;
  }

  public addLineToMessage(line: string | string[]): MessageFacade {
    if (Array.isArray(line)) {
      this.lines.push(...line);
    } else {
      this.lines.push(line);
    }
    return this;
  }

  // Here's where we take the content that the user provided
  // and convert it to the JSON shape that Slack expects
  public build(): string {
    // create the header block if set, otherwise null
    const headerBlock = this.header
      ? {
          type: "header",
          text: { type: "plain_text", text: this.header, emoji: true },
        }
      : null;
    // convert each line to its own section
    const lineBlocks = this.lines.map((line) => ({
      type: "section",
      text: { type: "mrkdwn", text: line },
    }));
    return JSON.stringify({
      // take all blocks that have a value and set it here
      blocks: [headerBlock, ...lineBlocks].filter(Boolean),
    });
  }
}
```
With the facade in place, let's look at implementing this in index.ts
```typescript
// ... code to get the pairs and formatted lines

// using the facade with the fluent syntax
const message = new MessageFacade()
  .setTitle(`☕ Random Coffee ☕`)
  .addLineToMessage(formattedPairs)
  .build();

await sendMessage(message);
```
When we run the script now, we get the following message:
In this post, we changed our script from posting its Random Coffee message to the console window to instead posting it into a Slack channel using an Incoming Webhook. By making this change, we were able to remove a manual step (e.g., us copying the message into the channel), and we were able to use some cool emojis and better formatting.
In the final post, we'll take this one step further by automating the script using scheduled jobs via GitHub Actions.
As always, you can find a full working version of this bot on my GitHub.