Tips and Tricks with TypeScript

One of my most recent projects has been tackling how to model the card game Love Letter. For those who've seen me present my How Functional Programming Made Me a Better Developer talk, you might recall that this was my second project to tackle in F# and that, even though I was able to get some of it to work, there were some inconsistencies in the rules that I wasn't able to reason about.

While implementing it in TypeScript, I came across some cool tricks and thought I'd share some of them here. Enjoy!

Swapping two variables

Swapping two values is a common enough task in programming. When I was learning C++, I had a memory trick to remember the ordering (think of a zigzag pattern):

int a = 100;
int b = 200;

// Note how a starts on the right, then goes left
int temp = a;
a = b;
b = temp;

You can do the same thing in TypeScript; however, you can remove the need for the temp variable by using array destructuring. The idea is that we create an array containing the two values to swap and then assign that array to the destructured variables (see below).

let a:number = 100;
let b:number = 200;

// using array destructuring

[a,b] = [b,a]
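
The same trick works for any assignable targets, not just standalone variables. For example, here's a quick sketch swapping two elements of an array in place:

const hand:number[] = [100, 200, 300];

// swap the first and last elements without a temp variable
[hand[0], hand[2]] = [hand[2], hand[0]];

console.log(hand); // [300, 200, 100]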

Drawing a Card

One of the common interactions that happens in the game is that we need to model drawing a card from a deck. As such, we will have a function that looks like the following

type Card = "Guard" | "Priest" | "Baron" | .... // other options omitted for brevity
type Deck = Card[]

function draw(d:Deck): {card:Card, restOfDeck:Deck} {
  const card = d[0];
  const restOfDeck = d.slice(1);
  return {card, restOfDeck};
}

This absolutely works, however, we can use the rest operator (...) with array destructuring to simplify this code.

type Card = "Guard" | "Priest" | "Baron" | .... // other options omitted for brevity
type Deck = Card[]

function draw(d:Deck): {card:Card, restOfDeck:Deck} {
  const [card, ...restOfDeck] = d; // note the rest operator before restOfDeck
  return {card, restOfDeck};
}

This code should be read as "assign the first element of the array to a const called card and the rest of the array to a const called restOfDeck."
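
One caveat: if the deck is empty, card will be undefined at runtime even though its type claims Card. Here's a minimal guard, assuming we'd rather fail fast than hand back an undefined card:

function draw(d:Deck): {card:Card, restOfDeck:Deck} {
  if (d.length === 0) throw new Error('Cannot draw from an empty deck');
  const [card, ...restOfDeck] = d;
  return {card, restOfDeck};
}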

Drawing Multiple Cards with Object Destructuring

function draw(d:Deck): {card:Card, restOfDeck:Deck} {
  const [card, ...restOfDeck] = d;
  return {card, restOfDeck};
}

function drawMultipleCards(d:Deck, count:number): {cards:Card[], restOfDeck:Deck} {
  let currentDeck = d;
  const cards:Card[] = [];
  for (let i = 0; i < count; i++) {
    const result = draw(currentDeck)
    cards.push(result.card)
    currentDeck = result.restOfDeck
  }
  return {cards, restOfDeck:currentDeck};
}

This works; however, we don't actually care about result itself, but about the specific values card and restOfDeck. Instead of referencing them via property drilling, we can use object destructuring to get their values.

function drawMultipleCards(d:Deck, count:number): {cards:Card[], restOfDeck:Deck} {
  let currentDeck = d;
  const cards:Card[] = [];
  for (let i = 0; i < count; i++) {
    const {card, restOfDeck} = draw(currentDeck) // note using curly braces
    cards.push(card)
    currentDeck = restOfDeck
  }
  return {cards, restOfDeck:currentDeck};
}
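
As a quick usage example (the deck contents here are purely illustrative):

const deck:Deck = ['Guard', 'Guard', 'Priest', 'Baron'];
const {cards, restOfDeck} = drawMultipleCards(deck, 2);

console.log(cards);      // ['Guard', 'Guard']
console.log(restOfDeck); // ['Priest', 'Baron']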

Today I Learned: Triggering PowerAutomate Through Http Request

If you're working within the Microsoft ecosystem, you might have run across a tool called PowerAutomate, Microsoft's low-code solution for building out simple automations (similar to tools like If This Then That (IFTTT)).

A common way I've used PowerAutomate is integrating with Teams to kick off a workflow when someone posts a message. For example, you could have a trigger listening for when someone posts in your support channel that auto-responds with a canned message and tags whoever your support person or support team is.

The problem I've run into with PowerAutomate is that it has a steep learning curve and isn't very intuitive for engineers to pick up. As such, we typically use PowerAutomate for simpler flows (or if we need to integrate with another Microsoft product like Teams or Planner).

Recently, I was tackling a problem where we had an application that needed to post a message into a Teams channel. Back in the day, I would have used a webhook and published the message that way; however, Microsoft removed the ability to post into Teams via webhooks in late 2024. The main solution going forward is to build a Teams application and install it for your tenant/channel. This can be a bit annoying if all you need is a simple workflow to stand up.

This got me thinking: I knew I could use PowerAutomate to send a message to Teams, but could I trigger the flow programmatically? (At this point, I was only familiar with triggers based on timers or on something happening.)

Doing some digging, I found that PowerAutomate can trigger a workflow off of an HTTP request.

Jackpot!

Even better, we can give the PowerAutomate flow a specific payload to expect as part of the request body!

Combining these concepts, I can create a flow like the following:

First, let's set up a trigger that takes in a payload. In this case, the caller will give us the message content and a bit more info on which team and channel to post this message into.

Setting up the trigger

After setting up the trigger, we can then add a step for posting a message in Teams. Instead of using the pre-populated options that we get from our connection, we can use the values from the trigger instead.

Setting up the message step

After saving, our entire flow looks like the following

Full flow

From here, we can then use the URL that was created as part of the trigger and have our application invoke the flow.
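
For example, the invocation from a TypeScript application might look like the following sketch. The flow URL is the one PowerAutomate generates for the trigger, and the payload fields match whatever schema you defined; both are placeholders here:

// URL generated by the HTTP request trigger (placeholder)
const flowUrl = 'https://prod-00.westus.logic.azure.com/workflows/<id>/triggers/manual/paths/invoke?<params>';

async function postToTeams(message:string): Promise<void> {
  const response = await fetch(flowUrl, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    // these fields mirror the trigger's payload schema (hypothetical names)
    body: JSON.stringify({ teamId: '<team-id>', channelId: '<channel-id>', message }),
  });
  if (!response.ok) {
    throw new Error(`Flow invocation failed with status ${response.status}`);
  }
}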

Today I Learned: Configuring Git to Use A Specific Key for a Specific Repo

When working with a Git repository, there are two ways to authenticate yourself: either by using your username/password or by leveraging an SSH key to prove who you are. I like the SSH key approach best since I can create the key for my machine, associate it with my profile, and then I'm good to go.

However, the problem arises when I have to juggle different SSH keys. For example, I may have two accounts (a personal one and a work one) that can't use the same SSH key (in fact, if you try to add the same SSH key to two different accounts, you'll get an error message).

In these cases, I'd like to be able to specify which SSH key to use for a given repository.

Doing some digging, I found that the following command configures the sshCommand that Git uses:

git config core.sshCommand 'ssh -i C:\\users\\<name>\\.ssh\\<name_of_private_key>'

Because we're not specifying a scope (like --global), this will only apply to the current repository.
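
To double-check the value later, you can read the setting back:

git config --get core.sshCommand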

Today I Learned: Configuring HttpClient via Service Registration

When integrating with an external service via an API call, it's common to create a class that encapsulates dealing with the API. For example, if I was interacting with the GitHub API, I might create a C# class that wraps the HttpClient, like the following:

public interface IGitHubService
{
    Task<string> GetCurrentUsername();
}

public class GitHubService : IGitHubService
{
    private readonly HttpClient _client;
    public GitHubService(HttpClient client)
    {
        _client = client;
    }
    public async Task<string> GetCurrentUsername()
    {
        // code implementation
    }
}

Repetition of Values

This is a great start, but over time, your class might end up like the following:

public class GitHubService
{
    // User and Team below are response DTOs, omitted for brevity
    private readonly HttpClient _client;
    public GitHubService(HttpClient client)
    {
        _client = client;
    }
    public async Task<string> GetCurrentUsername()
    {
        var result = await _client.GetFromJsonAsync<User>("https://api.github.com/user");
        return result.Login;
    }
    public async Task<List<string>> GetAllUsers()
    {
        var result = await _client.GetFromJsonAsync<List<User>>("https://api.github.com/users");
        return result.Select(x => x.Login).ToList();
    }
    public async Task<List<string>> GetTeamNamesForOrg(string org)
    {
        var result = await _client.GetFromJsonAsync<List<Team>>($"https://api.github.com/orgs/{org}/teams");
        return result.Select(x => x.Name).ToList();
    }
}

Right off the bat, it seems like we're repeating the URL for each method call. To remove the repetition, we could extract it to a constant.

public class GitHubService
{
    private readonly HttpClient _client;
    // Setting the base URL for later usage
    private const string _baseUrl = "https://api.github.com";
    public GitHubService(HttpClient client)
    {
        _client = client;
    }
    public async Task<string> GetCurrentUsername()
    {
        var result = await _client.GetFromJsonAsync<User>($"{_baseUrl}/user");
        return result.Login;
    }
    public async Task<List<string>> GetAllUsers()
    {
        var result = await _client.GetFromJsonAsync<List<User>>($"{_baseUrl}/users");
        return result.Select(x => x.Login).ToList();
    }
    public async Task<List<string>> GetTeamNamesForOrg(string org)
    {
        var result = await _client.GetFromJsonAsync<List<Team>>($"{_baseUrl}/orgs/{org}/teams");
        return result.Select(x => x.Name).ToList();
    }
}

This helps remove the repetition, however, we're now keeping track of a new field, _baseUrl. Instead of using this, we could leverage the BaseAddress property and set that in the service's constructor.

public class GitHubService
{
    private readonly HttpClient _client;
    public GitHubService(HttpClient client)
    {
        _client = client;
        _client.BaseAddress = new Uri("https://api.github.com"); // Setting the base address for the other requests.
    }
    public async Task<string> GetCurrentUsername()
    {
        var result = await _client.GetFromJsonAsync<User>("/user");
        return result.Login;
    }
    public async Task<List<string>> GetAllUsers()
    {
        var result = await _client.GetFromJsonAsync<List<User>>("/users");
        return result.Select(x => x.Login).ToList();
    }
    public async Task<List<string>> GetTeamNamesForOrg(string org)
    {
        var result = await _client.GetFromJsonAsync<List<Team>>($"/orgs/{org}/teams");
        return result.Select(x => x.Name).ToList();
    }
}

I like this refactor because we remove the field and we have our configuration in one spot. That being said, interacting with an API typically requires more information than just the URL, for example, setting the API token or specifying that we're always expecting JSON for the response. We could add the header setup in each method, but that seems quite duplicative.

Leveraging Default Request Headers

We can centralize our request headers by leveraging the DefaultRequestHeaders property and updating our constructor.

public class GitHubService
{
    private readonly HttpClient _client;
    public GitHubService(HttpClient client)
    {
        _client = client;
        _client.BaseAddress = new Uri("https://api.github.com");
        _client.DefaultRequestHeaders.Add("Accept", "application/vnd.github+json");
        _client.DefaultRequestHeaders.Add("Authorization", $"Bearer {yourTokenGoesHere}");
        _client.DefaultRequestHeaders.Add("X-GitHub-Api-Version", "2022-11-28");
    }
    public async Task<string> GetCurrentUsername()
    {
        var result = await _client.GetFromJsonAsync<User>("/user");
        return result.Login;
    }
    public async Task<List<string>> GetAllUsers()
    {
        var result = await _client.GetFromJsonAsync<List<User>>("/users");
        return result.Select(x => x.Login).ToList();
    }
    public async Task<List<string>> GetTeamNamesForOrg(string org)
    {
        var result = await _client.GetFromJsonAsync<List<Team>>($"/orgs/{org}/teams");
        return result.Select(x => x.Name).ToList();
    }
}

I like this refactor because all of our configuration of the service is right next to how we're using it, making it easy to troubleshoot. At this point, we would need to register our service in the Inversion of Control (IoC) container and then everything would work.

Generally, you'll find this in Startup.cs and would look like:

services.AddTransient<IGitHubService, GitHubService>();

An Alternative Approach for Service Registration

However, I learned that when you're building a service that's wrapping an HttpClient, there's another service registration method you could use, AddHttpClient with the Typed Client approach.

Let's take a look at what this would look like.

// In Startup.cs

services.AddHttpClient<IGitHubService, GitHubService>(client => {
    client.BaseAddress = new Uri("https://api.github.com");
    client.DefaultRequestHeaders.Add("Accept", "application/vnd.github+json");
    client.DefaultRequestHeaders.Add("Authorization", $"Bearer {apiTokenGoesHere}");
    client.DefaultRequestHeaders.Add("X-GitHub-Api-Version", "2022-11-28");
});

We've essentially moved our configuration logic from the GitHubService to the IoC container, simplifying the service.

public class GitHubService : IGitHubService
{
    private readonly HttpClient _client;
    public GitHubService(HttpClient client)
    {
        _client = client;
    }
    public async Task<string> GetCurrentUsername()
    {
        var result = await _client.GetFromJsonAsync<User>("/user");
        return result.Login;
    }
    public async Task<List<string>> GetAllUsers()
    {
        var result = await _client.GetFromJsonAsync<List<User>>("/users");
        return result.Select(x => x.Login).ToList();
    }
    public async Task<List<string>> GetTeamNamesForOrg(string org)
    {
        var result = await _client.GetFromJsonAsync<List<Team>>($"/orgs/{org}/teams");
        return result.Select(x => x.Name).ToList();
    }
}

My Thoughts

Even though this is a new-to-me approach, I'm kind of torn on whether I like it. On one hand, I appreciate that we can centralize the logic so that everything for the GitHubService is in one spot. However, if we needed other dependencies to configure the service (for example, if we needed to get the bearer token from AppSettings), I could see this getting a bit more complicated, though still contained.

On the other hand, we could shift all that config to the IoC container and let it deal with that. It definitely streamlines the GitHubService so we can focus on the endpoints and their logic; however, now I've got to look in two spots to see how the client is being configured.

Year in Review: 2024

As the year comes to a close, I like to take this time to reflect on the past year: what got done, how I feel things are progressing, and what changes I should be working on for the next year.

With that in mind, I generally break my goals into three sections: what progress did I make for my career, what progress did I make for the community, and finally, what progress did I make for myself?

Professional

This year, I focused more on learning about consulting and what a successful engagement looks like. As such, this year was a good one, as I had experience working with both larger companies (Fortune 500) and smaller companies. All in all, I find consulting to be a good application of my skills and a natural fit for my process improvement skills.

  • Completed engagement with a Fortune 500 client to teach TypeScript and AWS (members were coming from COBOL)
  • Mentored two interns and helped get them hired full-time
  • Designed a system for custom form creation for an education company (which led to follow-up development work)
  • Started coaching engagement for an AgTech company focusing on delivery and process improvements
  • Designed a system for database health checks for a Database Administrator as a Service company

To help Lean TECHniques with their Microsoft partnership, I spent time over the summer working on obtaining various Microsoft certifications (I ran out of time before getting the Microsoft Certified: Azure Solutions Architect Expert certificate, but that's on my docket for 2025)

Last, but not least, I spent some time learning some (new to me) tech like GraphQL and basic Java.

Community

On the community side, 2024 was a year of firsts. For example, I hosted and organized Queen City Code Camp, a conference where attendees learned the fundamentals of Test Driven Development (TDD) by solving the Mars Rover kata. This was the first time since 2019 that I organized a conference (and absolutely the first time since I moved to North Carolina), so lots of good lessons learned here.

In another first, I created my first training video, Navigating Mars Using Functional Programming in TypeScript, a Udemy course where I walk you through how to solve the Mars Rover kata using functional programming concepts.

Lastly, I created two new talks this year: How to Make Your Own Automation Framework, where I show you how to build your own framework using Deno, TypeScript, and GitHub Actions; and How to Build More Resilient Teams, where I show you how to use post-mortems and experiments to build high-performing teams. In general, I shoot to create one new talk a year, and I over-delivered on that goal this year (with 2025 already looking to be in good shape).

Outside of new talks, I authored 23 different blog posts (my most common topics being Leadership and Functional Programming) and presented at 11 different events.

Personal

  • Read 32 books in various genres (biographical, investing, leadership, history, consulting, process, science fiction, and general fiction)
  • Learned the basics of knitting, made scarves and baby blankets using both seed and garter stitches
  • Learned the basics of working with chisels and planes (including sharpening) for woodworking
  • Built a catio for our new cat, Trix. Catio for Trix

  • Learned how to make two different kinds of coffee drinks (Americano and Latte)

  • Built my first computer from components (major shout-out to Isaac for helping me figure out the components)
  • Picked up over 30 bags of trash around the neighborhood
  • Volunteered with Cub Scout Pack 58

What Does 2025 Have in Store?

Obtain Three Client Engagements

I've spent plenty of time at step 1 (collect information), but it's now time to start putting that information to use (for better or for worse) and get my own clients. If I'm going to be successful, then I need to apply what I learn, make mistakes, mess up, learn, and iterate. As such, I'm going to expand my connections here in Charlotte, homing in on my ideal client. From there, I can expand and see what makes sense.

Author Two Training Courses

I learned a lot about authoring a training course by going through the Udemy process. Though it has a lower barrier to entry to get started (and they handle a good amount of the marketing), the fact that the pricing can fluctuate so much is a bit bonkers. For example, I have the course listed at $99.99, with a coupon that can bring the price down to $49.99. However, Udemy can then cut the price down to $9.99. On one hand, it increases sales; however, the amount of money made (both for Udemy and myself) is not enough to cover the cost.

As such, I'll be spending time finding a new platform for 2025 where I can host my training such that the pricing is a bit more in my control. (Udemy, if you could allow course authors to set a floor on their pricing, I'd be so much happier with the service!)

Build a Hope Chest

I come from a long line of makers. My dad worked in construction and manufacturing, various uncles worked as welders, fabricators, and construction workers, and my grandfather was a carpenter (and moonshiner, but that's a tale for a different day). I've always been fascinated by how things work and how to make them better, hence why I started my journey into woodworking during the pandemic (idle time and I had the space for it).

I spend my work hours building things that either won't be seen by the general public or that I can't talk about at all, so it can be difficult to explain what I do. Plus, you can't really touch software or applications; you can only use them.

Enter woodworking. I can build something with my own two hands and give it to a friend, keep it for myself, or treat it as a lesson for me going forward. I got some experience building small boxes using power tools last year, but I haven't built anything of significant size, and definitely not with hand tools.

My last major goal for 2025 is to build a hope chest, as this is a solid piece of fine furniture and something I can pass down to my children when the time comes.

Building this would be tougher than anything I've built up to this point; however, it should be within the realm of my knowledge. The challenge with this build will be using strictly hand tools to build it out. It'll take me longer and it probably won't be as nice as if I had used machines, but at the end of the day, I can say that I made it.

Functional Design Patterns - Missing Values

When working in software, a common problem we run into is what to do if we can't get a value or compute an answer.

For example, let's say that you go to the library to check out the latest book from Nathalia Holt, but when you search for it, you realize that they don't have any copies in stock.

How do you handle the absence of value? From an Object Oriented Programming (OOP) perspective, you might leverage a reasonable default or the Null Object Pattern. Another approach would be to always return a list; if there are no values, then the list would be empty.
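
As a rough sketch of the Null Object Pattern in TypeScript (the Book type and findBook function are made up for illustration):

interface Book {
  title: string;
  isAvailable(): boolean;
}

// Null Object: a stand-in that behaves safely instead of returning null
const missingBook: Book = {
  title: 'Not Found',
  isAvailable: () => false,
};

function findBook(catalog:Book[], title:string): Book {
  return catalog.find(b => b.title === title) ?? missingBook;
}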

No matter the choice you make, we need some way to model the lack of a value. Tony Hoare referred to null as his billion-dollar mistake, as its inclusion in languages has created countless bugs and errors.

In most functional languages, the idea of null doesn't exist, so how do they handle missing values?

Revisiting Functions

In a previous post, we mentioned that a function is a mapping between two sets such that every element on the left (the first set) is mapped to a single element on the right (the second set). So what happens when we have a mapping that's not a function?

In these cases, we have two solutions to choose from:

  1. Restrict the function input to only be values that can be mapped (e.g., restrict the domain)
  2. Expand the possible values that the function can map to (e.g., expand the range)

When refactoring a mapping, I tend to stick with option 1 because if we can prevent bad data from being created or passed in, that's less error handling to write, and the code becomes simpler to reason about. However, there are plenty of times when our type system (or business rules) can't enforce the valid inputs.
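
As a quick illustration of option 1, if we could enumerate every valid input, the type system would rule out bad data entirely (a sketch; the tiny word list is hypothetical):

type NumberWord = 'one' | 'two' | 'three';

// The compiler rejects convertKnownWord('platypus') outright
function convertKnownWord(word:NumberWord): number {
  switch (word) {
    case 'one': return 1;
    case 'two': return 2;
    case 'three': return 3;
  }
}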

Number Math

Let's say that we're working on a "word calculator" where the user can enter a phrase like "one thousand twelve" and it returns "1012" as output. If we think about our input set (domain) and outputs, we would have the following (mapping from string to number).

Mapping from string to number

The issue is that we can't guarantee that the strings the user gives us will represent a number (for example, how would you convert "platypus" to a number?).

Since we can't restrict the input, we have to expand the output. So let's update our output to be number | 'invalid'

Mapping from string to number or 'invalid'

With this change, anytime we call convertToNumber, we'll need to add some handling on what to do if the output is invalid.

Here's some pseudocode of what this mapping could look like

type WordCalculatorResult = number | 'invalid'

function convertToNumber(phrase:string): WordCalculatorResult {
    if (phrase === 'one') return 1;
    if (phrase === 'two') return 2;
    // ... other logic
    return 'invalid';
}

console.log(convertToNumber('one')); // prints "1"
console.log(convertToNumber('kumquats')); // prints "invalid"

Leaky Context

So far, so good: we have a function that converts words to numbers. Let's now say that we want to square a number. Writing such a function is straightforward.

function square (x:number): number {
    return x*x;
}

With both convertToNumber and square defined, let's try to combine these together:

Compilation error when combining convertToNumber and Square

This doesn't work because convertToNumber can return 'invalid', which the square function has no way to handle. Due to this limitation, we could rewrite the code to check for 'invalid' before calling square.

const result = convertToNumber('one');

if (result !== 'invalid') {
    console.log(square(result));
}

This approach will work, but the downside is that we now need to add error handling logic to our pipeline. If this were the only place convertToNumber is used, it's not too bad. However, if we were using it in multiple places, we'd have to remember to add this error handling each time. In fact, we would have almost the same if check everywhere.

Let's take a look at a different way of solving this problem by using the Maybe type.

Introducing the Maybe Type

In our original approach, we already had a type called WordCalculatorResult for the computation. Instead of it being a number or 'invalid', we could create a new type, called Maybe, that looks like the following:

type Maybe<T> = { label: 'some', value: T } | { label: 'none' }

By leveraging generics and tagged unions, we have a way to represent a missing value for any type. With this new definition, we can rewrite the convertToNumber function to use the Maybe type.

function convertToNumber(phrase:string): Maybe<number> {
    if (phrase === 'one') return {label:'some', value:1};
    if (phrase === 'two') return {label:'some', value:2};
    // other logic
    return {label:'none'};
}

I don't know about you, but typing out {label:'some'} is going to get tiring, so let's create two helper functions, some and none, that handle setting the right label depending on whether we have a value.

function some<T>(value:T): Maybe<T> {
    return {label:'some', value};
}

function none<T>(): Maybe<T> {
    return {label:'none'};
}

Which allows us to replace all of the {label:'some', value:x} with some(x) and {label:'none'} with none().

function convertToNumber(phrase:string): Maybe<number> {
    if (phrase === 'one') return some(1);
    if (phrase === 'two') return some(2);
    // other logic
    return none();
}

Now that convertToNumber returns a Maybe, our pipeline would look like the following:

const result = convertToNumber('one');

if (result.label === 'some') { // checking if we're a Some type
    console.log(square(result.value));
}

Introducing Transformations With Map

Now the problem we want to solve is getting rid of the if check on some just to call the square function. Given that we want to transform our number into a different number, we can write another function, called map, that takes two parameters: the Maybe we want transformed and the function that knows how to transform it. Let's take a look at how to write such a function.

function map<T,U>(m:Maybe<T>, f:(t:T)=>U): Maybe<U> {
    if (m.label === 'some') { // Is Maybe a Some?
        const transformedValue = f(m.value); // Grab the value and transform it by calling function f
        return some(transformedValue) // return a new Maybe with the transformed value
    }
    return none(); // Return none since there's no value to transform
}

With map implemented, let's update our code to use it!

const convertResult = convertToNumber('one');
const transformedResult = map(convertResult, square);

if (transformedResult.label === 'some') { // checking if we're a Some type
    console.log(transformedResult.value);
}

/* The above could be simplified as the following
const result = map(convertToNumber('one'), square);
if (result.label === 'some') {
  console.log(result.value)
}
*/

Thanks to map and Maybe, we're able to write two pure functions (convertToNumber and square), glue them together, and delegate our error handling to one spot (instead of having to sprinkle this throughout the codebase).

Calling Side Effects Using Tee

With the addition of map, we've simplified our if check to only call console.log (instead of calling the square function and then console.log). If we wanted to truly get rid of this if statement, then we need a way to invoke console.log as a side effect. We don't want to fold console.log into square because square is business rules, and we don't want to add side effect logic to our business rules.

Similar to what we did for map, let's add a new function, called tee, that takes in three arguments: a Maybe, a function to call if the Maybe has a value, and a function to call if the Maybe doesn't have a value.

Let's take a look at what an implementation would look like:

/*
 Inputs:
    m: A Maybe to work with
    ifSome: A function that takes in a T (which comes from the Maybe) and returns void
    ifNone: A function that takes no inputs and returns void
 Outputs:
    Returns m so that we can continue chaining calls
*/
function tee<T>(m:Maybe<T>, ifSome:(t:T)=>void, ifNone:()=>void): Maybe<T> {
    if (m.label === 'some') {
        ifSome(m.value);
    } else {
        ifNone();
    }
    return m;
}

With tee implemented, we can update our pipeline to take advantage of it:

const convertResult = convertToNumber('one');
const transformedResult = map(convertResult, square);

const ifSome = (t:number) => console.log(`Result is ${t}`);
const ifNone = () => console.log('Failed to calculate result.');

tee(transformedResult, ifSome, ifNone);

// if we wanted to get rid of the temp values, we could do the following
// const result = tee(map(convertToNumber('one'), square), ifSome, ifNone);

Cleaning Up by using an Object

We've now centralized the if logic in the functions that work on Maybes, which is helpful, but as we're already seeing, the code can be hard to parse as you need to read from the outside in (which is very Clojure or Scheme in nature).

Instead of building up functions this way, we can instead create a TypeScript class and use a fluent interface pattern by doing the following:

export class Maybe<T>
{
  private constructor(private readonly value:T|undefined=undefined){}
  map<U>(f:(t:T)=>U): Maybe<U> {
    return this.value !== undefined ? Maybe.some(f(this.value)) : Maybe.none();
  }
  tee(ifSome:(t:T)=>void, ifNone:()=>void) {
    if (this.value !== undefined) {
      ifSome(this.value);
    } else {
      ifNone();
    }
    return this;
  }
  static some<T>(value:T): Maybe<T> {
    return new Maybe(value);
  }
  static none<T>(): Maybe<T> {
    return new Maybe();
  }
}
// Defined in index.ts
function convertToNumber(phrase:string): Maybe<number> {
    if (phrase === 'one') return Maybe.some(1);
    if (phrase === 'two') return Maybe.some(2);
    // other logic
    return Maybe.none();
}

function square(x:number): number {
    return x*x;
}

// By using the fluent interface pattern, we can chain methods together and have this code read
// more naturally
convertToNumber("one")
  .map(square)
  .tee(
    (v) => console.log(`Answer is ${v}`),
    () => console.log("Unable to calculate result")
  );

Wrapping Up

When writing software, a common problem we run into is what to do when we can't calculate a value or a result. In this post, we looked at techniques to restrict a function's input or expand its output to handle these scenarios. From there, we looked into a type called Maybe and how it can represent the absence of a value while still letting us avoid explicitly writing error handling code everywhere by consolidating the checks into a map call. Lastly, we took the various functions we wrote and combined them into a formal Maybe class that allows us to leverage a fluent interface and chain our interactions together.

Functional Design Patterns - Reduce and Monoids

When learning about loops, a common exercise is to take an array of elements and calculate some value based on them. For example, let's say that we have an array of numbers and we want to find their sum. One approach would be the following:

const values = [1, 2, 3, 4, 5];
let total = 0;
for (const val of values) {
    total += val;
}

This code works and we'll get the right answer, however, there's a pattern sitting here. If you find yourself writing code that has this shape (some initial value and some logic in the loop), then you have a reducer in disguise.

Let's convert this code to use the reduce function instead.

const values = [1, 2, 3, 4, 5];
const total = values.reduce((acc, curr) => acc + curr, 0);

The reduce function takes two parameters, the reducer and the initial value. For the initial value, this is the value that the result should be if the array is empty. Since we're adding numbers together, zero makes the most sense.

The reducer, on the other hand, says that if you give me the accumulated result so far (called acc in the example) and an element from the array (called curr in the above example), what's the new accumulation?
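
To make the mechanics concrete, here's how the accumulator evolves for our example:

// values.reduce((acc, curr) => acc + curr, 0)
// acc = 0           (initial value)
// acc = 0 + 1 = 1
// acc = 1 + 2 = 3
// acc = 3 + 3 = 6
// acc = 6 + 4 = 10
// acc = 10 + 5 = 15 (final value of total)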

Reduce is an extremely powerful tool (in fact, I give a presentation where we rewrite some of C#'s LINQ operators as reduce).

But there's another pattern sitting here. If you find that the initial value and elements of the array have the same type, you most likely have a monoid.

Intro to Monoids

The concept of Monoid comes from the field of category theory where a Monoid contains three things

  1. A set of values (you can think about this as a type)
  2. Some binary operation (e.g., a function that takes two inputs and returns a single output of said type)
  3. Some identity element such that if you pass the id to the binary operation, you get the other element back.

There's a lot of theory sitting here, so let's put it in more concrete terms.

If we have a Monoid over a type A, then the following must be true:

function operation<A>(x:A, y:A): A 
{
    // logic for operation
}
const identity: A = // some value of type A

// identity laws: combining with the identity returns the other value
operation(x, identity) === x
operation(identity, x) === x

// associativity law: grouping doesn't matter
operation(x, operation(y, z)) === operation(operation(x, y), z)

Here's how we would define such a thing in TypeScript.

export interface Monoid<A>{
    operation: (x:A, y:A)=> A;
    identity:A;
}

Exploring Total

With more theory in place, let's apply it to our running total from before. It turns out that addition forms a Monoid over numbers with the following:

const additionOverNumber: Monoid<number> = {
    identity: 0,
    operation: (a:number, b:number) => a + b
}

Wait a minute... This looks like what the reduce function needed from before!

In the case where we have a Monoid, we have a way of reducing an array to a single value for free because of the properties that Monoid gives us.
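
We can make that "for free" part explicit with a small helper that folds any array using any Monoid (a sketch built on the Monoid interface above):

function fold<A>(m:Monoid<A>, values:A[]): A {
    return values.reduce(m.operation, m.identity);
}

console.log(fold(additionOverNumber, [1, 2, 3, 4, 5])); // 15
console.log(fold(additionOverNumber, [])); // 0, the identity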

Exploring Booleans

Thankfully, we're not limited to just numbers. For example, let's take a look at booleans with the && and || operators.

In the case of &&, that's our operation, so now we need to find the identity element. In other words, what value must id be if the following statements are true?

const id: boolean = ...
(id && true) === true
(id && false) === false

Since id has to be a boolean, the answer is true. Therefore, we can define our Monoid like so

const AndOverBoolean: Monoid<boolean> = {
    identity: true,
    operation: (a:boolean, b:boolean) => a && b
}

With this monoid defined, let's put it to use. Let's say that we wanted to check if every number in a list is even. We could write the following:

const values = [1, 2, 3, 4, 5];

const areAllEven = values.map(x=>x%2===0).reduce(AndOverBoolean.operation, AndOverBoolean.identity);

Huh, that looks an awful lot like how we could have used every.

const values = [1, 2, 3, 4, 5];
const areAllEven = values.every(x=>x%2===0);

Let's take a look at the || monoid now. We have the operation, but now we need to find the identity. In other words, what value must id be if the following statements are true?

const id: boolean = ...;
(id || true) === true
(id || false) === false

Since id has to be a boolean, the answer is false. Therefore, we can define our monoid as such.

const OrOverBoolean: Monoid<boolean> = {
    identity: false,
    operation: (x:boolean, y:boolean) => x || y
} 

With this monoid defined, let's put it to use. Let's say that we wanted to check if some number in a list is even. We could write the following:

const values = [1, 2, 3, 4, 5];

const isAnyEven = values.map(x=>x%2===0).reduce(OrOverBoolean.operation, OrOverBoolean.identity);

As with AndOverBoolean, this looks very similar to the code we would have written if we had leveraged some.

const values = [1, 2, 3, 4, 5];

const isAnyEven = values.some(x => x%2 === 0);

Wrapping Up

When working with arrays of items, it's common to need to reduce the list down to a single value. You can start with a for loop and then refactor to the reduce function. If the types are all the same, then it's likely that you also have a monoid, which can give you stronger guarantees about your code.

Today I Learned - Leveraging Mock Names with Jest

I was working through the Mars Rover kata the other day and found myself in a predicament when trying to test one of the functions, the convertCommandToAction function.

The idea behind the function is that based on the Command you pass in, it'll return the right function to call. The code looks something like this.

type Command = 'MoveForward' | 'MoveBackward' | 'TurnLeft' | 'TurnRight' | 'Quit'
type Action = (r:Rover) => Rover; // Rover type omitted for brevity

const moveForward:Action = (r:Rover):Rover => {
  // business rules
}
const moveBackward:Action = (r:Rover): Rover => {
  // business rules
}
const turnLeft:Action = (r:Rover):Rover => {
  // business rules
}
const turnRight:Action = (r:Rover): Rover => {
  // business rules
}
const quit:Action = (r:Rover):Rover => {
  // business rules
}

// Function that I'm wanting to write tests against.
function convertCommandToAction(c:Command): Action {
  switch (c) {
    case 'MoveForward': return moveForward;
    case 'MoveBackward': return moveBackward;
    case 'TurnLeft': return turnLeft;
    case 'TurnRight': return turnRight;
    case 'Quit': return quit;
  }
}

I'm able to write tests across all the other functions easily enough, but for the convertCommandToAction, I needed some way to know which function is being returned.

Since I don't want the real functions to be used, my mind went to leveraging Jest and mocking out the module that the actions were defined in, yielding the following test setup.

import { Command } from "./models";
import { convertCommandToAction, convertStringToCommand } from "./parsers";

jest.mock("./actions", () => ({
  moveForward: jest.fn(),
  moveBackward: jest.fn(),
  turnLeft: jest.fn(),
  turnRight: jest.fn(),
  quit: jest.fn(),
}));

describe("When converting a Command to an Action", () => {
  it("and the command is MoveForward, then the right action is returned", () => {
    const result = convertCommandToAction("MoveForward");

    // What should my expect be?
    expect(result);
  });
});

One approach that I have used in the past is Jest's ability to test if a function is a mocked function; however, that approach doesn't work here because all of the functions are being mocked out. Meaning that my test would pass, but if I returned moveBackward instead of moveForward, my test would still pass (just now for the wrong reason). I needed a way to know which function was being returned.

Doing some digging, I found that jest.fn() has a way of setting a name for a mock by leveraging the mockName function. This in turn allowed me to change my setup to look like this.

jest.mock("./actions", () => ({
  moveForward: jest.fn().mockName('moveForward'),
  moveBackward: jest.fn().mockName('moveBackward'),
  turnLeft: jest.fn().mockName('turnLeft'),
  turnRight: jest.fn().mockName('turnRight'),
  quit: jest.fn().mockName('quit'),
}));
Note: It turns out that the mockName function is part of a fluent interface, which allows it to return a jest.Mock as the result of the mockName call.

With my setup updated, my tests can now check that the result has the right mockName.

describe("When converting a Command to an Action", () => {
  it("and the command is MoveForward, then the right action is returned", () => {

    // have to cast result to a jest.Mock to make TypeScript happy
    const result = convertCommandToAction("MoveForward") as unknown as jest.Mock;

    expect(result.getMockName()).toBe("moveForward");
  });
});

Wrapping Up

If you find yourself writing functions that return other functions (i.e., leveraging functional programming concepts), then you should check out using mockName to keep track of which functions are being returned.

An Alternative Approach to Sprint Planning - Introducing Hippogriff

Background

One of my first jobs in engineering was working for a medical device company. This was pretty cool, as I wrote software that interacted with the device to show measurements and give recommendations to the physicians about those measurements.

To ensure that things were working correctly, we had a department called V&V (Validation and Verification). I had never heard of this term before in my career, so my boss explained that V&V was responsible for ensuring that we built the right thing, because it's engineering's job to build it right.

These two principles (build the right thing and build it right) have stuck with me during my career. So much so that I believe this may have been the start of my interest in product engineering and wanting to understand the "why" behind the stuff that I build.

It was at this same job that I was introduced to the concept of Kanban and the idea of eliminating waste from our process, as waste inherently slows us down by focusing on the wrong things. I'm known to be a process improver at heart, and the idea of Kaizen (continuous improvement) resonates with me.

So when I think about how many teams are tackling work breakdown and estimating, I can't help but think that we're spending our time on the wrong things and not enough on the right things.

My Experiences with Engineering Teams Today

A common piece of feedback with teams that are following Scrum principles is that they feel like they are in meetings all the time and there's no time to do the actual work. As someone who has a love/hate relationship with meetings, this is totally understandable.

There are a set number of meetings ("events" in Scrum parlance) that teams follow, one of which is sprint planning, a time to make sure that what the team is working on is aligned with priorities from product.

While I find value in the synchronization with product, I don't find very much value in the estimation portion of the meeting as it gets the focus on the wrong things.

If we look at the value that estimates provide, the goal is to ensure that the team is taking on a reasonable load for the sprint (e.g., what are our commitments) and not over-extending, as that can cause burnout.

I have two problems with estimations. First, they're way too easy to be taken as a deadline (so much so that you've probably seen this meme spread around), which in turn causes the team to go deeper into estimating stories (breaking them down into single tasks), which causes a feedback loop.

If this keeps up, you'll eventually find yourself going into Waterfall mode, where the team needs every single requirement up front before they can do any development, the opposite of what we're trying to do.

Instead of the team focusing on estimations, I'd much rather see them strategize on an approach to the problem and let that be the driver to the work.

One approach that I've seen teams take is to use "relative sizing", so instead of saying that work will take four hours, they might say that it's a "2" (if using the Fibonacci approach) or it's a "Small" (if using T-Shirt sizing).

Side Note: I've also seen fruit and animals as relative sizing options.

The problem I have with relative sizing is that while it can be helpful for the team, it's total nonsense for business stakeholders. For example, let's say that we're working on the new Shiny Widget 9000 and Marketing wants to know when it's going to be ready so that they can start getting the marketing materials ready and start promotion. You can't tell them that it's going to take 108 story points or 16 Mediums, as that is meaningless.

They need a timeline, so what does engineering do? They look up historic trends for the team and say something like "We typically get 4 Mediums done in 2 weeks, so we're looking at 2 months, give or take". Which means that we're correlating a relative size to a unit of time. So what value did we gain?

When I think about planning, I'm focusing on stories that are independent, deliver value, and can be accomplished within a sprint. I don't necessarily care if the story is a 2, 3, or 5 as long as the team has a rough approach to the work and understands why we need the functionality.

A Different Approach

Instead of spending time on estimates, what if we approached planning this way?

  1. Product and Engineering work together to break priorities down to smaller items that can be accomplished in a sprint (remembering to keep them independent).
  2. Team works together to move the stories in priority order (taking into account dependencies).
  3. Team takes on the first item.
  4. Product and Engineering can move/re-prioritize items as needed, but can't modify work in flight.
  5. As an item is completed, the team takes on the next important item.

This focuses the meeting on the important things (what's important for us to work on, how would we approach it) and less on the things that don't matter (fine-tuned estimations).

For those keeping track, this sounds pretty similar to Kanban (which it absolutely is 😀), but if you're looking for a catchy name, you can call this the Highest Priority Goes First (or Hippogriff) framework.

Long story short, I'd like to see teams spending less time on estimations and more time on figuring out what the problem is, solutioning, getting feedback, and iterating.

Isn't that what the Agile Manifesto is all about?

While working on this post, I found this article by Mountain Goat Software to be immensely helpful in capturing the goals of sprint planning and some external validation that I'm not the only one who's experienced this problem as well.

Today I Learned: Changing the Entrypoint for a Docker Container

A common pattern for building software today is to develop using containers. This is a great strategy because it can ensure that everyone is building/deploying the same environment every time. However, like any other tooling, it can take a bit to create the file and make sure that you're setting it up correctly.

When I'm building a container, I typically will run bash in the container so I can inspect files, paths, permissions and more.

For example, let's say that I wanted to see what all is in the node image. I could run the following:

docker run -it node:22 bash

And as long as the node image has bash installed, I can take a look.

This trick works just fine when I have access to the Dockerfile and can change it. But what if I didn't have access to the Dockerfile? How would I troubleshoot?

The Scenario

Let's say that I'm using a container image created by another team, called deps. When I try to run it though, I get the following error:

docker run -it deps:latest

Node error stating that it could not find module '/app/hello.js'

Looking at the error message, it says that it couldn't find /app/hello.js. I could let the other team know and let them figure it out. However, I'd like to give them a bit more info and possible advice, so I could use the bash trick from before and see if I can spot the problem.

docker run -it deps:latest bash

Node error stating that it could not find module '/app/hello.js'

Wait a minute! Why did I get the same error?

The reason is that the image has an ENTRYPOINT defined, which means that whenever the container starts, it runs that command; anything extra we pass (like bash) is treated as arguments to the entrypoint rather than a command of its own. Since the entrypoint command is the one that's failing, the container crashes before we ever get a shell.
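
As an aside, if you want to confirm what an image's entrypoint is without its Dockerfile, docker inspect can show you (assuming the image is available locally):

docker inspect --format '{{.Config.Entrypoint}}' deps:latest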

The Solution

Since I don't have access to the Dockerfile, I need a way to change the entrypoint to allow it to run bash. Luckily, it turns out that the docker run command has a --entrypoint param that you can set to be whatever the command should be.

So let's run the container, except this time, specifying bash as the entrypoint.

docker run -it --entrypoint "bash" deps:latest

And if I run the ls command, I can see that the issue is that the file is called index.js, not hello.js.

There's no file called hello.js; it's called index.js

From here, I can give this information to the other team and they can make the necessary changes to their Dockerfile.