
Functional Design Patterns - Missing Values

When working in software, a common problem we run into is deciding what to do when we can't get a value or compute an answer.

For example, let's say that you go to the library to check out the latest book from Nathalia Holt, but when you search for it, you realize that they don't have any copies in stock.

How do you handle the absence of a value? From an Object Oriented Programming (OOP) perspective, you might leverage a reasonable default or the Null Object Pattern. Another approach would be to always return a list; if there are no values, then the list would be empty.
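As a quick sketch of that last approach (the names here are hypothetical), returning an empty list means callers never have to special-case a missing result:

type Book = { title: string; author: string };

// Hypothetical lookup: "no copies in stock" comes back as an empty list, not null
function findBooksByAuthor(catalog: Book[], author: string): Book[] {
    return catalog.filter((book) => book.author === author);
}

const results = findBooksByAuthor([], 'Nathalia Holt');
console.log(results.length); // 0 - no null checks required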

No matter the choice you make, we need some way to model the lack of a value. Tony Hoare referred to null as his billion-dollar mistake, as its inclusion in languages has created countless bugs and errors.

In most functional languages, the idea of null doesn't exist, so how do they handle missing values?

Revisiting Functions

In a previous post, we mentioned that a function is a mapping between two sets such that every element on the left (the first set) is mapped to a single element on the right (the second set). So what happens when we have a mapping that's not a function?

In these cases, we have two solutions to choose from:

  1. Restrict the function input to only be values that can be mapped (e.g., restrict the domain)
  2. Expand the possible values that the function can map to (e.g., expand the range)

When refactoring a mapping, I tend to stick with option 1: if we can prevent bad data from being created or passed in, that's less error handling we need to write, and the code becomes simpler to reason about. However, there are plenty of times when our type system (or business rules) can't enforce the valid inputs.
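To make option 1 concrete, here's a sketch of restricting the domain with a union type (a hypothetical example, separate from the word calculator below):

type KnownWord = 'one' | 'two' | 'three';

function knownWordToNumber(word: KnownWord): number {
    switch (word) {
        case 'one': return 1;
        case 'two': return 2;
        case 'three': return 3;
    }
}

knownWordToNumber('one');         // compiles
// knownWordToNumber('platypus'); // compile-time error - bad data can't get in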

Number Math

Let's say that we're working on a "word calculator" where the user can enter a phrase like "one thousand twelve" and it returns "1012" as output. If we think about our input set (domain) and outputs, we would have the following (mapping from string to number).

Mapping from string to number

The issue is that we can't guarantee that the strings the user gives us would represent a number (for example, how would you convert "platypus" to a number?)

Since we can't restrict the input, we have to expand the output. So let's update our output to be number | 'invalid'.

Mapping from string to number or 'invalid'

With this change, anytime we call convertToNumber, we'll need to add some handling on what to do if the output is invalid.

Here's some pseudocode of what this mapping could look like:

type WordCalculatorResult = number | 'invalid'

function convertToNumber(phrase:string): WordCalculatorResult {
    if (phrase === 'one') return 1;
    if (phrase === 'two') return 2;
    // ... other logic
    return 'invalid';
}

console.log(convertToNumber('one')); // prints "1"
console.log(convertToNumber('kumquats')); // prints "invalid"

Leaky Context

So far, so good: we have a function that converts words to numbers. Let's now say that we want to square a number. Writing such a function is straightforward.

function square (x:number): number {
    return x*x;
}

With both convertToNumber and square defined, let's try to combine these together:

Compilation error when combining convertToNumber and Square

This doesn't work because convertToNumber can return 'invalid', which the square function has no way to handle. Due to this limitation, we could rewrite the code to check for 'invalid' before calling square.

const result = convertToNumber('one');

if (result !== 'invalid') {
    console.log(square(result));
}

This approach works, but the downside is that we now need error handling logic in our pipeline. If this were the only place convertToNumber was used, that wouldn't be too bad. However, if we were using convertToNumber in multiple places, we'd have to remember to add this error handling each time. In fact, we'd end up with almost the same if check everywhere.

Let's take a look at a different way of solving this problem by using the Maybe type.

Introducing the Maybe Type

In our original approach, we already had a type called WordCalculatorResult for the computation. Instead of it being a number or 'invalid', we could create a new type, called Maybe, that looks like the following:

type Maybe<T> = { label: 'some', value: T } | { label: 'none' }

By leveraging generics and tagged unions, we have a way to represent a missing value for any type. With this new definition, we can rewrite the convertToNumber function to use the Maybe type.

function convertToNumber(phrase:string): Maybe<number> {
    if (phrase === 'one') return {label:'some', value:1};
    if (phrase === 'two') return {label:'some', value:2};
    // other logic
    return {label:'none'};
}

I don't know about you, but typing out {label:'some'} is going to get tiring, so let's create two helper functions, some and none, that handle setting the right label depending on whether we have a value.

function some<T>(value:T): Maybe<T> {
    return {label:'some', value};
}

function none<T>(): Maybe<T> {
    return {label:'none'};
}

This allows us to replace all of the {label:'some', value:x} occurrences with some(x) and {label:'none'} with none().

function convertToNumber(phrase:string): Maybe<number> {
    if (phrase === 'one') return some(1);
    if (phrase === 'two') return some(2);
    // other logic
    return none();
}

Now that convertToNumber returns a Maybe, our pipeline would look like the following:

const result = convertToNumber('one');

if (result.label === 'some') { // checking if we're a Some type
    console.log(square(result.value));
}

Introducing Transformations With Map

The problem we now want to solve is getting rid of the if check on some just to call the square function. Given that we want to transform our number into a different number, we can write another function, called map, that takes two parameters: the Maybe we want transformed and the function that knows how to transform its value. Let's take a look at how to write such a function.

function map<T,U>(m:Maybe<T>, f:(t:T)=>U): Maybe<U> {
    if (m.label === 'some') { // Is Maybe a Some?
        const transformedValue = f(m.value); // Grab the value and transform it by calling function f
        return some(transformedValue); // return a new Maybe with the transformed value
    }
    return none(); // Return none since there's no value to transform
}

With map implemented, let's update our code to use it!

const convertResult = convertToNumber('one');
const transformedResult = map(convertResult, square);

if (transformedResult.label === 'some') { // checking if we're a Some type
    console.log(transformedResult.value);
}

/* The above could be simplified as the following
const result = map(convertToNumber('one'), square);
if (result.label === 'some') {
  console.log(result.value)
}
*/

Thanks to map and Maybe, we're able to write two pure functions (convertToNumber and square), glue them together, and delegate our error handling to one spot (instead of having to sprinkle this throughout the codebase).
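As a bonus, further transformations compose without introducing any new checks. For example (double is a hypothetical helper, not defined earlier):

const double = (x: number): number => x * 2;

// If convertToNumber returns none, both maps simply pass it along
const chained = map(map(convertToNumber('two'), square), double); // some(8)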

Calling Side Effects Using Tee

With the addition of map, our if check now only calls console.log (instead of calling the square function and then console.log). If we wanted to truly get rid of this if statement, then we'd need a way to invoke console.log as a side effect. We don't want to combine console.log into square because square is business rules, and we don't want to add side effect logic to our business rules.

Similar to what we did for map, let's add a new function, called tee, that takes three arguments: a Maybe, a function to call if the Maybe has a value, and a function to call if it doesn't.

Let's take a look at what an implementation would look like:

/*
 Inputs:
    m: A Maybe to work with
    ifSome: A function that takes in a T (which comes from the Maybe) and returns void
    ifNone: A function that takes no inputs and returns void
 Outputs:
    Returns m so that we can continue chaining calls
*/
function tee<T>(m:Maybe<T>, ifSome:(t:T)=>void, ifNone:()=>void): Maybe<T> {
    if (m.label === 'some') {
        ifSome(m.value);
    } else {
        ifNone();
    }
    return m;
}

With tee implemented, we can update our pipeline to take advantage of it:

const convertResult = convertToNumber('one');
const transformedResult = map(convertResult, square);

const ifSome = (t:number) => console.log(`Result is ${t}`);
const ifNone = () => console.log('Failed to calculate result.');

tee(transformedResult, ifSome, ifNone);

// if we wanted to get rid of the temp values, we could do the following
// const result = tee(map(convertToNumber('one'), square), ifSome, ifNone);

Cleaning Up by Using an Object

We've now centralized the if logic in the functions that work on Maybes, which is helpful. But as we're already seeing, the code can be hard to parse, as you need to read it from the outside in (very Clojure or Scheme in nature).

Instead of building up functions this way, we can create a TypeScript class and use the fluent interface pattern by doing the following:

export class Maybe<T>
{
  private constructor(private readonly value:T|undefined=undefined){}
  map<U>(f:(t:T)=>U): Maybe<U> {
    return this.value !== undefined ? Maybe.some(f(this.value)) : Maybe.none();
  }
  tee(ifSome:(t:T)=>void, ifNone:()=>void) {
    if (this.value !== undefined) {
      ifSome(this.value);
    } else {
      ifNone();
    }
    return this;
  }
  static some<T>(value:T): Maybe<T> {
    return new Maybe(value);
  }
  static none<T>(): Maybe<T> {
    return new Maybe();
  }
}
// Defined in index.ts
function convertToNumber(phrase:string): Maybe<number> {
    if (phrase === 'one') return Maybe.some(1);
    if (phrase === 'two') return Maybe.some(2);
    // other logic
    return Maybe.none();
}

function square(x:number): number {
    return x*x;
}

// By using the fluent interface pattern, we can chain methods together and have this code read
// more naturally
convertToNumber("one")
  .map(square)
  .tee(
    (v) => console.log(`Answer is ${v}`), 
    () => console.log("Unable to calculate result")
  );

Wrapping Up

When writing software, a common problem we run into is what to do when we can't calculate a value or a result. In this post, we looked at techniques to restrict a function's input or expand its output to handle these scenarios. From there, we looked at a type called Maybe, how it can represent the absence of a value, and how it removes the need for explicit error handling code by consolidating the checks into a map call. Lastly, we took the various functions that we wrote and combined them into a formal Maybe class that lets us leverage a fluent interface and chain our interactions together.

Functional Design Patterns - Reduce and Monoids

When learning about loops, a common exercise is to take an array of elements and calculate some value based on them. For example, let's say that we have an array of numbers and we want to find their sum. One approach would be the following:

const values = [1, 2, 3, 4, 5];
let total = 0;
for (const val of values) {
    total += val;
}

This code works and we'll get the right answer; however, there's a pattern sitting here. If you find yourself writing code that has this shape (some initial value and some logic in the loop), then you have a reducer in disguise.

Let's convert this code to use the reduce function instead.

const values = [1, 2, 3, 4, 5];
const total = values.reduce((acc, curr) => acc + curr, 0);

The reduce function takes two parameters: the reducer and the initial value. The initial value is what the result should be if the array is empty. Since we're adding numbers together, zero makes the most sense.

The reducer, on the other hand, says that if you give me the accumulated result so far (called acc in the example) and an element from the array (called curr in the above example), what's the new accumulation?
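As a quick sanity check of that empty-array behavior:

const emptyValues: number[] = [];
const emptyTotal = emptyValues.reduce((acc, curr) => acc + curr, 0);
console.log(emptyTotal); // 0 - the initial value, since there's nothing to accumulate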

Reduce is an extremely powerful tool (in fact, I give a presentation where we rewrite some of C#'s LINQ operators as reduce).

But there's another pattern sitting here. If you find that the initial value and elements of the array have the same type, you most likely have a monoid.

Intro to Monoids

The concept of Monoid comes from the field of category theory, where a Monoid contains three things:

  1. A set of values (you can think about this as a type)
  2. Some binary operation (e.g., a function that takes two inputs and returns a single output of said type)
  3. Some identity element such that if you pass the identity and another element to the binary operation, you get that other element back.

There's a lot of theory sitting here, so let's put it in more concrete terms.

If we have a Monoid over a type A, then the following must be true:

function operation<A>(x:A, y:A): A 
{
    // logic for operation
}
const identity: A = // some value of type A
operation(x, identity) === operation(identity, x) === x // identity law

operation(x, operation(y, z)) === operation(operation(x, y), z) // associativity law

Here's how we would define such a thing in TypeScript.

export interface Monoid<A>{
    operation: (x:A, y:A)=> A;
    identity:A;
}

Exploring Total

With more theory in place, let's apply it to our running total from before. It turns out that addition forms a Monoid over numbers with the following:

const additionOverNumber: Monoid<number> = {
    identity: 0,
    operation: (a:number, b:number) => a+b
}

Wait a minute... This looks like what the reduce function needed from before!

In the case where we have a Monoid, we have a way of reducing an array to a single value for free because of the properties that Monoid gives us.
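To make "for free" concrete, here's a small sketch of a helper (fold is my name for it, not something defined earlier) that reduces any array given its monoid:

function fold<A>(monoid: Monoid<A>, values: A[]): A {
    return values.reduce(monoid.operation, monoid.identity);
}

console.log(fold(additionOverNumber, [1, 2, 3, 4, 5])); // 15
console.log(fold(additionOverNumber, []));              // 0 - the identity covers the empty case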

Exploring Booleans

Thankfully, we're not limited to just numbers. For example, let's take a look at booleans with the && and || operators.

In the case of &&, that's our operation, so now we need to find the identity element. In other words, what value must id be if the following statements are true?

const id: boolean = ...
(id && true) === true
(id && false) === false

Since id has to be a boolean, the answer is true. Therefore, we can define our Monoid like so

const AndOverBoolean: Monoid<boolean> = {
    identity: true,
    operation: (a:boolean, b:boolean) => a && b
}

With this monoid defined, let's put it to use. Let's say that we wanted to check if every number in a list is even. We could write the following:

const values = [1, 2, 3, 4, 5];

const areAllEven = values.map(x=>x%2===0).reduce(AndOverBoolean.operation, AndOverBoolean.identity);

Huh, that looks an awful lot like how we could have used every.

const values = [1, 2, 3, 4, 5];
const areAllEven = values.every(x=>x%2===0);

Let's take a look at the || monoid now. We have the operation, but now we need to find the identity. In other words, what value must id be if the following statements are true?

const id: boolean = ...;
(id || true) === true
(id || false) === false

Since id has to be a boolean, the answer is false. Therefore, we can define our monoid as such.

const OrOverBoolean: Monoid<boolean> = {
    identity: false,
    operation: (x:boolean, y:boolean) => x || y
} 

With this monoid defined, let's put it to use. Let's say that we wanted to check if some number in a list is even. We could write the following:

const values = [1, 2, 3, 4, 5];

const isAnyEven = values.map(x=>x%2===0).reduce(OrOverBoolean.operation, OrOverBoolean.identity);

As with AndOverBoolean, this looks very similar to the code we would have written had we leveraged some.

const values = [1, 2, 3, 4, 5];

const isAnyEven = values.some(x => x%2 === 0);

Wrapping Up

When working with arrays of items, it's common to need to reduce the list down to a single value. You can start with a for loop and then refactor to using the reduce function. If the types are all the same, then it's likely that you also have a monoid, which can give you stronger guarantees about your code.

Today I Learned - Leveraging Mock Names with Jest

I was working through the Mars Rover kata the other day and found myself in a predicament when trying to test one of the functions, the convertCommandToAction function.

The idea behind the function is that based on the Command you pass in, it'll return the right function to call. The code looks something like this.

type Command = 'MoveForward' | 'MoveBackward' | 'TurnLeft' | 'TurnRight' | 'Quit'
type Action = (r:Rover) => Rover;

const moveForward:Action = (r:Rover):Rover => {
  // business rules
}
const moveBackward:Action = (r:Rover): Rover => {
  // business rules
}
const turnLeft:Action = (r:Rover):Rover => {
  // business rules
}
const turnRight:Action = (r:Rover): Rover => {
  // business rules
}
const quit:Action = (r:Rover):Rover => {
  // business rules
}

// Function that I'm wanting to write tests against.
function convertCommandToAction(c:Command): Action {
  switch (c) {
    case 'MoveForward': return moveForward;
    case 'MoveBackward': return moveBackward;
    case 'TurnLeft': return turnLeft;
    case 'TurnRight': return turnRight;
    case 'Quit': return quit;
  }
}

I was able to write tests across all the other functions easily enough, but for convertCommandToAction, I needed some way to know which function was being returned.

Since I don't want the real functions to be used, my mind went to leveraging Jest and mocking out the module that the actions were defined in, yielding the following test setup.

import { Command } from "./models";
import { convertCommandToAction, convertStringToCommand } from "./parsers";

jest.mock("./actions", () => ({
  moveForward: jest.fn(),
  moveBackward: jest.fn(),
  turnLeft: jest.fn(),
  turnRight: jest.fn(),
  quit: jest.fn(),
}));

describe("When converting a Command to an Action", () => {
  it("and the command is MoveForward, then the right action is returned", () => {
    const result = convertCommandToAction("MoveForward");

    // What should my expect be?
    expect(result);
  });
});

One approach that I have used in the past is jest's ability to check whether a function is a mocked function. However, that approach doesn't work here because all of the functions are being mocked out. My test would pass, but if I returned moveBackward instead of moveForward, it would still pass (now for the wrong reason). I needed a way to know which function was being returned.
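For reference, here's a sketch of that earlier approach (using jest.isMockFunction) and why it falls short in this setup:

const result = convertCommandToAction("MoveForward");

// Confirms we got *a* mock, but since every action is mocked,
// this assertion would also pass if moveBackward were returned
expect(jest.isMockFunction(result)).toBe(true);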

Doing some digging, I found that jest.fn() has a way of setting a name for a mock by leveraging the mockName function. This in turn allowed me to change my setup to look like this.

jest.mock("./actions", () => ({
  moveForward: jest.fn().mockName('moveForward'),
  moveBackward: jest.fn().mockName('moveBackward'),
  turnLeft: jest.fn().mockName('turnLeft'),
  turnRight: jest.fn().mockName('turnRight'),
  quit: jest.fn().mockName('quit'),
}));
Note: It turns out that the mockName function is part of a fluent interface; it returns the jest.Mock it was called on, which is what allows these calls to be chained.

With my setup updated, my tests can now check that the result has the right mockName.

describe("When converting a Command to an Action", () => {
  it("and the command is MoveForward, then the right action is returned", () => {

    // have to cast result to a jest.Mock to make TypeScript happy
    const result = convertCommandToAction("MoveForward") as unknown as jest.Mock;

    expect(result.getMockName()).toBe("moveForward");
  });
});

Wrapping Up

If you find yourself writing functions that return other functions (i.e., leveraging functional programming concepts), then check out using mockName to keep track of which functions are being returned.

An Alternative Approach to Sprint Planning - Introducing Hippogriff

Background

One of my first jobs in engineering was working for a medical device company. This was pretty cool, as I wrote software that interacted with the device to show measurements and give physicians recommendations about those measurements.

To ensure that things were working correctly, we had a department called V&V (Validation and Verification). I had never heard of this term before in my career, so my boss told me that it was responsible for ensuring that we built the right thing because it's engineering's job to build it right.

These two principles (build the right thing and build it right) have stuck with me during my career. So much so, I believe this may have been the start of my interest in product engineering and wanting to understand the "why" behind the stuff that I build.

It was at that same job that I was introduced to the concept of Kanban and the idea of eliminating waste from our process, as waste inherently slows us down by focusing on the wrong things. I'm known to be a process improver at heart, and the idea of Kaizen (continuous improvement) resonates with me.

So when I think about how many teams are tackling work breakdown and estimating, I can't help but think that we're spending our time on the wrong things and not enough on the right things.

My Experiences with Engineering Teams Today

A common piece of feedback from teams following Scrum principles is that they feel like they're in meetings all the time and there's no time to do the actual work. As someone who has a love/hate relationship with meetings, this is totally understandable.

There are a set number of meetings ("activities" in Scrum parlance) that teams follow, one of which is sprint planning, a time to make sure that what the team is working on is aligned with priorities from product.

While I find value in the synchronization with product, I don't find very much value in the estimation portion of the meeting as it gets the focus on the wrong things.

If we look at the value that estimates provide, the goal is to ensure that the team is taking on a reasonable load for the sprint (e.g., what are our commitments) and not over-extending as that can cause burn-out.

I have two problems with estimations. First, they're way too easy to be taken as deadlines (so much so that you've probably seen this meme spread around). This, in turn, causes the team to go deeper into estimating stories (breaking them down into single tasks), which creates a feedback loop.

If this keeps up, you'll eventually find yourself going into Waterfall mode where the team needs every single requirement up front before they can do any development, the opposite of what we're trying to do.

Instead of the team focusing on estimations, I'd much rather see them strategize on an approach to the problem and let that be the driver to the work.

One approach that I've seen teams take is to use "relative sizing", so instead of saying that work will take four hours, they might say that it's a "2" (if using the Fibonacci approach) or it's a "Small" (if using T-Shirt sizing).

Side Note: I've also seen fruit and animals as relative sizing options.

The problem I have with relative sizing is that while it can be helpful for the team, it's total nonsense for business stakeholders. For example, let's say that we're working on the new Shiny Widget 9000 and Marketing wants to know when it's going to be ready so that they can start getting the marketing materials ready and start promotion. You can't tell them that it's going to take 108 story points or 16 Mediums, as that is meaningless.

They need a timeline, so what does engineering do? They look up historic trends for the team and say something like "We typically get 4 Mediums done in 2 weeks, so we're looking at 2 months, give or take." This means we're correlating a relative size to a unit of time, so what value did we gain?

When I think about planning, I'm focusing on stories that are independent, deliver value, and can be accomplished within a sprint. I don't necessarily care if the story is a 2, 3, or 5 as long as the team has a rough approach to the work and understands why we need the functionality.

A Different Approach

Instead of spending time on estimates, what if we approached planning this way?

  1. Product and Engineering work together to break priorities down to smaller items that can be accomplished in a sprint (remembering to keep them independent).
  2. Team works together to move the stories in priority order (taking into account dependencies).
  3. Team takes on the first item.
  4. Product and Engineering can move/re-prioritize items as needed, but can't modify work in flight.
  5. As an item is completed, the team takes on the next important item.

This focuses the meeting on the important things (what's important for us to work on, how would we approach it) and less on the things that don't matter (fine-tuned estimations).

For those keeping track, this sounds pretty similar to Kanban (which it absolutely is 😀), but if you're looking for a catchy name, you can call this the Highest Priority Goes First (or Hippogriff) framework.

Long story short, I'd like to see teams spending less time on estimations and more time on figuring out what the problem is, solutioning, getting feedback, and iterating.

Isn't that what the Agile Manifesto is all about?

While working on this post, I found this article by Mountain Goat Software to be immensely helpful in capturing the goals of sprint planning and some external validation that I'm not the only one who's experienced this problem as well.

Today I Learned: Changing the Entrypoint for a Docker Container

A common pattern for building software today is to develop using containers. This is a great strategy because it can ensure that everyone is building/deploying the same environment every time. However, like any other tooling, it can take a bit to create the file and make sure that you're setting it up correctly.

When I'm building a container, I typically will run bash in the container so I can inspect files, paths, permissions and more.

For example, let's say that I wanted to see what all is in the node image; I could run the following:

docker run -it node:22 bash

And as long as the node image has bash installed, I can take a look.

This trick works just fine when I have access to the Dockerfile and can change it. But what if you don't have access to the Dockerfile? How would you troubleshoot?

The Scenario

Let's say that I'm using a container that was created by another team, called deps. When I try to run it, though, I get the following error:

docker run -it deps:latest

Node error stating that it could not find module '/app/hello.js'

Looking at the error message, it says that it couldn't find /app/hello.js. I could let the other team know and let them figure it out. However, I'd like to give them a bit more info and possible advice, so I could use the bash trick from before and see if I can spot the problem.

docker run -it deps:latest bash

Node error stating that it could not find module '/app/hello.js'

Wait a minute! Why did I get the same error?

The reason is that the image has an ENTRYPOINT defined, which means that whenever the container starts, it's going to run that command. Since that command is the one that's failing, the container crashes before it executes bash or any other command.
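For illustration, the deps image might have been built from a Dockerfile like this (a hypothetical sketch; I don't have the team's actual file):

FROM node:22
WORKDIR /app
COPY . .
# Runs on every container start; arguments passed to docker run
# (like bash) are appended to this command instead of replacing it
ENTRYPOINT ["node", "/app/hello.js"]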

The Solution

Since I don't have access to the Dockerfile, I need a way to change the entrypoint to allow it to run bash. Luckily, it turns out that the docker run command has a --entrypoint param that you can set to be whatever the command should be.

So let's run the container, except this time, specifying bash as the entrypoint.

docker run -it --entrypoint "bash" deps:latest

And if I run the ls command, I can see that the issue is that the file is called index.js, not hello.js.

There's not a file called hello.js, but it's called index.js

From here, I can give this information to the other team and they can make the necessary changes to their Dockerfile.

My Experience Preparing for the Azure Administrator Associate (AZ-104) Exam

There's been a bit of a lull the past couple of weeks on the blog as I've been focusing my time on studying and preparing for the AZ-104 exam. This was a particularly challenging certification for me, as I don't have a traditional IT Admin background, so I had to not only shore up the gaps in that knowledge but also learn how to model similar concepts in Azure.

That being said, I was able to pass the exam on my first take and wanted to share some advice for those who are looking to take this or other Azure exams.

Expand your studying outside of the Microsoft Learn documentation

The Microsoft Learn docs are fine for doing a deep dive into a subject, but if it's the first time learning a concept, then they can be a bit rough as they assume you have knowledge that you might not. To help round out your learning, I recommend finding other resources like videos, books, or articles.

Build and Experiment in Azure

Given that most of these concepts are pretty abstract, I found that they stuck with me much more when I built out the resources. For example, when working with a Virtual Machine, all of its components need to be in the same region. You can either memorize that fact OR you can know it has to be true because when you build a VM in Azure and try to change components, it fails.

Don't Rely Solely on Practice Exams

Back in 2019, I was studying/preparing for the 483 (exam on C#) and the advice at the time was to go over the practice exams over and over again until things stuck. Following the same advice, I took tons of practice exams (through MS Learn and MeasureUp) and though they might have had the same format as the exam (multiple choice, drag-and-drop, etc...), none of them were a good stand-in for the real exam.

The reason is that the exam questions likely won't ask you to define a term; they're more likely to ask how you'd solve a problem (which expects you to know the terminology inherently). So if you don't have the underlying knowledge, you're going to have a bad time trying to answer the questions.

Where the practice exams shone was in helping me identify areas that I needed to focus more on. For example, if I struggled in the Networking section, then I knew I needed to revisit concepts there. This helped me make the most of my studying time.

Using Generative AI To Help Understand Concepts

Given that I don't come from a networking/IT background, there were some concepts that were quite confusing to me. For example, I was trying to understand why I would need System routes if we already had Network Security Groups, and Copilot was able to give me the following:

Asking Copilot why I would need system routes

To help make sure I didn't fall victim to hallucinations, I followed up on the links that Copilot provided to make sure that I understood the concepts, but given that I learn best by asking questions, this was a major win for me since you can't ask questions to books/videos.

Resources That Helped Me

For those looking to study up on this exam, I had success using these resources. Note: I do not receive compensation for these recommendations.

Today I Learned - Using TypeSpec to Generate OpenAPI Specs

Recently, I was doing analysis for a project where we needed to build out a set of APIs for consumers to use. Even though I'm a big believer of iterative design, we wanted to have a solid idea of what the routes and data models were going to look like.

In the past, I most likely would have generated a .NET Web API project, created the controllers/models, and finally leveraged NSwag to generate Swagger documentation for the API. Even though this approach works, it takes more time on the implementation side (spinning up controllers, configuring ASP.NET, creating the models, adding attributes). In addition, if the actual API isn't being written in .NET, then this code becomes throwaway pretty quickly.

Since tooling is always evolving, I stumbled across another tool, TypeSpec. Heavily influenced by TypeScript, it allows you to write contracts and models that, when compiled, produce an OpenAPI-compliant spec.

As a bonus, it's not restricted to just API specs, as it also has support for generating JSON schemas and gRPC's Protocol Buffers (protobuf).

Getting Started

All code for this post can be found on my GitHub.

Given that it's inspired by TypeScript, the tooling requires having Node installed (at least 20, but I'd recommend the long-term-supported (LTS) version).

From there, we can install the TypeSpec tooling with:

npm install @typespec/compiler

Even though this is all the tooling that's required, I'd recommend installing an extension for either Visual Studio or Visual Studio Code so that you can get Intellisense and other visual cues while you're writing the code.

Bootstrapping the project

Now that we've got the tooling squared away, let's create our project.

mkdir bookstore-api # let's make a directory to hold everything
cd bookstore-api
tsp init --template rest

Enter a project name and choose the defaults. Once it's finished bootstrapping, you can install necessary dependencies using tsp install.

Building Our First API

For our bookstore application, let's say that we want to have an inventory route where someone can retrieve information about a book.

For this work, I'm picturing the following:

# Route -> GET api/inventory/{id}
# Returns 200 or 404

In the project, locate the main.tsp file and add the following:

using TypeSpec.Http;
using TypeSpec.Rest;

@service({
    title: "Bookstore Service"
})
namespace Bookstore {

}

After adding this code, run tsp compile . (note the period). This will create a file in the tsp-output/@typespec/openapi3 folder, openapi.yaml.

We can open that file and see what our OpenAPI spec looks like:

openapi: 3.0.0
info:
  title: Bookstore Service
  version: 0.0.0
tags: []
paths: {}
components: {}

So far, not much to look at. However, if we copy this code and feed it to an online renderer (like https://editor.swagger.io/), we'll get a message about no operations.

Swagger.io saying there are no operations

Let's change that by building out our GET endpoint.

Back in main.tsp, let's add more code to our Bookstore namespace.

namespace Bookstore {
    @tag("Inventory")
    @route("inventory")
    namespace Inventory {
        @get op getBook(@path bookId:string): string
    }
}

After running tsp compile ., we'll see that our yaml has been updated, and if we render it again, we'll have our first endpoint.

Swagger.io rendering inventory by ID route

This is closer to what we want; however, we shouldn't be returning a string, but a Book.

For this exercise, we'll say that a Book has the following:

  • an id (of string)
  • a title (of string)
  • a price (of number, minimum 1)
  • author name (of string)

Let's add this model to main.tsp:

namespace Bookstore {
    // Note that we've added this to the Bookstore namespace
    model Book {
        id: string;
        title: string;

        @minValue(1)
        price: decimal;

        authorName: string;
    }
    @tag("Inventory")
    @route("inventory")
    namespace Inventory {

        // For our get, we're now returning a Book, instead of a string.
        @get op getBook(@path bookId: string): Book; 
    }
} 

After another run of tsp compile and rendering the yaml file, we see that we have a schema for our get method now.

Swagger showing the updated model

Refactoring a Model

Even though this works, the Book model is a bit lazy, as it has authorName as a property instead of an Author model that would hold the name (and a bit more information). Let's update Book to have an Author property.

model Author {
    id: string;

    @minLength(1)
    surname: string;

    @minLength(1)
    givenName: string;
}
model Book {
    id: string;
    title: string;

    @minValue(1)
    price: decimal;

    author: Author;
}

After making this change, we can see that we now have a nested model for Book.

Swagger showing both Book and Author model

Handling Failures

We're definitely taking a step in the right direction; however, our API definition isn't quite done. Right now, it says that we'll always return a 200 status code.

I don't know about you, but our bookstore isn't good enough to generate books with fictitious IDs, so we need to update our contract to say that it can also return 404s.

Back in main.tsp, we're going to change the return type of the @get operation from a Book to a union type.

@get op getBook(@path bookId: string): 
// Either it returns a 200 with a body of Book
{
    @statusCode statusCode: 200;
    @body book: Book;
} | { // Or it will return a 404 with an empty body
    @statusCode statusCode: 404;
};

With this final change, we can compile and render the yaml and see that the route can return a 404 as well.

Swagger showing that the route can return a 404 as well

Next Steps

When I first started with TypeSpec, my first thought was that you could put this code under continuous integration (CI) and have it produce the OpenAPI format as an artifact for other teams to pull in and auto-generate clients from.

If you're interested in learning more about that approach, drop me a line at the Coaching Corner and I may write up my results in a future post.

Leadership Playbook - Leading Through Change

There are few fundamental truths in life, one of which is that the only consistent thing is change. Whether that's through a reorganization, someone leaving the team, or the start of a new initiative, we know that what happens today is different from yesterday, and as a leader, the team is looking to you to figure out how to navigate these changes.

In this post, I'll share some tips and tricks for leading the team through change.

Explain the Reasoning

A common mistake I see leaders make is introducing a change without discussing why the change is happening. Let's look at a hypothetical situation where you're introducing the team to pull request templates.

Hey team! Starting next sprint, we're going to start using this new PR template. You can find it ....

On the one hand, the message is clear on what's changing (using a new PR template). However, it completely misses the point: why are we making this change? When we don't include the why, we catch people off guard because they may not immediately understand the problem that the change is supposed to solve.

When we put people in an information vacuum, they draw their own conclusions, which can give the wrong impression of the change. This, in turn, can send the rumor mill into overdrive, making your job much harder.

Let's revisit the same scenario but include the "why" this time.

Hey team! During our last retro, it was brought up that our pull request descriptions aren't consistent, which makes reviewing them more difficult. To help build consistency, we're going to start using pull request templates. You can find it ....

By making this small change, we can squash misinformation and potential rumor mills because we're clear on the reasoning.

Which is a great segue to...

Be Transparent With the Team

When explaining the reason, don't lie or sugarcoat the reasoning, even if it makes you feel uncomfortable. Your team is smart and they'll know if you're lying to them.

A hot topic nowadays is Return to Office (RTO) plans, with a common reason being the "need for more in-person collaboration." Even though this provides a "why," it's not backed by metrics or anything measurable. In addition, in-person doesn't necessarily mean more collaboration.

A better approach is to use employee engagement surveys or customer satisfaction surveys to measure the effectiveness of the team or company. If you can't use these metrics (or other relevant metrics), then why are you introducing this change?

Going back to our hypothetical RTO, let's say that the reason we're going back to the office is that we're a start-up whose investors have already paid for our space. To them, having people in the space helps them feel that their money is well spent (in addition, some companies get a tax break for moving to a state as long as they keep a percentage of people on-site).

This is what you should be telling your team. They may not like the reason and they might disagree, but they'll know the why, and then they can make their own decisions. Long story short, you're giving them the autonomy to decide for themselves because they have all the information.

Repetition

Humans don't have perfect memories, right? So why would we expect that once we introduce a change, that's the last time we need to talk about it?

Regardless of the change, don't be surprised if you need to mention it 3 or 4 times before people finally start understanding and applying the change. A great mentor of mine once told me that he would tell people about the change time and time again until it stuck. While this was happening, he would show patience, repeat the messaging, and answer questions and concerns.

When we hear of change, our first step is to start processing the message and its immediate impacts. Some people will have questions right away, while others need time to stew on it.

Because of this, be available to answer questions as they come up (even if you've already answered them before). Be prepared for questions during one-on-ones, after meetings, or whenever they come up.

Acknowledge the questions and answer them. If you don't know the answer, tell them that you don't know and that you're going to find the answer.

Giving Space

Don't be surprised if your team exhibits a wide range of emotions for more significant changes.

For example, if your company is doing layoffs, then it's reasonable for people to be upset, depressed, or mentally checked out.

When this happens, you have to give people space to process. This doesn't mean isolating them, but being aware that they need some time and accommodating accordingly.

A common mistake I see leaders make here is introducing a change, thinking it was low impact, so they start making other changes. In reality, this change had a high impact, and now the team is under pressure to handle the original change and whatever new commitments are coming their way.

Not only does this reduce your odds of success, but this prevents your team from dealing with the changes, which can turn into stress or frustration. If this happens enough times, people will change teams (or even jobs) just to get a change of scenery and be able to process.

Reducing Change Fatigue

Even though changes are going to happen, do not introduce one change this week, another next week, and then one more two weeks later.

As engineers, we learned that refactoring a codebase should be done in small steps to prevent functionality from breaking.

While this works great for code, this is terrible advice for humans.

When frequent changes happen, it becomes difficult to get into a rhythm with the work and the team, reducing the team's effectiveness. This is especially true when large changes keep happening every few weeks.

In the current landscape, we're seeing companies go through multiple rounds of layoffs. While this may keep them out of the news (no one reports that a company laid off ten people, even if it's the fourth time it's happened this year), it causes a feeling of dread for the survivors, as now they're wondering when they'll be next.

Instead of having multiple layoffs, if companies had one (albeit larger) layoff, people would have time to adjust and could proceed without as much paranoia.

Wrapping Up

Navigating change is hard - especially when you aren't just going through the change yourself, but also leading others at the same time.

The next time you're going through a change, try starting here and see how it goes

  • Explain the "why" behind the change.
  • Be transparent about the changes, even if it's uncomfortable.
  • Be mindful that everyone processes change differently and may feel a greater impact or need more time to recover from the change.

By starting here, your chances of moving forward without significant disruption will improve dramatically.

Today I Learned: Validating Data in Zod

Validating input. You've got to do it, otherwise, you're going to be processing garbage, and that never goes well, right?

Whether it's through the front-end (via a form) or through the back-end (via an API call), it's important to make sure that the data we're processing is valid.

Coming from a C# background, I was used to ASP.NET Web API's ability to create a class and then use the FromBody attribute on the appropriate route to ensure the data is good. By using this approach, ASP.NET will automatically reject requests that don't fit the data contract.

However, when picking up JavaScript and TypeScript, that's not the case. At first, this surprised me because I figured it would happen automatically when using libraries like Express or Nest.js. Thinking more about it, though, it shouldn't have surprised me: ASP.NET can catch those issues because C# is statically typed. JavaScript isn't, and since TypeScript's types are removed during the compilation phase, neither is statically typed at runtime.
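Here's a quick sketch of why that matters; the cast compiles happily even though the payload is garbage:

type Book = { title: string; price: number };

// TypeScript trusts the cast - nothing verifies the shape at runtime
const payload = JSON.parse('{"title": 42}') as Book;
console.log(payload.price); // undefined - and no error was thrown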

When writing validations, I find zod to be a delightful library to leverage. There are a ton of useful built-in options, you can create your own validators (which you can then compose!), and you can infer models based on your validations.

Building the Amazin' Bookstore

To demonstrate some of the cool things that you can do with Zod, let's pretend that we're building out a new POST endpoint for creating a new book. After talking to the business, we determine that the payload for a new book should look like this:

// A valid book will have the following
// - A non-empty title
// - A numeric price (can't be negative or zero)
// - A genre from a list of possibilities (mystery, fantasy, history are examples, platypus would not be valid)
// - An ISBN which must be in a particular format
// - A valid author which must have a first name, a last name, and an optional middle name

What's in a Name?

Since the Book type needs a valid Author, let's build that out first:

import {z} from "zod";

export const AuthorSchema = z.object({

});

Since Author will need to be an object, we'll use z.object to signify that. Right off the bat, this prevents strings, numbers, and other primitive types from being accepted.

AuthorSchema.safeParse("someString"); // will result in a failure
AuthorSchema.safeParse(42); // will result in a failure
AuthorSchema.safeParse({}); // will result in success!

This is a great start, but we know that Author has some required properties (like a first name), so let's implement that by using z.string().

export const AuthorSchema = z.object({
    firstName: z.string()
});

With this change, let's take a look at our schema validation:

AuthorSchema.safeParse({}); // fails because no firstName property
AuthorSchema.safeParse({firstName:42}); // fails because firstName is not a string
AuthorSchema.safeParse({firstName: "Cameron"}); // succeeds because firstName is present and a string

However, there's one problem with our validation. We would allow an empty firstName:

AuthorSchema.safeParse({firstName:""}); // succeeds, but should have failed :(

To make our validation stronger, we can update our firstName property to have a minimum length of 1 like so.

export const AuthorSchema = z.object({
    firstName: z.string().min(1)
});

Finally, we have a way to enforce that an author has a non-empty firstName! Looking at the requirements, it seems like lastName is going to be similar, so let's update our AuthorSchema to include lastName.

export const AuthorSchema = z.object({
    firstName: z.string().min(1),
    lastName: z.string().min(1)
});

Hmmm, it looks like we have the same concept in multiple places: the idea of a non-empty string. Let's refactor that into its own schema.

export const NonEmptyStringSchema = z.string().min(1);

export const AuthorSchema = z.object({
    firstName: NonEmptyStringSchema,
    lastName: NonEmptyStringSchema
});

Nice! We're almost done with Author; we just need to implement middleName. Unlike the other properties, an author may not have a middle name. In this case, we're going to leverage the optional function from zod to signify that, like so.

export const NonEmptyStringSchema = z.string().min(1);

export const AuthorSchema = z.object({
    firstName: NonEmptyStringSchema,
    lastName: NonEmptyStringSchema,
    // This would read that middleName may or may not be present. 
    // If it is, then it must be a string (could be empty)
    middleName: z.string().optional(), 
});

With the implementation of AuthorSchema, we can start working on the BookSchema.

Judging a Book By Its Cover

Since we have AuthorSchema, we can use it as our starting point, like so:

export const BookSchema = z.object({
    author: AuthorSchema
});

We know that a book must have a non-empty title, so let's add that to our definition. Since it's a string that must have at least one character, we can reuse the NonEmptyStringSchema definition from before.

export const BookSchema = z.object({
    author: AuthorSchema,
    title: NonEmptyStringSchema
});

Putting a Price on Knowledge

With title in place, let's leave the string theory alone for a bit and look at numbers. In order for the bookstore to function, we've got to sell books for some price. Let's use z.number() and add a price property.

export const BookSchema = z.object({
    author: AuthorSchema,
    title: NonEmptyStringSchema,
    price: z.number()
});

This works, however, z.number() will accept any number, which includes numbers like 0 and -5. While those values would be great for the customer, we can't run our business that way. So let's update our price to only include positive numbers, which can be accomplished by leveraging the positive function.

export const BookSchema = z.object({
    author: AuthorSchema,
    title: NonEmptyStringSchema,
    price: z.number().positive()
});

With price done, let's look at validating the genre.

Would You Say It's a Mystery or History?

Up to this point, all of our properties have been straightforward (simple strings and numbers). However, with genre, things get more complicated because it can only be one of a particular set of values. Thankfully, we can define a GenreSchema by using z.enum() like so.

export const GenreSchema = z.enum(["Fantasy", "History", "Mystery"]);

With this definition, a valid genre can only be "Fantasy", "History", or "Mystery". Let's update our book definition to use this new schema.

export const BookSchema = z.object({
    author: AuthorSchema,
    title: NonEmptyStringSchema,
    price: z.number().positive(),
    genre: GenreSchema
});

Now, someone can't POST a book with a genre of "platypus" (though I'd enjoy reading such a book).

ID Please

Lastly, let's take a look at implementing the isbn property. This is interesting because ISBNs can be in one of two shapes: ISBN-10 (for books pre-2007) and ISBN-13 (all other books).

To make this problem easier, let's focus on the ISBN-10 format for now. A valid value will be in the form of #-###-#####-# (where # is a number). Now, you can take this a whole lot further, but we'll stick to checking the format.

Now, even though zod has built-in validators for emails, IPs, and URLs, there's not a built-in one for ISBNs. In these cases, we could use .refine to add our own logic, but this is also a good use case for a basic regular expression. Using regex101 as a guide, we end up with the following expression and schema for the ISBN:

const isbn10Regex = /^\d-\d{3}-\d{5}-\d$/;
export const Isbn10Schema = z.string().regex(isbn10Regex);
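For completeness, the .refine route would look something like this sketch, where isValidIsbn10 is a hypothetical helper we'd write (useful if we ever wanted real check-digit validation):

// Hypothetical helper - a real version might verify the ISBN-10 check digit
const isValidIsbn10 = (value: string): boolean => /^\d-\d{3}-\d{5}-\d$/.test(value);

export const RefinedIsbn10Schema = z.string().refine(isValidIsbn10, { message: "Invalid ISBN-10" });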

Building onto that, an ISBN-13 is in a similar format, but has the form of ###-#-##-######-#. By tweaking our regex, we end up with the following:

const isbn13Regex = /^\d{3}-\d-\d{2}-\d{6}-\d$/;
export const Isbn13Schema = z.string().regex(isbn13Regex);

When modeling types in TypeScript, I'd like to be able to do something like the following, as it makes it clear that an ISBN can be in one of these two shapes.

type Isbn10 = string;
type Isbn13 = string;
type Isbn = Isbn10 | Isbn13;

While we can't use the | operator, we can use the .or function from zod to get the following:

const isbn10Regex = /^\d-\d{3}-\d{5}-\d$/;
export const Isbn10Schema = z.string().regex(isbn10Regex);
const isbn13Regex = /^\d{3}-\d-\d{2}-\d{6}-\d$/;
export const Isbn13Schema = z.string().regex(isbn13Regex);

export const IsbnSchema = Isbn10Schema.or(Isbn13Schema);

With the IsbnSchema in place, let's add it to BookSchema:

export const BookSchema = z.object({
    author: AuthorSchema,
    title: NonEmptyStringSchema,
    price: z.number().positive(),
    genre: GenreSchema,
    isbn: IsbnSchema
});

Getting Models for Free

Lastly, one of the cooler functions that zod supports is infer where if you pass it a schema, it can build out a type for you to use in your application.

export const BookSchema = z.object({
    author: AuthorSchema,
    title: NonEmptyStringSchema,
    price: z.number().positive(),
    genre: GenreSchema,
    isbn: IsbnSchema
});

// TypeScript knows that Book must have an author (which has a firstName, lastName, and maybe a middleName)
// a title (string), a price (number), a genre (string), and an isbn (string).
export type Book = z.infer<typeof BookSchema>; 

Full Solution

Here's what the full solution looks like:

import { z } from "zod";

const NonEmptyStringSchema = z.string().min(1);
const GenreSchema = z.enum(["Fantasy", "History", "Mystery"]);

export const AuthorSchema = z.object({
  firstName: NonEmptyStringSchema,
  lastName: NonEmptyStringSchema,
  middleName: z.string().optional(),
});

export const Isbn10Schema = z.string().regex(/^\d-\d{3}-\d{5}-\d$/);
export const Isbn13Schema = z.string().regex(/^\d{3}-\d-\d{2}-\d{6}-\d$/);
export const IsbnSchema = Isbn10Schema.or(Isbn13Schema);

export const BookSchema = z.object({
  title: NonEmptyStringSchema,
  author: AuthorSchema,
  price: z.number().positive(),
  genre: GenreSchema,
  isbn: IsbnSchema,
});

export type Book = z.infer<typeof BookSchema>;

With these schemas and models defined, we can leverage the safeParse function to see if our input is valid.

describe('when validating a book', () => {
    it("and the author is missing, then it's not valid", () => {
        const input = {title:"best book", price:200, genre:"History", isbn:"1-234-56789-0"}

        const result = BookSchema.safeParse(input);

        expect(result.success).toBe(false);
    });
    it("and all the fields are valid, then the book is valid", () => {
        const input = {
            title:"best book", 
            price:200, 
            genre:"History", 
            isbn:"1-234-56789-0", 
            author: {
                firstName:"Super", 
                middleName:"Cool", 
                lastName:"Author"
            }
        };

        const result = BookSchema.safeParse(input);

        expect(result.success).toBe(true);
        const book:Book = result.data as Book;
        // now we can start using properties from book
        expect(book.title).toBe("best book");
    });
});

Coaching Corner Volume 5

Welcome to Cameron's Coaching Corner, where we answer questions from readers about leadership, career, and software engineering.

In this post, we'll look at a question from TheRefsAlwaysWin about how to get a new engineer on their team to open up and get comfortable asking for help.

I've got a newish member to the team and they're still early in their career. They've got a good head on their shoulders, however, they tend to go down rabbit holes when problem solving and they don't speak up or ask for help.

They've asked me a few times about "how did I know ....", and a lot of the times, it's experience (I've been doing this for a few decades now).

How do I help them open up and ask more questions to the team and group?