
Creating a Node Project With TypeScript and Jest From Scratch

When teaching engineers about Node, I like to start with the bare-bones basics and build from there. Though there is value in using tools to auto-scaffold a project (Vite for React applications, for example), there's also value in understanding how everything hangs together.

In this post, I'm going to show you how to create a Node project from scratch that has TypeScript configured, Jest configured, and a basic test suite in place.

Let's get started!

Note: Do you prefer learning via video? You can find this article on YouTube

Step 0 - Dependencies

For this project, the only tool you'll need is the Long Term Support (LTS) version of Node (as of this post, that's v22, but these instructions should hold regardless). If you're working across multiple Node applications, then I highly recommend using a tool to help you juggle the different versions of Node you might need (your options are nvm if you're on Mac/Linux or Node Version Manager for Windows if you're on Windows).

Step 1 - Creating the Directory Layout

A standard project will have a layout like the following:

projectName
    \__ src # source code for project is here
        \__ index.ts
        \__ index.spec.ts
    \__ package.json
    \__ package-lock.json
    \__ README.md

So let's go ahead and create that. You can do this manually or by running the following in your favorite terminal.

mkdir <projectName> && cd <projectName>
mkdir src

Step 2 - Setting up Node

With the structure in the right place, let's go ahead and create a Node application. The hallmark of a Node app is the package.json file, as it holds three main pieces of info:

  1. What's the name of the application, and who created it?
  2. What scripts can I execute?
  3. What tools does it need to run?

You can always create a package.json file manually; however, you can generate a standard one by running npm init --yes.

Tip: By specifying the --yes flag, this will generate a file with default settings that you can tweak as needed.
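For reference, on a recent npm the generated file looks something like this (the name comes from the directory, and the exact fields vary by npm version, so treat this as illustrative):

```json
{
  "name": "projectName",
  "version": "1.0.0",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "description": ""
}
```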

Step 3 - Setting up TypeScript

At this point, we have an application, but there's no code or functionality. Given that we're going to be using TypeScript, we'll need to install some libraries.

Installing Libraries

In the project folder, we're going to install both typescript and a way to execute it, ts-node.

npm install --save-dev typescript ts-node

Note: With Node v24, you can execute TypeScript natively, but there are some limitations. For me, I still like using ts-node for running the application locally.

Setting up TypeScript

Once the libraries have been installed, we need to create a tsconfig.json file. This essentially tells TypeScript how to compile our TypeScript to JavaScript and how much error checking we want during our compilation.

You can always create this file manually, but luckily, tsc (the TypeScript compiler) can generate this for you.

In the project folder, we can run the following

npx tsc --init
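The generated tsconfig.json contains many options (most of them commented out). Trimmed down, a working config might look something like the following; the exact defaults depend on your TypeScript version, so treat this as an illustrative sketch rather than the canonical output:

```json
{
  "compilerOptions": {
    "target": "es2022",
    "module": "commonjs",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true
  }
}
```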

Writing Our First TypeScript File

At this point, we have TypeScript configured, but we still don't have any code. This is when I'll write a simple index.ts file that leverages TypeScript and then try to run it with ts-node.

In the src folder, let's create an index.ts file and write the following code.

export function add(a:number, b:number): number {
  return a+b;
}

console.log("add(2,2) =", add(2,2));

This uses TypeScript features (notice the type annotations), so if we try to run it with node directly, we'll get an error.

Using ts-node To Run File

Let's make sure everything is working by using ts-node.

Back in the terminal, run the following:

npx ts-node src/index.ts

If everything works correctly, you should see the following in the terminal window.

add(2,2) = 4

Adding Our First NPM Script

We're able to run our file, but as you could imagine, it's going to get annoying to always type out npx ts-node src/index.ts. Also, this kills discoverability as you have to document this somewhere (like a README.md) or it'll become a thing that someone "just needs to know".

Let's improve this by adding a script to our package.json file.

Back in setting up node, I mentioned that one of the cool things about package.json is that you can define custom scripts.

A common script to have defined is start, so let's update our package.json with that script.

{
  // some code here
  "scripts": {
    "start": "ts-node src/index.ts"
    // other scripts here
  }
}

With this change, let's head back to our terminal and try it out.

npm run start

If everything was setup correctly, we should see the same output as before.

Step 4 - Setting up Jest

At this juncture, we can execute TypeScript and have made life easier by defining a start script. The next step is to set up our testing framework.

While there are quite a few options out there, a common one is jest so that's what we'll be using in this article.

Since jest is a JavaScript testing library and our code is in TypeScript, we'll need a way to translate our TypeScript to JavaScript. The jest docs mention a few ways of doing this (using a tool like Babel for example). However, I've found using ts-jest to be an easier setup and still get the same outcomes.

Installing Libraries

With our tools selected, let's go ahead and install them.

npm install --save-dev jest ts-jest @types/jest

Note: You might have seen that we're also installing "@types/jest". This doesn't provide any functionality; however, it gives us the type definitions that jest uses. Because of that, our code editor can understand jest's API and give us auto-complete and IntelliSense when writing our tests.

Configuring Jest

So we have the tools installed, but need to configure them. Generally, you'll need a jest.config.js file which you can hand-write.

Or we can have ts-jest generate that for us :)

npx ts-jest config:init

If this step works, you should have a jest.config.js file in the project directory.
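For reference, the generated file is small; depending on your ts-jest version, it will look something like this:

```js
/** @type {import('ts-jest').JestConfigWithTsJest} */
module.exports = {
  preset: 'ts-jest',
  testEnvironment: 'node',
};
```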

Let's write our first test!

Writing Our First Test

Jest finds tests based on file names. So as long as your test file ends with either .spec.ts, .spec.js, .test.ts, or .test.js, jest will pick it up.

So let's create a new file in the src folder called index.spec.ts and add the following:

import { add } from "./index";

// describe is a container for one or more tests
describe("index", () => {
  // it - denotes a single test
  it("does add work", () => {
    const result = add(2, 2);

    expect(result).toBe(4);
  });
});

With our test written, we can run it in the terminal with the following:

npx jest

Adding a Test NPM Script

Just like when we added a custom start script to our package.json file, we can do a similar thing here with our tests.

In package.json, update the scripts section to look like:

{
  // other code
  "scripts": {
    "start": "ts-node src/index.ts",
    "test": "jest"
  },
  // other code
}

With this change, we can run our test suite by using npm run test in the terminal.

Congrats, just like that, you have a working Node application with TypeScript and Jest for testing!

Next Steps

With the scaffolding in place, you're in a solid spot to start growing things out. For example, you could...

  • Start setting up a Continuous Integration pipeline
  • Fine-tune your tsconfig.json to enable more settings (like noImplicitAny, which disallows implicit any types)
  • Fine-tune your jest.config.js (like having your mocks automatically reset between tests)
  • Start writing application code!

Career Update and a New Chapter

A couple of months back, I announced on LinkedIn that some opportunities fell through and I found myself unexpectedly on the market looking for work. It has been a while since I've given an update on what was going on and what I've been up to.

The Search Begins (Anew!)

After watching both opportunities fall through, I began my job search again, applying to over fifty different companies and focusing on leadership roles (think Team Lead, Engineering Manager, or Director of Engineering). Even though I have quite a few years of recent experience (I've been leading teams in some capacity since 2018), the most common piece of feedback was that my recent roles seemed more technical than leadership-focused.

That's fair feedback. My leadership style is that I wouldn't ask a member of my team to do something I wouldn't do myself, and I'm naturally curious about how things work, so there have been times when I rolled up my sleeves to help the team get projects done and/or be the technical lead the team needed.

So, let's try a different approach.

I retooled my resume to highlight my technical accomplishments and applied for more experienced technical roles (think Senior Software Engineer, Staff Software Engineer, Principal Engineer, and Architect). Given the first round of feedback, I figured this would be a shoo-in, right?

Not quite...

Most of the feedback I got for these roles was that I wasn't hands-on technical enough (though my most recent engagements had me coding the vast majority of the time), even though my leadership skills were solid.

What Do You Want To Be When You Grow Up?

Not going to lie, it's a bit frustrating to be told that you're too technical for leadership, but not technical enough for engineering, especially when you've helped companies launch new products and new offerings.

My theory (potential cope) is that companies don't know what to do when they find someone who's both a strong technical leader and a strong engineer, as they don't come across them that often. Most of the companies I've seen typically want their leadership to understand concepts (e.g., what a continuous deployment pipeline is), but not deeply enough to implement or fix one when there are issues. On the engineering side, my experience was that they wanted people who were deep in the technical weeds (we're talking edge cases, knowing the docs cold) but didn't ask much about how the work fits into the bigger picture of the business.

For me, I need a combination of both to be happy. I enjoy doing the technical work and building new tools/products to solve problems, as I'm great at finding issues with a process and making it smoother.

On the other hand, I thoroughly enjoy leading people and coaching up a team. That was one of the original inspirations for this blog, The Software Mentor: a way to give back and be an (unofficial) mentor to those who don't have someone to learn from.

I can't just ignore one side of the equation; that would be throwing away half my strengths.

So what do you do?

In my case, change the game.

A Different Approach

Over my career, I've been lucky to work at quite a few companies and have built great relationships everywhere I went. In addition, I've spent the last ten years building up a reputation in the community as a technical leader (Microsoft MVP since 2017 and have been presenting on technical concepts since 2015).

I figured if I can't get work as an engineer or as a leader, why not try something new?

Back in 2014, some friends and I started an ill-fated attempt to build a replacement Point of Sale system for liquor stores (this was before tools like Square became ubiquitous). Though the project never launched, we gave it a name, Small Batch Software, as it was both a tip of the cap to small batch distilling and a nod to working in small batches (a la Lean Manufacturing with batch sizes of 1).

We always joked that Small Batch might take off at some point, but we retired the project, and the idea of Small Batch was shelved (like most ideas are).

Fast forward to 2021: I started taking on some side work, helping other companies with their implementations and process improvement. At the time, I didn't have a formal company, but I started thinking more about forming one.

When I left Rocket Mortgage in 2023, I decided to form my own company, Small Batch Solutions LLC, to take a more formal approach to moonlighting, partly to get some experience and partly to see how it would go.

However, like most things, it was a good idea that I didn't put the required energy into, so the company has been mostly dormant since then.

Which brings us to the present.

Small Batch Solutions - Iteration One

After pouring more energy into the company over the past two months, I've had success in landing clients, focusing on what I enjoy doing the most:

  • Mentoring others, helping them advance in their career and technical skills
  • Problem solving, figuring out pain points and solving them simply

Given these successes, I've chosen to focus on Small Batch Solutions full-time, allowing me to continue building with the community and also allowing me to use my strengths without having to fit in a single "box".

If you've ever found yourself thinking, "I wish I could work with someone who just gets it and can help me", then reach out, I think I might be able to help!

Today I Learned - JavaScript Private Fields and Properties

One of my favorite times of year has started, intern season! I always enjoy getting a new group of people who are excited to learn something new and are naturally curious about everything!

As such, one of the first coding katas we complete is Mars Rover, as it's a great exercise to introduce union types, records, functions, and some of the basic array operators (map and reduce). It also provides a solid introduction to automated testing practices (Arrange/Act/Assert, naming conventions, test cases). Finally, you can solve it multiple ways, and depending on the approach, it lends itself to refactoring and cleaning up.

Now, my preferred approach to the kata is to leverage functional programming (FP) techniques; however, it wouldn't be right to only show that approach, so I tackled it using a more object-oriented (OO) style instead.

One of the things that we run into pretty quickly is that we're going to need a Rover class that will have the different methods for moving and turning. Since the Rover will need to keep track of its X, Y, and Direction, I ended up with the following:

type Direction = "North" | "South" | "East" | "West";
class Rover {
  constructor(private x:number, private y:number, private direction:Direction){}
  // moveForward, moveBackward, turnLeft, and turnRight definitions below...
}

This approach works just fine as it allows the caller to give us a starting point, but they can't manipulate x, y, or direction directly; they have to use one of the methods (i.e., we have encapsulation).

The Problem

However, we run into a slight problem once we get to the user interface. We would like to be able to display the Rover's location and direction, however, we don't have a way of accessing that data since we marked those as private.

In other words, we can't do the following:

const rover = new Rover(0, 0, 'North');
console.log(`Rover is at (${rover.x}, ${rover.y}) facing ${rover.direction}`);

One way to fix this problem is to remove the private modifier and make the values public; however, this would mean that the state of my object could be manipulated either through its methods (i.e., moveForward) or directly from the outside (rover.x = 100).

What I'd like to do instead is to have a way to get the value to the outside world, but not allow them to modify it.

In languages like C#, we would leverage a public get/private set on properties, which would look something like this:

public class Rover 
{
  public int X {get; private set;}
  public int Y {get; private set;}
  public Direction Direction {get; private set;}
  public Rover (int x, int y, Direction direction)
  {
    X = x;
    Y = y;
    Direction = direction;
  }
}

Let's take a look at how we can build the same idea in TypeScript (and, by extension, JavaScript).

Introducing Fields

Introducing fields is simple enough; we just define them in the class like so:

class Rover {
  // note: the fields are prefixed with an underscore so their names
  // don't collide with the getters we'll define later
  private _x:number;
  private _y:number;
  private _direction:Direction;
}

However, if you're working in plain JavaScript, the private keyword doesn't exist. JavaScript instead allows you to mark something as private by prefixing # to the name.

class Rover {
  #x;
  #y;
  #direction;
}

With these fields in place, we now update our constructor to explicitly set the values.

In TypeScript

constructor(x:number, y:number, direction:Direction){
  this._x = x;
  this._y = y;
  this._direction = direction;
}

In JavaScript

constructor(x, y, direction){
  this.#x = x;
  this.#y = y;
  this.#direction = direction;
}

At this point, we can update the various methods (moveForward, moveBackward, turnLeft, turnRight) to use the fields.

Introducing Properties

With our fields in use, we can now expose their values by defining a get (colloquially known as a getter) for each field.

In TypeScript

get x(): number {
  return this._x;
}
get y(): number {
  return this._y;
}
get direction(): Direction {
  return this._direction;
}

In JavaScript

get x() {
  return this.#x;
}
get y() {
  return this.#y;
}
get direction() {
  return this.#direction;
}

With our properties in place, the following code now works:

const rover = new Rover(4, 2, 'North');
console.log(`Rover is at (${rover.x}, ${rover.y}) facing ${rover.direction}`);
// prints "Rover is at (4, 2) facing North"

// But this doesn't work
rover.x = 100; // error: cannot assign to 'x' because it is a read-only property

Closing Thoughts

When working with data that requires different access levels, think about leveraging private fields and then providing access through public properties.
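To see everything in one place, here's a minimal, self-contained TypeScript version of the class. Note that the moveForward implementation is my own illustrative assumption about the kata's grid; the article itself only discusses the fields and getters:

```typescript
type Direction = "North" | "South" | "East" | "West";

class Rover {
  // underscore-prefixed fields so the names don't clash with the getters
  private _x: number;
  private _y: number;
  private _direction: Direction;

  constructor(x: number, y: number, direction: Direction) {
    this._x = x;
    this._y = y;
    this._direction = direction;
  }

  // read-only access for the outside world
  get x(): number { return this._x; }
  get y(): number { return this._y; }
  get direction(): Direction { return this._direction; }

  // illustrative method (assumed grid): move one step in the facing direction
  moveForward(): void {
    switch (this._direction) {
      case "North": this._y += 1; break;
      case "South": this._y -= 1; break;
      case "East": this._x += 1; break;
      case "West": this._x -= 1; break;
    }
  }
}

const demo = new Rover(4, 2, "North");
demo.moveForward();
console.log(`Rover is at (${demo.x}, ${demo.y}) facing ${demo.direction}`);
```

Callers can read demo.x but cannot assign to it, which is exactly the public get / private set behavior we wanted.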

Simplifying Console Logic with the Model-View-Update

When I first started dabbling in Functional Programming, a new front-end language called Elm had just been released, and it was generating a lot of buzz about how it simplified web development by introducing four parts (i.e., "The Elm Architecture" (TEA)) that provide a mental model when creating web pages. This way of thinking was so powerful that it inspired popular libraries like Redux and ngrx, which took the architecture mainstream.

Spilling the TEA

At a high level, the architecture has four parts:

  1. Model -> What are we rendering?
  2. View -> How do we want to render it?
  3. Update -> Given the current model and a Command, what's the new model?
  4. Command -> What did the user do?

To help make this a bit more clear, let's define some types for these parts and see how they would work together

type Model = any;
type View = (m:Model) => Promise<Command>;
type Update = (m:Model, c:Command) => Model;
type Command = "firstOption" | "secondOption" /* ... */ | "quit";

async function main(model:Model, view:View, update:Update): Promise<void>{
  const command = await view(model);
  if (command === 'quit'){
    return;
  }
  const newModel = update(model, command);
  return main(newModel, view, update);
}

With some types in play, let's go ahead and build out a small application, a counter (the "Hello World" for Elm).

Building a Counter

First, we need to figure out what the model will be. Since we're only keeping track of a number, we can define our model as a number.

type Model = number;

Next, we need to define what the user can do. In this case, they can either increment, decrement, or quit so let's set the command up.

type Command = 'increment' | 'decrement' | 'quit';

Now that we have Command, we can work on the update function. Given the type signature from before, we know it's going to look like this:

function update(model:Model, command:Command): Model {
  // logic
}

We can leverage a switch statement and put in our business rules:

function update(model:Model, command:Command): Model {
  switch(command){
    case 'increment': return model+1;
    case 'decrement': return model-1;
    case 'quit': return model;
  }
}

Finally, we need to define our view function. Like before, we can get the skeleton for the function based on the types from earlier.

async function view(model:Model): Promise<Command>{

}

Let's update the function with our rendering logic

async function view(model:Model): Promise<Command>{
  console.log("Counter:", model);
  console.log("Choose to (i)ncrement, (d)ecrement, or (q)uit");
}

We've got our render up and running, however, we need to get input from the user. Since we're working within Node, we could use readline, however, I've recently been using @inquirer/prompts and find it to be a nice abstraction to use. So let's use that package.

import {input} from "@inquirer/prompts";

async function getChoice(): Promise<Command>{
  const message = "Choose to (i)ncrement, (d)ecrement, or (q)uit";
  const validChoices = ["i", "d", "q"];
  const validator = (s:string) => validChoices.includes(s?.trim().toLowerCase());
  const selection = await input({message:message, validate:validator});
  if (selection === "i") {
    return "increment";
  } else if (selection === "d"){
    return "decrement";
  } else {
    return "quit";
  }
}
// Let's change the view function to use getChoice

async function view(model:Model): Promise<Command>{
  console.log("Counter:", model);
  return getChoice();
}

With these pieces defined, we can use the main function from before.

async function main(model:Model, view:View, update:Update): Promise<void>{
  const command = await view(model);
  if (command === 'quit'){
    return;
  }
  const newModel = update(model, command);
  return main(newModel, view, update);
}

// Invoking Main
main(10, view, update);

Starting Back at Zero

Now that we have increment and decrement working, it would be nice to be able to reset the counter without having to restart the application, so let's see how bad that would be.

First, we need to add a new choice to Command (called reset). This will force us to update the rest of the code that's working with Command.

type Command = "increment" | "decrement" | "reset" | "quit";

Next, we need to update the update function so it knows how to handle a reset command. In our case, we need to set the model back to zero.

function update(model:Model, command:Command): Model {
  switch(command){
    case 'increment': return model+1;
    case 'decrement': return model-1;
    case 'reset': return 0;
    case 'quit': return model;
  }
}

At this point, the application knows how to handle the new Command, however, we need to update our view function to allow the user to select reset.

async function view(model:Model): Promise<Command>{
  console.log("Counter:", model);
  return getChoice();
}

async function getChoice(): Promise<Command>{
  // updating the prompt message
  const message = "Choose to (i)ncrement, (d)ecrement, (r)eset, or (q)uit";
  const validChoices = ["i", "d", "r", "q"];
  const validator = (s:string) => validChoices.includes(s?.trim().toLowerCase());
  const selection = await input({message:message, validate:validator});
  if (selection === "i") {
    return "increment";
  } else if (selection === "d"){
    return "decrement";
  } else if (selection === "r"){
    return "reset";
  } else {
    return "quit";
  }
}

What's Next?

Now that we have a working version, you could start implementing some fun functionality. For example, how would you allow someone to set how much to increment or decrement by? What if you needed to keep track of previous values (i.e., maintaining history)? I highly encourage you to try this out with a simple kata (for example, how about giving Mars Rover a try?)
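As a sketch of the history idea (the richer Model shape here is my own assumption, not from the counter above), the Model can grow into a record that carries both the current count and the previous values, with update as the only place history changes:

```typescript
type Command = "increment" | "decrement" | "reset" | "quit";

// hypothetical richer model: current value plus the values we've seen
type Model = { count: number; history: number[] };

function update(model: Model, command: Command): Model {
  // helper that records the old count before producing the new one
  const next = (count: number): Model =>
    ({ count, history: [...model.history, model.count] });

  switch (command) {
    case "increment": return next(model.count + 1);
    case "decrement": return next(model.count - 1);
    case "reset": return next(0);
    case "quit": return model;
  }
}

const start: Model = { count: 10, history: [] };
const afterTwo = update(update(start, "increment"), "increment");
console.log(afterTwo); // { count: 12, history: [ 10, 11 ] }
```

Because update stays a pure function, the view and main loop from before don't need to change shape; only the rendering of Model does.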

Full Working Solution

import {input} from "@inquirer/prompts";

type Model = number;
type Command = "increment" | "decrement" | "reset" | "quit";
type View = (model:Model) => Promise<Command>;
type Update = (model:Model, command:Command) => Model;

function update(model:Model, command:Command): Model {
  switch(command){
    case "increment": return model+1;
    case "decrement": return model-1;
    case "reset": return 0;
    case "quit": return model;
  }
}

function view(model:Model): Promise<Command>{
  console.log(`Counter:${model}`);
  return getChoice();
}

async function getChoice(): Promise<Command>{
  const message = "Choose to (i)ncrement, (d)ecrement, (r)eset, or (q)uit";
  const validChoices = ["i", "d", "r", "q"];
  const validator = (s:string) => validChoices.includes(s?.trim().toLowerCase());
  const selection = await input({message:message, validate:validator});
  if (selection === "i") {
    return "increment";
  } else if (selection === "d"){
    return "decrement";
  } else if (selection === "r"){
    return "reset";
  } else {
    return "quit";
  }
}

async function main(model:Model, view:View, update:Update): Promise<void>{
  const command = await view(model);
  if (command === 'quit'){
    return;
  }
  const newModel = update(model, command);
  return main(newModel, view, update);
}

main(10, view, update);

Leveraging Tuples in TypeScript

In preparation for StirTrek, I'm revisiting my approach for how to implement the game of Blackjack. I find card games to be a great introduction to functional concepts as you hit the major concepts quickly and the use cases are intuitive.

Let's take a look at one of the concepts in the game, Points.

Blackjack is played with a standard deck of cards (13 Ranks and 4 Suits) where the goal is to get the closest to 21 points without going over. A card is worth Points based on its Rank. So let's go ahead and model what we know so far.

type Rank = "Ace" | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | "Jack" | "Queen" | "King"
type Suit = "Hearts" | "Clubs" | "Spades" | "Diamonds"
type Card = {readonly rank:Rank, readonly suit:Suit}

We know that a Card is worth Points based on its Rank; the rules are:

  • Cards with a Rank of 2 through 10 are worth that many points (i.e., 2's are worth 2 points, 3's are worth 3 points, ..., 10's are worth 10 points)
  • Cards with a Rank of Jack, Queen, or King are worth 10 points
  • Cards with a Rank of Ace can be worth either 1 or 11 points (depending on which one is the most advantageous)

Let's explore the Ace in more detail.

For example, if we had a hand consisting of an Ace and a King, then it could be worth either 11 (treating the Ace as a 1) or 21 (treating the Ace as an 11). In this case, we'd want to treat the Ace as an 11, as that gives us exactly 21 (specifically, a Blackjack).

In another example, if we had a hand consisting of an Ace, 6, and Jack, then it could either be worth 17 (treating the Ace as a 1) or 27 (treating the Ace as an 11). Since 27 is greater than 21 (which would cause us to bust), we wouldn't want the Ace to be worth 11.

Creating cardToPoints

Now that we have this detail, let's take a look at trying to write the cardToPoints function.

function cardToPoints(c:Card): Points { // Note we don't know what the type of this is yet
  switch(c.rank) {
    case 'Ace': return ???
    case 'King': return 10;
    case 'Queen': return 10;
    case 'Jack': return 10;
    default:
      return c.rank; // we can do this because TypeScript knows all the remaining options for Rank are numbers
  }
}

At this point, we don't know how to score the Ace because we would need to know the other cards in the hand. Since we don't have that context here, why not capture both values?

function cardToPoints(c:Card): Points { // Note we don't know what the type of this is yet
  switch(c.rank) {
    case 'Ace': return [1,11];
    case 'King': return 10;
    case 'Queen': return 10;
    case 'Jack': return 10;
    default:
      return c.rank; // we can do this because TypeScript knows all the remaining options for Rank are numbers
  }
}

In TypeScript, we can denote a tuple by listing the element types in square brackets (e.g., [number, number]). Going forward, TypeScript knows that it's a two-element array and guarantees that we can index it using 0 or 1.
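As a quick illustration (the variable names here are mine):

```typescript
// a two-element tuple: TypeScript knows index 0 and 1 are both numbers
const acePoints: [number, number] = [1, 11];

const low = acePoints[0];   // 1
const high = acePoints[1];  // 11

// tuples can also be destructured
const [lowValue, highValue] = acePoints;
console.log(lowValue, highValue); // 1 11
```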

This works; however, anything using cardToPoints now has to deal with the fact that the result could be either a number or a tuple.

When I come across cases like this, I reach for setting up a sum type to model each case.

type Hard = {tag:'hard', value:number};
type Soft = {tag:'soft', value:[number,number]}; // note that value here is a tuple of number*number
type Points = Hard | Soft

Now, when I call cardToPoints, I can use the tag field to know whether I'm working with a number or a tuple.
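For example, a hypothetical display helper (my own sketch, not from the article) can check tag to narrow the union:

```typescript
type Hard = { tag: "hard"; value: number };
type Soft = { tag: "soft"; value: [number, number] };
type Points = Hard | Soft;

// checking the tag narrows the union, so TypeScript knows the shape of value
function pointsToDisplay(p: Points): string {
  if (p.tag === "hard") {
    return `${p.value}`; // here, value is a number
  }
  const [low, high] = p.value; // here, value is the tuple
  return `${low} or ${high}`;
}

console.log(pointsToDisplay({ tag: "hard", value: 10 }));      // "10"
console.log(pointsToDisplay({ tag: "soft", value: [1, 11] })); // "1 or 11"
```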

Adding Points Together

A common workflow in Blackjack is to figure out how many points someone has. At a high level, we'd want to do the following

  • Convert each Card to Points
  • Add all the Points together

Summing things together is a common enough pattern, so we know our code is going to look something like this:

function handToPoints(cards:Card[]): Points {
  return cards.map((c)=>cardToPoints(c)).reduce(SOME_FUNCTION_HERE, SOME_INITIAL_VALUE_HERE);
}

We don't have the reducer function defined yet, but we do know that it's a function that'll take two Points and return a Points. So let's stub that out.

function addPoints(a:Points, b:Points): Points {
  // implementation
}

Since we modeled Points as a sum type, we can use the tag field to go over the possible cases

function addPoints(a:Points, b:Points): Points {
  if (a.tag === 'hard' && b.tag === 'hard') {
    // logic
  }
  if (a.tag === 'hard' && b.tag === 'soft'){
    // logic
  }
  if (a.tag === 'soft' && b.tag === 'hard'){
    // logic
  } 
  // last case is both of them are soft
}

With this skeleton in place, let's start implementing each of the branches

Adding Two Hard Values

The first case is the easiest: if we have two hard values, then we add their values together. So a King and a 7 give us 17, for example.

function addHardAndHard(a:Hard, b:Hard): Points { // note that I'm defining a and b as Hard and not just Points
  const value = a.value + b.value;
  return {tag:'hard', value};
}

With this function defined, we can update addPoints like so

function addPoints(a:Points, b:Points): Points {
  if (a.tag === 'hard' && b.tag === 'hard'){
    return addHardAndHard(a,b);
  }
  // other branches
}

Adding Hard and Soft

The next two cases are the same idea: we're adding a Hard value to a Soft value, for example, adding a 6 to an Ace. We can't assume the answer is 7, since that might not be what the player wants, and we can't assume it's 17 either, because that might not be to the player's advantage. That means we need to keep track of both options, which implies the result is a Soft value. Let's go ahead and write that logic out.

function addHardAndSoft(a:Hard, b:Soft): Points { // note that a is typed to be Hard and b is typed as Soft
  const [bLow, bHigh] = b.value; // destructuring the tuple into specific pieces
  return {tag:'soft', value:[a.value+bLow, a.value+bHigh]};
}

With this function in place, we can write out the next two branches

function addPoints(a:Points, b:Points): Points {
  if (a.tag === 'hard' && b.tag === 'hard'){
    return addHardAndHard(a, b);
  }
  if (a.tag === 'hard' && b.tag === 'soft'){
    return addHardAndSoft(a, b);
  }
  if (a.tag === 'soft' && b.tag === 'hard'){
    return addHardAndSoft(b, a); 
  }
  // remaining logic
}

Adding Soft and Soft

The last case we need to handle is when both Points are Soft. If we were to break this down, we have four values (aLow and aHigh for a, and bLow and bHigh for b) we need to keep track of:

  1. aLow + bLow
  2. aHigh + bLow
  3. aLow + bHigh
  4. aHigh + bHigh

However, let's play around with this by assuming that the Points in question are both Aces. We would get the following:

  1. aLow + bLow = 1 + 1 = 2
  2. aHigh + bLow = 11 + 1 = 12
  3. aLow + bHigh = 1 + 11 = 12
  4. aHigh + bHigh = 11 + 11 = 22

Right off the bat, we can discard case 4 (aHigh + bHigh), because there is no situation where the player would want that score, as they would bust.

For cases 2 and 3, they yield the same value, so they're essentially the same case.

Which means, that our real cases are

  1. aLow + bLow
  2. aHigh + bLow (which is the same as aLow + bHigh)

So let's go ahead and write that function

function addSoftAndSoft(a:Soft, b:Soft): Points {
  const [aLow, aHigh] = a.value;
  const [bLow] = b.value; // note that we're only grabbing the first element of the tuple here
  return {tag:'soft', value:[aLow+bLow, aHigh+bLow]};
}
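A quick sanity check with two Aces (restating the type shapes from earlier in the post) shows we get the two distinct options we reasoned about above:

```typescript
// Shapes as defined earlier in the post
type Hard = { tag: 'hard'; value: number };
type Soft = { tag: 'soft'; value: [number, number] };
type Points = Hard | Soft;

function addSoftAndSoft(a: Soft, b: Soft): Points {
  const [aLow, aHigh] = a.value;
  const [bLow] = b.value; // only the first element of b's tuple is needed
  return { tag: 'soft', value: [aLow + bLow, aHigh + bLow] };
}

const ace: Soft = { tag: 'soft', value: [1, 11] };
console.log(addSoftAndSoft(ace, ace)); // { tag: 'soft', value: [ 2, 12 ] }
```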

Which gives us the following for addPoints

function addPoints(a:Points, b:Points): Points {
  if (a.tag === 'hard' && b.tag === 'hard'){
    return addHardAndHard(a, b);
  }
  if (a.tag === 'hard' && b.tag === 'soft'){
    return addHardAndSoft(a, b);
  }
  if (a.tag === 'soft' && b.tag === 'hard'){
    return addHardAndSoft(b, a);
  }
  return addSoftAndSoft(a as Soft, b as Soft);
}
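Here's a self-contained sketch that puts all four branches together, using the types and helper functions from earlier in the post:

```typescript
// Types and helpers as defined earlier in the post
type Hard = { tag: 'hard'; value: number };
type Soft = { tag: 'soft'; value: [number, number] };
type Points = Hard | Soft;

function addHardAndHard(a: Hard, b: Hard): Points {
  return { tag: 'hard', value: a.value + b.value };
}
function addHardAndSoft(a: Hard, b: Soft): Points {
  const [bLow, bHigh] = b.value;
  return { tag: 'soft', value: [a.value + bLow, a.value + bHigh] };
}
function addSoftAndSoft(a: Soft, b: Soft): Points {
  const [aLow, aHigh] = a.value;
  const [bLow] = b.value;
  return { tag: 'soft', value: [aLow + bLow, aHigh + bLow] };
}

function addPoints(a: Points, b: Points): Points {
  if (a.tag === 'hard' && b.tag === 'hard') return addHardAndHard(a, b);
  if (a.tag === 'hard' && b.tag === 'soft') return addHardAndSoft(a, b);
  if (a.tag === 'soft' && b.tag === 'hard') return addHardAndSoft(b, a);
  return addSoftAndSoft(a as Soft, b as Soft);
}

const ace: Points = { tag: 'soft', value: [1, 11] };
const ten: Points = { tag: 'hard', value: 10 };
console.log(addPoints(ten, ace)); // hard + soft -> { tag: 'soft', value: [ 11, 21 ] }
console.log(addPoints(ace, ace)); // soft + soft -> { tag: 'soft', value: [ 2, 12 ] }
```

Note the `a as Soft` cast in the final branch: by that point both tags must be 'soft', but TypeScript's narrowing can't prove it across two variables at once, so we assert it ourselves.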

Now that we have addPoints, let's revisit handToPoints

// Original Implementation
// function handToPoints(cards:Card[]): Points {
//   return cards.map((c)=>cardToPoints(c)).reduce(SOME_FUNCTION_HERE, SOME_INITIAL_VALUE_HERE);
// }

function handToPoints(cards:Card[]): Points {
  return cards.map((c)=>cardToPoints(c)).reduce(addPoints, SOME_INITIAL_VALUE_HERE);
}

Now we need to figure out what SOME_INITIAL_VALUE_HERE would be. When working with reduce, a good way to pick an initial value is to ask: what would we return if there were no cards in the hand? That hand would be worth 0 points, right? But we can't just return the number 0, since our function returns Points, so we need to go from 0 to Points. Easy enough: we can represent it as a Hard value.

function handToPoints(cards:Card[]): Points {
  const initialValue:Points = {tag:'hard', value:0};
  return cards.map((c)=>cardToPoints(c)).reduce(addPoints, initialValue);
}

const hand = [{rank:'Ace', suit:'Hearts'}, {rank:7, suit:'Clubs'}]
console.log(handToPoints(hand)); // {tag:'soft', value:[8, 18]};

For those who know a bit of category theory, you might notice that addPoints is the operation and Hard 0 is the identity for a monoid over Points.

One Last Improvement

This code works, but we can make one more improvement to addPoints. Let's take a look at what happens when we try to get the Points for the following hand:

const hand: Card[] = [
  {rank:'Ace', suit:'Diamonds'},
  {rank:8, suit:'Hearts'},
  {rank:4, suit:'Clubs'},
  {rank:8, suit:'Spades'}
]

console.log(handToPoints(hand)); // {tag:'soft', value:[21, 31]};

Huh, we got a correct value, but for a Soft score it doesn't make sense to offer the player a choice between 21 and 31, because 31 is always a bust. Even though the answer isn't wrong per se, it does allow the user to do the wrong thing later on, which isn't great.

Let's add one more function, normalize, that checks whether the Points is a Soft with a high value over 21. If so, we convert it to a Hard and throw out the value over 21. Otherwise, we return the value unchanged (since it's possible for someone to have a Hard score over 21).

function normalize(p:Points): Points {
  if (p.tag === 'soft' && p.value[1] > 21){
    return {tag:'hard', value:p.value[0]}
  }
  return p;
}

// updated addPoints with normalize being used
function addPoints(a:Points, b:Points): Points {
  if (a.tag === 'hard' && b.tag === 'hard'){
    return normalize(addHardAndHard(a, b));
  }
  if (a.tag === 'hard' && b.tag === 'soft'){
    return normalize(addHardAndSoft(a, b));
  }
  if (a.tag === 'soft' && b.tag === 'hard'){
    return normalize(addHardAndSoft(b, a));
  }
  return normalize(addSoftAndSoft(a as Soft, b as Soft));
}

// Note: There's some minor refactoring that we could do here (for example, creating an internal function for handling the add logic and updating `addPoints` to use that function with normalize),
// but will leave that as an exercise to the reader :)
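With normalize in place, the problematic hand from before collapses to a single, sensible value. Here's a self-contained sketch using the types and helpers from earlier in the post (folding over Points values directly rather than Cards, to keep the example short):

```typescript
// Types and helpers as defined earlier in the post
type Hard = { tag: 'hard'; value: number };
type Soft = { tag: 'soft'; value: [number, number] };
type Points = Hard | Soft;

function addHardAndHard(a: Hard, b: Hard): Points {
  return { tag: 'hard', value: a.value + b.value };
}
function addHardAndSoft(a: Hard, b: Soft): Points {
  const [bLow, bHigh] = b.value;
  return { tag: 'soft', value: [a.value + bLow, a.value + bHigh] };
}
function addSoftAndSoft(a: Soft, b: Soft): Points {
  const [aLow, aHigh] = a.value;
  const [bLow] = b.value;
  return { tag: 'soft', value: [aLow + bLow, aHigh + bLow] };
}
function normalize(p: Points): Points {
  if (p.tag === 'soft' && p.value[1] > 21) {
    return { tag: 'hard', value: p.value[0] };
  }
  return p;
}
function addPoints(a: Points, b: Points): Points {
  if (a.tag === 'hard' && b.tag === 'hard') return normalize(addHardAndHard(a, b));
  if (a.tag === 'hard' && b.tag === 'soft') return normalize(addHardAndSoft(a, b));
  if (a.tag === 'soft' && b.tag === 'hard') return normalize(addHardAndSoft(b, a));
  return normalize(addSoftAndSoft(a as Soft, b as Soft));
}

// Ace, 8, 4, 8 -- previously soft [21, 31], now a clean hard 21
const hand: Points[] = [
  { tag: 'soft', value: [1, 11] }, // Ace
  { tag: 'hard', value: 8 },
  { tag: 'hard', value: 4 },
  { tag: 'hard', value: 8 },
];
const total = hand.reduce(addPoints, { tag: 'hard', value: 0 } as Points);
console.log(total); // { tag: 'hard', value: 21 }
```

Note how the Soft value gets demoted to Hard mid-fold: as soon as the Ace's high option would bust (after adding the 4), normalize throws it away, so the final 8 is added to a Hard 13.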

Wrapping Up

In this post, we took a look at using tuples in TypeScript by tackling a portion of the game of Blackjack. Whether it's through using it in types (like we did for Soft) or for destructuring values (like we did in the various addX functions), they can be a handy way of grouping data together for short-term operations.

Interested in knowing more?

If you've enjoyed the above, then you might be interested in my new course (launching Summer 2025) where we build out the game of Blackjack using these concepts in TypeScript. Click here if you're interested in getting an update for when the course goes live!

Tips and Tricks with TypeScript

One of my most recent projects has been tackling how to model the card game, Love Letter. For those who've seen me present my How Functional Programming Made Me a Better Developer talk, you might recall that this was my second project to tackle in F# and that even though I was able to get some of it to work, there were some inconsistencies in the rules that I wasn't able to reason about.

While implementing in TypeScript, I came across some cool tricks and thought I'd share some of them here, enjoy!

Swapping two variables

Swapping two values around is a common enough task in programming. When I was learning C++, I used a memory trick to remember the ordering (think of a zigzag pattern):

int a = 100;
int b = 200;

// Note how a starts on the right, then goes left
int temp = a;
a = b;
b = temp;

You can do the same thing in TypeScript; however, you can remove the need for the temp variable by using array destructuring. The idea is that we create an array containing the two values to swap, then assign that array to a destructured pair of variables (see below).

let a:number = 100;
let b:number = 200;

// using array destructuring

[a, b] = [b, a];

Drawing a Card

One of the common interactions that happens in the game is that we need to model drawing a card from a deck. As such, we will have a function that looks like the following

type Card = "Guard" | "Priest" | "Baron" | .... // other options omitted for brevity
type Deck = Card[]

function draw(d:Deck): {card:Card, restOfDeck:Deck} {
  const card = d[0];
  const restOfDeck = d.slice(1);
  return {card, restOfDeck};
}

This absolutely works, however, we can use the rest operator (...) with array destructuring to simplify this code.

type Card = "Guard" | "Priest" | "Baron" | .... // other options omitted for brevity
type Deck = Card[]

function draw(d:Deck): {card:Card, restOfDeck:Deck} {
  const [card, ...restOfDeck] = d; // note the rest operator before restOfDeck
  return {card, restOfDeck};
}

This code should be read as "assign the first element of the array to a const called card and the rest of the array to a const called restOfDeck."
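Here's a runnable sketch of draw with a trimmed-down Card type:

```typescript
// Trimmed-down Card type for the sake of the example
type Card = 'Guard' | 'Priest' | 'Baron';
type Deck = Card[];

function draw(d: Deck): { card: Card; restOfDeck: Deck } {
  const [card, ...restOfDeck] = d; // first element -> card, everything else -> restOfDeck
  return { card, restOfDeck };
}

const { card, restOfDeck } = draw(['Guard', 'Priest', 'Baron']);
console.log(card);       // Guard
console.log(restOfDeck); // [ 'Priest', 'Baron' ]
```

One caveat worth knowing: if the deck is empty, card will be undefined at runtime even though the type says Card, so real code would want to guard against an empty deck first.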

Drawing Multiple Cards with Object Destructuring

Building on draw, let's write a function that draws multiple cards from the deck. A first pass might look like this:
function draw(d:Deck): {card:Card, restOfDeck:Deck} {
  const [card, ...restOfDeck] = d;
  return {card, restOfDeck};
}

function drawMultipleCards(d:Deck, count:number): {cards:Card[], restOfDeck:Deck} {
  let currentDeck = d;
  const cards:Card[] = [];
  for (let i = 0; i < count; i++) {
    const result = draw(currentDeck)
    cards.push(result.card)
    currentDeck = result.restOfDeck
  }
  return {cards, restOfDeck:currentDeck};
}

This works; however, we don't actually care about result itself, just the specific values card and restOfDeck. Instead of reaching into them via property access, we can use object destructuring to pull their values out directly.

function drawMultipleCards(d:Deck, count:number): {cards:Card[], restOfDeck:Deck} {
  let currentDeck = d;
  const cards:Card[] = [];
  for (let i = 0; i < count; i++) {
    const {card, restOfDeck} = draw(currentDeck) // note using curly braces
    cards.push(card)
    currentDeck = restOfDeck
  }
  return {cards, restOfDeck:currentDeck};
}
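Here's the destructured version as a self-contained, runnable sketch (again with a trimmed-down Card type):

```typescript
// Trimmed-down Card type for the sake of the example
type Card = 'Guard' | 'Priest' | 'Baron';
type Deck = Card[];

function draw(d: Deck): { card: Card; restOfDeck: Deck } {
  const [card, ...restOfDeck] = d;
  return { card, restOfDeck };
}

function drawMultipleCards(d: Deck, count: number): { cards: Card[]; restOfDeck: Deck } {
  let currentDeck = d;
  const cards: Card[] = [];
  for (let i = 0; i < count; i++) {
    // object destructuring pulls card and restOfDeck straight out of draw's result
    const { card, restOfDeck } = draw(currentDeck);
    cards.push(card);
    currentDeck = restOfDeck;
  }
  return { cards, restOfDeck: currentDeck };
}

console.log(drawMultipleCards(['Guard', 'Priest', 'Baron'], 2));
// { cards: [ 'Guard', 'Priest' ], restOfDeck: [ 'Baron' ] }
```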

Today I Learned: Triggering PowerAutomate Through Http Request

If you're working within the Microsoft ecosystem, you might have run across a tool called PowerAutomate, Microsoft's low-code solution for building out simple automations (similar to tools like If This Then That (IFTTT)).

A common way I've used PowerAutomate is integrating with Teams to kick off a workflow when someone posts a message. For example, you could have a trigger listening for posts in your support channel that auto-responds with a canned message and tags whoever your support person or support team is.

The problem I've run into with PowerAutomate is that it has a steep learning curve and isn't very intuitive for engineers to pick up. As such, we typically use PowerAutomate for simpler flows (or when we need to integrate with another Microsoft product like Teams or Planner).

Recently, I was tackling a problem where we had an application that needed to post a message into a Teams channel. Back in the day, I would have used a webhook and published the message that way; however, Microsoft removed the ability to post into Teams via webhooks in late 2024. The main solution going forward is to build a Teams application and install it for your tenant/channel. This can be a bit annoying if all you have is a simple workflow that you need to stand up.

This got me thinking: I knew I could use PowerAutomate to send a message to Teams, but could I trigger the flow programmatically? (Up to this point, the only triggers I was familiar with were timers and events like someone posting a message.)

Doing some digging, I found that PowerAutomate can trigger a workflow off of an HTTP request.

Jackpot!

Even better, we can give the PowerAutomate flow a specific payload to expect as part of the request body.

Combining these concepts, I can create a flow like the following:

First, let's set up a trigger that takes in a payload. In this case, the caller will give us the message content and a bit more info on which team and channel to post this message into.

Setting up the trigger

After setting up the trigger, we can then add a step for posting a message in Teams. Instead of using the pre-populated options that we get from our connection, we can use the values from the trigger instead.

Setting up the message step

After saving, our entire flow looks like the following

Full flow

From here, we can then use the URL that was created as part of the trigger and have our application invoke the flow.
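As a sketch of what that invocation might look like from a TypeScript application: the payload fields below (message, teamId, channelId) are placeholders for whatever schema you defined on your trigger, and the URL is whatever PowerAutomate generated for you.

```typescript
// Placeholder payload shape -- match this to the schema you defined on the trigger
type FlowPayload = { message: string; teamId: string; channelId: string };

// Builds the POST request for the flow's HTTP trigger
function buildFlowRequest(flowUrl: string, payload: FlowPayload) {
  return {
    url: flowUrl,
    init: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(payload),
    },
  };
}

const { url, init } = buildFlowRequest('<url-from-your-trigger>', {
  message: 'Deployment finished!',
  teamId: '<team-id>',
  channelId: '<channel-id>',
});
// await fetch(url, init); // fires the flow, which posts the message into Teams
console.log(init.body);
```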

Today I Learned: Configuring Git to Use A Specific Key for a Specific Repo

When working with a Git repository, there are two ways to authenticate yourself: by using your username/password or by leveraging an SSH key that proves who you are. I like the SSH key approach best since I can create a key for my machine, associate it with my profile, and then I'm good to go.

However, the problem arises when I have to juggle different SSH keys. For example, I may have two different accounts (a personal one and a work one), and these accounts can't use the same SSH key (in fact, if you try to add the same SSH key to two different accounts, you'll get an error message).

In these cases, I'd like to be able to specify which SSH key to use for a given repository.

Doing some digging, I found that the following configures the sshCommand that Git uses:

git config core.sshCommand 'ssh -i C:\\users\\<name>\\.ssh\\<name_of_private_key>'

Because we're not specifying a scope (like --global), this will only apply to the current repository.

Today I Learned: Configuring HttpClient via Service Registration

When integrating with an external service via an API call, it's common to create a class that encapsulates dealing with the API. For example, if I were interacting with the GitHub API, I might create a C# class that wraps the HttpClient, like the following:

public interface IGitHubService
{
    Task<string> GetCurrentUsername();
}

public class GitHubService : IGitHubService
{
    private readonly HttpClient _client;
    public GitHubService(HttpClient client)
    {
        _client = client;
    }
    public async Task<string> GetCurrentUsername()
    {
        // code implementation
    }
}

Repetition of Values

This is a great start, but over time, your class might end up like the following:

public class GitHubService
{
    private readonly HttpClient _client;
    public GitHubService(HttpClient client)
    {
        _client = client;
    }
    public async Task<string> GetCurrentUsername()
    {
        // Note the await and generic type parameter; User/Team are DTOs assumed to be defined elsewhere
        var result = await _client.GetFromJsonAsync<User>("https://api.github.com/user");
        return result.Login;
    }
    public async Task<List<string>> GetAllUsers()
    {
        var result = await _client.GetFromJsonAsync<List<User>>("https://api.github.com/users");
        return result.Select(x => x.Login).ToList();
    }
    public async Task<List<string>> GetTeamNamesForOrg(string org)
    {
        var result = await _client.GetFromJsonAsync<List<Team>>($"https://api.github.com/orgs/{org}/teams");
        return result.Select(x => x.Name).ToList();
    }
}

Right off the bat, we're repeating the base URL in each method call. To remove the repetition, we could extract it to a constant.

public class GitHubService
{
    private readonly HttpClient _client;
    // Setting the base URL for later usage
    private const string _baseUrl = "https://api.github.com";
    public GitHubService(HttpClient client)
    {
        _client = client;
    }
    public async Task<string> GetCurrentUsername()
    {
        // User/Team are DTOs assumed to be defined elsewhere
        var result = await _client.GetFromJsonAsync<User>($"{_baseUrl}/user");
        return result.Login;
    }
    public async Task<List<string>> GetAllUsers()
    {
        var result = await _client.GetFromJsonAsync<List<User>>($"{_baseUrl}/users");
        return result.Select(x => x.Login).ToList();
    }
    public async Task<List<string>> GetTeamNamesForOrg(string org)
    {
        var result = await _client.GetFromJsonAsync<List<Team>>($"{_baseUrl}/orgs/{org}/teams");
        return result.Select(x => x.Name).ToList();
    }
}

This helps remove the repetition, however, we're now keeping track of a new field, _baseUrl. Instead of using this, we could leverage the BaseAddress property and set that in the service's constructor.

public class GitHubService
{
    private readonly HttpClient _client;
    public GitHubService(HttpClient client)
    {
        _client = client;
        _client.BaseAddress = new Uri("https://api.github.com"); // Setting the base address for the other requests.
    }
    public async Task<string> GetCurrentUsername()
    {
        // User/Team are DTOs assumed to be defined elsewhere
        var result = await _client.GetFromJsonAsync<User>("/user");
        return result.Login;
    }
    public async Task<List<string>> GetAllUsers()
    {
        var result = await _client.GetFromJsonAsync<List<User>>("/users");
        return result.Select(x => x.Login).ToList();
    }
    public async Task<List<string>> GetTeamNamesForOrg(string org)
    {
        var result = await _client.GetFromJsonAsync<List<Team>>($"/orgs/{org}/teams");
        return result.Select(x => x.Name).ToList();
    }
}

I like this refactor because we remove the field and we have our configuration in one spot. That being said, interacting with an API typically requires more information than just the URL, for example, setting an API token or specifying that we always expect JSON in the response. We could add the header setup in each method, but that seems quite duplicative.

Leveraging Default Request Headers

We can centralize our request headers by leveraging the DefaultRequestHeaders property and updating our constructor.

public class GitHubService
{
    private readonly HttpClient _client;
    public GitHubService(HttpClient client)
    {
        _client = client;
        _client.BaseAddress = "https://api.github.com";
        _client.DefaultRequestHeaders.Add("Accept", "application/vnd.github+json");
        _client.DefaultRequestHeaders.Add("Authorization", $"Bearer {yourTokenGoesHere}");
        _client.DefaultRequestHeaders.Add("X-GitHub-Api-Version", "2022-11-28");
    }
    public async Task<string> GetCurrentUsername()
    {
        // User/Team are DTOs assumed to be defined elsewhere
        var result = await _client.GetFromJsonAsync<User>("/user");
        return result.Login;
    }
    public async Task<List<string>> GetAllUsers()
    {
        var result = await _client.GetFromJsonAsync<List<User>>("/users");
        return result.Select(x => x.Login).ToList();
    }
    public async Task<List<string>> GetTeamNamesForOrg(string org)
    {
        var result = await _client.GetFromJsonAsync<List<Team>>($"/orgs/{org}/teams");
        return result.Select(x => x.Name).ToList();
    }
}

I like this refactor because all of the service's configuration sits right next to how we're using it, which makes it easy to troubleshoot. At this point, we just need to register our service in the Inversion of Control (IoC) container and everything will work.

Generally, you'll find this registration in Startup.cs, and it would look like:

services.AddTransient<IGitHubService, GitHubService>();

An Alternative Approach for Service Registration

However, I learned that when you're building a service that wraps an HttpClient, there's another registration method you can use: AddHttpClient with the Typed Client approach.

Let's take a look at what this would look like.

// In Startup.cs

services.AddHttpClient<IGitHubService, GitHubService>(client => {
    client.BaseAddress = new Uri("https://api.github.com");
    client.DefaultRequestHeaders.Add("Accept", "application/vnd.github+json");
    client.DefaultRequestHeaders.Add("Authorization", $"Bearer {apiTokenGoesHere}");
    client.DefaultRequestHeaders.Add("X-GitHub-Api-Version", "2022-11-28");
});

We've essentially moved our configuration logic from the GitHubService to the IoC container, simplifying the service.

public class GitHubService : IGitHubService
{
    private readonly HttpClient _client;
    public GitHubService(HttpClient client)
    {
        _client = client;
    }
    public async Task<string> GetCurrentUsername()
    {
        // User/Team are DTOs assumed to be defined elsewhere
        var result = await _client.GetFromJsonAsync<User>("/user");
        return result.Login;
    }
    public async Task<List<string>> GetAllUsers()
    {
        var result = await _client.GetFromJsonAsync<List<User>>("/users");
        return result.Select(x => x.Login).ToList();
    }
    public async Task<List<string>> GetTeamNamesForOrg(string org)
    {
        var result = await _client.GetFromJsonAsync<List<Team>>($"/orgs/{org}/teams");
        return result.Select(x => x.Name).ToList();
    }
}

My Thoughts

Even though this is a new-to-me approach, I'm torn on whether I like it. On one hand, I appreciate that we can centralize the logic so that everything for the GitHubService is in one spot. However, if we needed other dependencies to configure the service (for example, getting the bearer token from AppSettings), I could see this getting more complicated, though still contained.

On the other hand, we could shift all that configuration to the IoC container and let it deal with it. That definitely streamlines the GitHubService so we can focus on the endpoints and their logic; however, now I've got to look in two spots to see how the client is being configured.