My Experience Preparing for the Azure Administrator Associate (AZ-104) Exam

There's been a bit of a lull on the blog the past couple of weeks as I've been focusing my time on studying and preparing for the AZ-104 exam. This was a particularly challenging certification for me: I don't have a traditional IT admin background, so I had to not only shore up the gaps in that knowledge, but also learn how to model similar concepts in Azure.

That being said, I was able to pass the exam on my first take and wanted to share some advice for those who are looking to take this or other Azure exams.

Expand your studying outside of the Microsoft Learn documentation

The Microsoft Learn docs are fine for doing a deep dive into a subject, but if it's your first time learning a concept, they can be a bit rough, as they assume knowledge you might not have. To help round out your learning, I recommend finding other resources like videos, books, or articles.

Build and Experiment in Azure

Given that most of these concepts are pretty abstract, I found that they stuck with me much more when I built out the resources myself. For example, when working with a Virtual Machine, all of its components need to be in the same region. You can either memorize that fact, OR you can know it has to be true because when you built out the VM in Azure and tried to mix components across regions, it failed.

Don't Rely Solely on Practice Exams

Back in 2019, I was studying for the 483 (exam on C#), and the advice at the time was to go over the practice exams over and over again until things stuck. Following the same advice, I took tons of practice exams (through MS Learn and MeasureUp), and though they might have had the same format as the exam (multiple choice, drag-and-drop, etc.), none of them were a good stand-in for the real exam.

The reason is that the exam questions likely won't ask you to define a term; they're more likely to ask how you'd solve a problem (which expects you to know the terminology inherently). So if you don't have the underlying knowledge, you're going to have a bad time trying to answer the questions.

Where the practice exams shone was in helping me identify areas I needed to focus on. For example, if I struggled in the Networking section, then I knew I needed to revisit concepts there. This helped me make the most of my studying time.

Using Generative AI To Help Understand Concepts

Given that I don't come from a networking/IT background, there were some concepts that were quite confusing to me. For example, I was trying to understand why I would need System routes if we already had Network Security Groups, and Copilot was able to give me the following:

Asking Copilot why I would need system routes

To make sure I didn't fall victim to hallucinations, I followed up on the links that Copilot provided to confirm that I understood the concepts. But given that I learn best by asking questions, this was a major win for me, since you can't ask questions of books or videos.

Resources That Helped Me

For those looking to study up on this exam, I had success using these resources. Note: I do not receive compensation for these recommendations.

Today I Learned - Using TypeSpec to Generate OpenAPI Specs

Recently, I was doing analysis for a project where we needed to build out a set of APIs for consumers to use. Even though I'm a big believer in iterative design, we wanted to have a solid idea of what the routes and data models were going to look like.

In the past, I most likely would have generated a .NET Web API project, created the controllers/models, and finally leveraged NSwag to generate Swagger documentation for the API. Even though this approach works, it takes more time on the implementation side (spinning up controllers, configuring ASP.NET, creating the models, adding attributes). In addition, if the actual API isn't being written in .NET, then this code becomes throwaway pretty quickly.

Since tooling is always evolving, I stumbled across another tool, TypeSpec. Heavily influenced by TypeScript, it allows you to write your contracts and models and, when compiled, produces an OpenAPI-compliant spec.

As a bonus, it's not restricted to just API specs, as it also supports generating JSON Schemas and gRPC's Protocol Buffers (protobuf).

Getting Started

All code for this post can be found on my GitHub.

Given that it's inspired by TypeScript, the tooling requires having Node installed (at least version 20, though I'd recommend the long-term support (LTS) version).

From there, we can install the TypeSpec tooling with:

npm install @typespec/compiler

Even though this is all the tooling that's required, I'd recommend installing an extension for either Visual Studio or Visual Studio Code so that you can get Intellisense and other visual cues while you're writing the code.

Bootstrapping the project

Now that we've got the tooling squared away, let's create our project.

mkdir bookstore-api # let's make a directory to hold everything
cd bookstore-api
tsp init --template rest

Enter a project name and choose the defaults. Once it's finished bootstrapping, you can install the necessary dependencies using tsp install.

Building Our First API

For our bookstore application, let's say that we want to have an inventory route where someone can retrieve information about a book.

For this work, I'm picturing the following:

# Route -> GET api/inventory/{id}
# Returns 200 or 404

In the project, locate the main.tsp file and add the following

// the imports are required before the using statements
import "@typespec/http";
import "@typespec/rest";

using TypeSpec.Http;
using TypeSpec.Rest;

@service({
    title: "Bookstore Service"
})
namespace Bookstore {

}

After adding this code, run tsp compile . (note the period). This will create a file, openapi.yaml, in the tsp-output/@typespec/openapi3 folder.

We can open that file and see what our OpenAPI spec looks like:

openapi: 3.0.0
info:
  title: Bookstore Service
  version: 0.0.0
tags: []
paths: {}
components: {}

So far, not much to look at. However, if we copy this code and feed it to an online renderer (like https://editor.swagger.io/), we'll get a message about no operations.

Swagger.io saying there are no operations

Let's change that by building out our GET endpoint.

Back in main.tsp, let's add more code to our Bookstore namespace.

namespace Bookstore {
    @tag("Inventory")
    @route("inventory")
    namespace Inventory {
        @get op getBook(@path bookId:string): string
    }
}

After running tsp compile ., we'll see that our yaml has been updated, and if we render it again, we'll have our first endpoint.

Swagger.io rendering inventory by ID route

This is closer to what we want; however, we're currently returning a string when we should be returning a Book.

For this exercise, we'll say that a Book has the following:

  • an id (of string)
  • a title (of string)
  • a price (of number, minimum 1)
  • author name (of string)

Let's add this model to main.tsp

namespace Bookstore {
    // Note that we've added this to the Bookstore namespace
    model Book {
        id: string;
        title: string;

        @minValue(1)
        price: decimal;

        authorName: string;
    }
    @tag("Inventory")
    @route("inventory")
    namespace Inventory {

        // For our get, we're now returning a Book, instead of a string.
        @get op getBook(@path bookId: string): Book; 
    }
} 

After another run of tsp compile . and rendering the yaml file, we see that we now have a schema for our get method.

Swagger showing the updated model

Refactoring a Model

Even though this works, the Book model is a bit lazy: it has authorName as a property instead of an Author model, which would have a name (and a bit more information). Let's update Book to have an Author property.

model Author {
    id: string;

    @minLength(1)
    surname: string;

    @minLength(1)
    givenName: string;
}
model Book {
    id: string;
    title: string;

    @minValue(1)
    price: decimal;

    author: Author;
}

After making this change, we can see that we now have a nested model for Book.

Swagger showing both Book and Author model

Handling Failures

We're definitely taking a step in the right direction; however, our API definition isn't quite done. Right now, it says that we'll always return a 200 status code.

I don't know about you, but our bookstore isn't good enough to generate books with fictitious IDs, so we need to update our contract to say that it can also return 404s.

Back in main.tsp, we're going to change the return type of the @get operation from a Book to a union type.

@get op getBook(@path bookId: string): 
// Either it returns a 200 with a body of Book
{
    @statusCode statusCode: 200;
    @body book: Book;
} | { // Or it will return a 404 with an empty body
    @statusCode statusCode: 404;
};

With this final change, we can compile, render the yaml, and see that the route can return a 404 as well.

Swagger showing that the route can return a 404 as well

Next Steps

When I first started with TypeSpec, my first thought was that you could put this code under continuous integration (CI) and have it produce the OpenAPI format as an artifact for other teams to pull in and auto-generate clients from.

If you're interested in learning more about that approach, drop me a line at the Coaching Corner and I may write up my results in a future post.

Leadership Playbook - Leading Through Change

There are few fundamental truths in life, one of which is that the only consistent thing is change. Whether that's through a reorganization, someone leaving the team, or the start of a new initiative, we know that what happens today is different from yesterday, and as a leader, the team is looking to you to figure out how to navigate these changes.

In this post, I'll share some tips and tricks for leading the team through change.

Explain the Reasoning

A common mistake I see leaders make is introducing a change without discussing why the change is happening. Let's look at a hypothetical situation where you're introducing the team to pull request templates.

Hey team! Starting next sprint, we're going to start using this new PR template. You can find it ....

On the one hand, the message is clear on what's changing (using a new PR template). However, it completely misses the point: why are we making this change? When we don't include the why, we catch people off guard because they may not immediately understand the problem the change is supposed to solve.

When we put people in an information vacuum, they draw their own conclusions, which can give the wrong impression about the change. This, in turn, can send the rumor mill into overdrive, making your job much harder.

Let's revisit the same scenario but include the "why" this time.

Hey team! During our last retro, it was brought up that our pull request descriptions aren't consistent, which makes reviewing them more difficult. To help build consistency, we're going to start using pull request templates. You can find it ....

By making this small change, we can squash misinformation and potential rumor mills because we're clear on the reasoning.

Which is a great segue to...

Be Transparent With the Team

When explaining the reason, don't lie or sugarcoat the reasoning, even if it makes you feel uncomfortable. Your team is smart and they'll know if you're lying to them.

A hot topic nowadays is Return to Office (RTO) plans, with a common reason being the "need for more in-person collaboration." Even though this provides a "why," it's not backed by metrics or anything measurable. In addition, in-person doesn't necessarily mean more collaboration.

A better approach is to use employee engagement surveys or customer satisfaction surveys to measure the effectiveness of the team or company. If you can't use these metrics (or other relevant ones), then why are you introducing this change?

Going back to our hypothetical RTO, let's say that the reason we're going back to the office is that we're a start-up whose investors have already paid for our space. To them, having people in the space helps them feel that their money is well spent (in addition, some companies get a tax break for moving to a state as long as they keep a percentage of people on-site).

This is what you should be telling your team. They may not like the reason and they might disagree, but they know the why, and that lets them make their own decisions. Long story short, you're giving them autonomy because they have all the information.

Repetition

Humans don't have perfect memories, right? So why would we expect that once we introduce a change, that's the last time we need to talk about it?

Regardless of the change, don't be surprised if you need to mention it 3 or 4 times before people finally start understanding and applying the change. A great mentor of mine once told me that he would tell people about the change time and time again until it stuck. While this was happening, he would show patience, repeat the messaging, and answer questions and concerns.

When we hear of change, our first step is to start processing the message and its immediate impacts. Some people will have questions immediately, while others need time to stew on it.

Because of this, be available to answer questions as they come up (even if you've already answered them before). Be prepared for questions during one-on-ones, after meetings, or whenever they come up.

Acknowledge the questions and answer them. If you don't know the answer, tell them that you don't know and that you're going to find the answer.

Giving Space

Don't be surprised if your team exhibits a wide range of emotions for more significant changes.

For example, if your company is doing layoffs, then it's reasonable for people to be upset, depressed, or mentally checked out.

When this happens, you have to give people space to process. This doesn't mean isolating them, but being aware that they need some time and accommodating accordingly.

A common mistake I see leaders make here is introducing a change, thinking it was low impact, so they start making other changes. In reality, this change had a high impact, and now the team is under pressure to handle the original change and whatever new commitments are coming their way.

Not only does this reduce your odds of success, but this prevents your team from dealing with the changes, which can turn into stress or frustration. If this happens enough times, people will change teams (or even jobs) just to get a change of scenery and be able to process.

Reducing Change Fatigue

Even though changes are going to happen, do not introduce one change this week, another next week, and then one more two weeks later.

As engineers, we learned that refactoring a codebase should be done in small steps to prevent functionality from breaking.

While this works great for code, this is terrible advice for humans.

When frequent changes happen, it becomes difficult to get into a rhythm with the work and the team, reducing the team's effectiveness. This is especially true when large changes keep happening every few weeks.

In the current landscape, we're seeing companies go through multiple rounds of layoffs. While this may keep them out of the news (no one reports that a company laid off ten people, even if it's the fourth time it's happened this year), it causes a feeling of dread for the survivors, who are now wondering when they'll be next.

Instead of having multiple layoffs, if companies had one (albeit larger) layoff, this would allow people to have time to adjust and proceed without as much paranoia.

Wrapping Up

Navigating change is hard - especially when you aren't just going through the change yourself, but also leading others at the same time.

The next time you're going through a change, try starting here and see how it goes:

  • Explain the "why" behind the change.
  • Be transparent about the changes, even if it's uncomfortable.
  • Be mindful that everyone processes change differently and may feel a greater impact or need more time to recover from the change.

By starting here, your chances of moving forward without significant disruption will improve dramatically.

Today I Learned: Validating Data in Zod

Validating input. You've got to do it; otherwise, you're going to be processing garbage, and that never goes well, right?

Whether it's through the front-end (via a form) or through the back-end (via an API call), it's important to make sure that the data we're processing is valid.

Coming from a C# background, I was used to ASP.NET Web API's ability to create a class and then use the FromBody attribute on the appropriate route to ensure the data is good. With this approach, ASP.NET automatically rejects requests that don't fit the data contract.

However, picking up JavaScript and TypeScript, that's not the case. At first, this surprised me because I figured this would automatically happen when using libraries like Express or Nest.js. Thinking more about it, though, it shouldn't have: ASP.NET can catch those issues because C# is statically typed. JavaScript isn't, and since TypeScript's types are erased during compilation, neither is statically typed at runtime.
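
To make that concrete, here's a minimal sketch (the CreateBookRequest type and handler are hypothetical): the cast compiles, but nothing verifies the shape at runtime.

type CreateBookRequest = { title: string };

function handleRequest(body: unknown): void {
  const request = body as CreateBookRequest; // a cast, not a check
  console.log(request.title); // logs undefined if the payload was garbage
}

handleRequest({ totallyWrong: true }); // compiles and runs without complaint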

When writing validations, I find zod to be a delightful library to leverage. There are a ton of useful built-in options, you can create your own validators (which you can then compose!), and you can infer models based on your validations.

Building the Amazin' Bookstore

To demonstrate some of the cool things that you can do with Zod, let's pretend that we're building out a new POST endpoint for creating a new book. After talking to the business, we determine that the payload for a new book should look like this:

// A valid book will have the following
// - A non-empty title
// - A numeric price (can't be negative or zero)
// - A genre from a list of possibilities (mystery, fantasy, history are examples, platypus would not be valid)
// - An ISBN which must be in a particular format
// - A valid author which must have a first name, a last name, and an optional middle name

What's in a Name?

Since the Book type needs a valid Author, let's build that out first:

import {z} from "zod";

export const AuthorSchema = z.object({

});

Since Author will need to be an object, we'll use z.object to signify that. Right off the bat, this prevents a string, number, or other primitive types from being accepted.

AuthorSchema.safeParse("someString"); // will result in a failure
AuthorSchema.safeParse(42); // will result in a failure
AuthorSchema.safeParse({}); // will result in success!

This is a great start, but we know that Author has some required properties (like a first name), so let's implement that by using z.string():

export const AuthorSchema = z.object({
    firstName: z.string()
});

With this change, let's take a look at our schema validation

AuthorSchema.safeParse({}); // fails because no firstName property
AuthorSchema.safeParse({firstName:42}); // fails because firstName is not a string
AuthorSchema.safeParse({firstName: "Cameron"}); // succeeds because firstName is present and a string

However, there's one problem with our validation: we would allow an empty firstName.

AuthorSchema.safeParse({firstName:""}); // succeeds, but should have failed :(

To make our validation stronger, we can update our firstName property to have a minimum length of 1 like so.

export const AuthorSchema = z.object({
    firstName: z.string().min(1)
});

Finally, we have a way to enforce that an author has a non-empty firstName! Looking at the requirements, it seems like lastName is going to be similar, so let's update our AuthorSchema to include lastName.

export const AuthorSchema = z.object({
    firstName: z.string().min(1),
    lastName: z.string().min(1)
});

Hmmm, it looks like we have the same concept in multiple places: the idea of a non-empty string. Let's refactor that into its own schema.

export const NonEmptyStringSchema = z.string().min(1);

export const AuthorSchema = z.object({
    firstName: NonEmptyStringSchema,
    lastName: NonEmptyStringSchema
});

Nice! We're almost done with Author; we just need to implement middleName. Unlike the other properties, an author may not have a middle name. In this case, we can leverage the optional function from zod to signify that, like so.

export const NonEmptyStringSchema = z.string().min(1);

export const AuthorSchema = z.object({
    firstName: NonEmptyStringSchema,
    lastName: NonEmptyStringSchema,
    // This would read that middleName may or may not be present.
    // If it is, then it must be a string (which could be empty).
    middleName: z.string().optional(), 
});
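
As a quick check (with made-up names), middleName can be omitted, but it still has to be a string when present:

AuthorSchema.safeParse({firstName: "Ada", lastName: "Lovelace"});                 // succeeds, middleName omitted
AuthorSchema.safeParse({firstName: "Ada", lastName: "Lovelace", middleName: 42}); // fails, middleName must be a string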

With the implementation of AuthorSchema, we can start working on the BookSchema.

Judging a Book By Its Cover

Since we have AuthorSchema, we can use it as our starting point, like so:

export const BookSchema = z.object({
    author: AuthorSchema
});

We know that a book must have a non-empty title, so let's add that to our definition. Since it's a string that must have at least one character, we can reuse the NonEmptyStringSchema definition from before.

export const BookSchema = z.object({
    author: AuthorSchema,
    title: NonEmptyStringSchema
});

Putting a Price on Knowledge

With title in place, let's leave the string theory alone for a bit and look at numbers. For the bookstore to function, we've got to sell books for some price. Let's use z.number() and add a price property.

export const BookSchema = z.object({
    author: AuthorSchema,
    title: NonEmptyStringSchema,
    price: z.number()
});

This works; however, z.number() will accept any number, including values like 0 and -5. While those prices would be great for the customer, we can't run our business that way. So let's restrict price to positive numbers, which can be accomplished by leveraging the positive function.

export const BookSchema = z.object({
    author: AuthorSchema,
    title: NonEmptyStringSchema,
    price: z.number().positive()
});

With price done, let's look at validating the genre.

Would You Say It's a Mystery or History?

Up to this point, all of our properties have been straightforward (simple strings and numbers). However, with genre, things get more complicated because it can only be one of a particular set of values. Thankfully, we can define a GenreSchema by using z.enum() like so.

export const GenreSchema = z.enum(["Fantasy", "History", "Mystery"]);

With this definition, a valid genre can only be Fantasy, History, or Mystery. Let's update our book definition to use this new schema.

export const BookSchema = z.object({
    author: AuthorSchema,
    title: NonEmptyStringSchema,
    price: z.number().positive(),
    genre: GenreSchema
});

Now, someone can't POST a book with a genre of "platypus" (though I'd enjoy reading such a book).
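
We can confirm the behavior with a couple of quick checks:

GenreSchema.safeParse("Mystery");  // succeeds, one of the allowed values
GenreSchema.safeParse("platypus"); // fails, not in the enum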

ID Please

Last, let's take a look at implementing the isbn property. This is interesting because ISBNs come in one of two shapes: ISBN-10 (for books published before 2007) and ISBN-13 (all other books).

To make this problem easier, let's focus on the ISBN-10 format for now. A valid value will be in the form #-###-#####-# (where # is a digit). You could take the validation a whole lot further, but we'll stick to the format.

Now, even though zod has built-in validators for emails, IPs, and URLs, there isn't a built-in one for ISBNs. In cases like these, we can use .refine to add our own logic (a sketch follows the schema below), but this format check is a good use case for a basic regular expression. Using regex101 as a guide, we end up with the following expression and schema for the ISBN.

const isbn10Regex = /^\d-\d{3}-\d{5}-\d$/; // anchored on both ends so trailing characters are rejected
export const Isbn10Schema = z.string().regex(isbn10Regex);
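
And here's the .refine escape hatch mentioned above, in case the rule ever outgrows a regex (say, validating the check digit). This is a minimal sketch that just re-checks the same format; the predicate is the part you'd swap out, and the schema name is mine:

export const Isbn10RefinedSchema = z.string().refine(
    (value) => /^\d-\d{3}-\d{5}-\d$/.test(value),
    { message: "Expected an ISBN-10 in #-###-#####-# format" }
);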

Building on that, an ISBN-13 has a similar format, but in the form ###-#-##-######-#. By tweaking our regex, we end up with the following:

const isbn13Regex = /^\d{3}-\d-\d{2}-\d{6}-\d$/;
export const Isbn13Schema = z.string().regex(isbn13Regex);

When modeling types in TypeScript, I'd like to be able to do something like the following, as it makes clear that an ISBN can be in one of these two shapes.

type Isbn10 = string;
type Isbn13 = string;
type Isbn = Isbn10 | Isbn13;

While we can't use the | operator with schemas, we can use the .or function from zod to get the following:

const isbn10Regex = /^\d-\d{3}-\d{5}-\d$/;
export const Isbn10Schema = z.string().regex(isbn10Regex);
const isbn13Regex = /^\d{3}-\d-\d{2}-\d{6}-\d$/;
export const Isbn13Schema = z.string().regex(isbn13Regex);

export const IsbnSchema = Isbn10Schema.or(Isbn13Schema);
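
A quick sanity check with made-up values in each format:

IsbnSchema.safeParse("1-234-56789-0");     // succeeds via Isbn10Schema
IsbnSchema.safeParse("978-1-23-456789-0"); // succeeds via Isbn13Schema
IsbnSchema.safeParse("not-an-isbn");       // fails both schemas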

With the IsbnSchema in place, let's add it to BookSchema

export const BookSchema = z.object({
    author: AuthorSchema,
    title: NonEmptyStringSchema,
    price: z.number().positive(),
    genre: GenreSchema,
    isbn: IsbnSchema
});

Getting Models for Free

Lastly, one of the cooler features that zod supports is infer: if you pass it a schema, it builds out a type for you to use in your application.

export const BookSchema = z.object({
    author: AuthorSchema,
    title: NonEmptyStringSchema,
    price: z.number().positive(),
    genre: GenreSchema,
    isbn: IsbnSchema
});

// TypeScript knows that Book must have an author (which has a firstName, lastName, and maybe a middleName)
// a title (string), a price (number), a genre (string), and an isbn (string).
export type Book = z.infer<typeof BookSchema>; 
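
As a quick (hypothetical) usage example, the inferred type gives us autocomplete and type checking anywhere we handle books:

function describeBook(book: Book): string {
    return `${book.title} by ${book.author.firstName} ${book.author.lastName}`;
}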

Full Solution

Here's what the full solution looks like

import {z} from "zod";

const NonEmptyStringSchema = z.string().min(1);
const GenreSchema = z.enum(["Fantasy", "History", "Mystery"]);

export const AuthorSchema = z.object({
  firstName: NonEmptyStringSchema,
  lastName: NonEmptyStringSchema,
  middleName: z.string().optional(),
});

export const Isbn10Schema = z.string().regex(/^\d-\d{3}-\d{5}-\d$/);
export const Isbn13Schema = z.string().regex(/^\d{3}-\d-\d{2}-\d{6}-\d$/);
export const IsbnSchema = Isbn10Schema.or(Isbn13Schema);

export const BookSchema = z.object({
  title: NonEmptyStringSchema,
  author: AuthorSchema,
  price: z.number().positive(),
  genre: GenreSchema,
  isbn: IsbnSchema,
});

export type Book = z.infer<typeof BookSchema>;

With these schemas and models defined, we can leverage the safeParse function to see if our input is valid.

describe('when validating a book', () => {
    it("and the author is missing, then it's not valid", () => {
const input = {title:"best book", price:200, genre:"History", isbn:"1-234-56789-0"}

        const result = BookSchema.safeParse(input);

        expect(result.success).toBe(false);
    });
    it("and all the fields are valid, then the book is valid", () => {
        const input = {
            title:"best book", 
            price:200, 
            genre:"History", 
isbn:"1-234-56789-0", 
            author: {
                firstName:"Super", 
                middleName:"Cool", 
                lastName:"Author"
            }
        };

        const result = BookSchema.safeParse(input);

        expect(result.success).toBe(true);
        const book:Book = result.data as Book;
        // now we can start using properties from book
        expect(book.title).toBe("best book");
    });
});

Coaching Corner Volume 5

Welcome to Cameron's Coaching Corner, where we answer questions from readers about leadership, career, and software engineering.

In this post, we'll look at a question from TheRefsAlwaysWin about how to get a new engineer on their team to open up and get comfortable asking for help.

I've got a newish member to the team and they're still early in their career. They've got a good head on their shoulders, however, they tend to go down rabbit holes when problem solving and they don't speak up or ask for help.

They've asked me a few times about "how did I know ....", and a lot of the times, it's experience (I've been doing this for a few decades now).

How do I help them open up and ask more questions to the team and group?

Functional Foundations - Functions

When leveraging functional programming, you're not going to go far without functions. It's literally in the name.

In this post, let's take a deeper look at what functions are and some of the benefits we gain.

What is a Function?

When we talk about functions, we're not talking about a programming construct (like the function keyword), but instead we're talking about functions from mathematics.

As such, a function is a mapping between two sets such that every element of the first set maps to exactly one element of the second set.

Words are cool, but pictures are better. So let's look at the mapping for the square function.

Mapping for the Square Function

In this example, we have arrows coming from elements on the left to the elements they map to on the right. To read this image: we have a mapping called Square that maps all possible numbers to the set of non-negative numbers. So -3 maps to 9 (-3 × -3), 2 maps to 4 (2 × 2), and so on and so forth.
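
In code, the same mapping could look like this:

// The Square mapping as code: every input maps to exactly one output.
const square = (x: number): number => x * x;

square(-3); // 9
square(2);  // 4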

To check if our mapping is a function, we need to check that every element on the left is mapped to a single element on the right. If so, then we've got a function!

Sounds easy, right? Let's take a look at a mapping that isn't a function.

Love in the Air?

When working with dates, it's common to figure out how many days are in the month. Not only does this help with billable days, but it also makes sure that we don't try to send invoices on May 32nd.

So let's take a look at a mapping from month to the number of days it has.

Broken Mapping for the Days In Month Function

Looking at the mapping, we can tell that January, March, and May map to 31, and April and June both map to 30. But take a look at February: it's got two arrows coming out of it, one to 28 and the other to 29. Because there are two arrows coming out, this mapping isn't a function. Let's try to implement this mapping in TypeScript.

type Month = "Jan" | "Feb" | "Mar" | "Apr"
           | "May" | "Jun" | "Jul" | "Aug"
           | "Sept" | "Oct" | "Nov" | "Dec";

type DaysInMonth = 28 | 29 | 30 | 31;

function getDaysInMonth(month: Month): DaysInMonth {
  switch (month) {
    case "Jan":
    case "Mar":
    case "May":
    case "Jul":
    case "Aug":
    case "Oct":
    case "Dec":
      return 31;

    case "Feb":
      // what should this be?

    case "Apr":
    case "Jun":
    case "Sept":
    case "Nov":
      return 30;
  }
}

We can't return 28 all the time (we'd be wrong 25% of the time) and we can't return 29 all the time (as we'd be wrong 75% of the time). So how do we know? We need to know something about the year. One approach would be to check if the current year is a leap year (algorithm).

function isLeapYear(): boolean {
  const year = new Date().getFullYear();
  if (year % 400 === 0) return true;
  if (year % 100 === 0) return false;
  if (year % 4 === 0) return true;
  return false;
}

// Updated switch
case 'Feb':
  return isLeapYear() ? 29 : 28;

The problem with this approach is that the determination of what to return doesn't come from the function's inputs, but from outside state (in this case, time). So while this "works", you can get bitten when tests start failing as the calendar flips over because they assumed February always has 28 days.

If we look at the type signature of isLeapYear, we see that it takes no inputs but returns a boolean. How could that be possible unless it always returned a constant value? This is a clue that isLeapYear is not a function.

The better approach is to change our mapping to take two arguments, a month name and a year, instead of just a month name.

Fixed Mapping For Days In Month

With this new mapping, our implementation would look like the following:

function isLeapYear(year:number): boolean {
  if (year % 400 === 0) return true;
  if (year % 100 === 0) return false;
  if (year % 4 === 0) return true;
  return false;
}

function getDaysInMonth(month: Month, year:number): DaysInMonth {
  const isLeap = isLeapYear(year);
  switch (month) {
    case "Jan":
    case "Mar":
    case "May":
    case "Jul":
    case "Aug":
    case "Oct":
    case "Dec":
      return 31;

    case "Feb":
      return isLeap ? 29 : 28;

    case "Apr":
    case "Jun":
    case "Sept":
    case "Nov":
      return 30;
  }
}
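
A quick spot check of the new signature:

getDaysInMonth("Feb", 2024); // 29, since 2024 is a leap year
getDaysInMonth("Feb", 2023); // 28
getDaysInMonth("Aug", 2023); // 31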

Benefits of Functions

Now that we've covered what functions are and aren't, let's cover some of the reasons why we prefer functions for our logic.

First, mappings help us make sure that we've covered all our bases; we saw how the getDaysInMonth mapping exposed a bug for February. Mappings can also be great conversation tools with non-engineers, as they're intuitive to understand and explain.

Second, functions are simple to test. Since the result is based solely on inputs, they are great candidates for unit testing and require little to no mocking to write them. I don't know about you, but I like simple test cases that help us build confidence that our application is working as intended.

Third, we can combine functions into bigger functions using composition. At a high level, composition says that if we have two functions f and g, we can write a new function, h, which takes the output of f and feeds it as the input to g.
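
In TypeScript, a generic compose helper could be sketched like this (the names here are mine, not from any library):

// compose(f, g) builds h such that h(a) = g(f(a))
const compose = <A, B, C>(f: (a: A) => B, g: (b: B) => C) =>
    (a: A): C => g(f(a));

const addOne = (n: number): number => n + 1;
const double = (n: number): number => n * 2;

const addOneThenDouble = compose(addOne, double);
addOneThenDouble(3); // 8, since (3 + 1) * 2 = 8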

Sounds theoretical, but let's take a look at a real example.

In the Mars Rover kata, we end up building a basic console application that takes input from the user (a string) and needs to convert it to the action the rover takes.

In code, the logic looks like the following:

let rover:Rover = {x:0, y:0, direction:'North'};
const actions = input.split('').map(convertStringToCommand).map(convertCommandToAction);
rover = actions.reduce((current, action) => action(current), rover);

The annoying part is that we're iterating the list twice (once for each map call), and it'd be nice to get it down to a single iteration. This is where composition helps.

When we run the maps back-to-back, we're accomplishing the following workflow:

Input to Command to Action Mapping

Because each mapping is a function, we can compose the two into a new function, stringToActionConverter.

// using our f and g naming from earlier, convertStringToCommand is f, convertCommandToAction is g
const stringToActionConverter = (s:string) => convertCommandToAction(convertStringToCommand(s));

let rover:Rover = {x:0, y:0, direction:'North'};
const actions = input.split('').map(stringToActionConverter);
rover = actions.reduce((current, action) => action(current), rover);

Why Not Function All the Things?

Functions can greatly simplify our mental model, as we don't have to keep track of state or other side effects. However, our applications typically have to deal with side effects (getting input from users, reading from files, interacting with databases) in order to do something useful. Because of this limitation, we strive to put all of our business rules into functions and keep the parts that interact with state as dumb as possible (that way, we don't have to troubleshoot them as much).

What I've found is that when working with applications, you end up with a workflow where input comes in, gets processed, and the result gets output.

Here's what an example workflow would look like

// Logic to determine the 'FizzBuzziness' of a number
function determineFizzBuzz(input:number): string {
  if (input % 15 === 0) return 'FizzBuzz';
  if (input % 3 === 0) return 'Fizz';
  if (input % 5 === 0) return 'Buzz';
  return `${input}`;
}

function workflow(): void {
  // Input Boundary
  var prompt = require('prompt-sync')();
  const input = prompt();

  // Business Rules
  const result = (+input) ? `${input} FizzBuzz value is ${determineFizzBuzz(+input)}` : `Invalid input`;

  // Output boundary
  console.log(result);
}

What's Next?

Now that we have a rough understanding of functions, we can start exploring what happens when things go wrong. For example, could there have been a cleaner way of implementing the business rules of our workflow?

Today I Learned - Primary Constructors

I've recently found myself picking up C# again for a project, and even though much of my knowledge still applies, I ran into the following and it took me a minute to figure out what was going on.

Let's say that we're working on the Mars Rover kata and we've decided to model the Rover type as a class with three fields: x, y, and direction.

My normal approach to this problem would have been the following:

public enum Direction
{
  North, South, East, West
}

public class Rover
{
  private int _x;
  private int _y;
  private Direction _direction;

  public Rover(int x, int y, Direction direction)
  {
    _x = x;
    _y = y;
    _direction = direction;
  }

  public void Print()
  {
    Console.WriteLine($"Rover is at ({_x}, {_y}) facing {_direction}");
  }
}

// Example usage

var rover = new Rover(10, 20, Direction.North);
rover.Print(); // Rover is at (10, 20) facing North

However, in the code I was working with, I saw the Rover definition written like this:

// note the params at the class line here
public class Rover(int x, int y, Direction direction)
{
  public void Print()
  {
    Console.WriteLine($"Rover is at ({x}, {y}) facing {direction}");
  }
}

// Example Usage
var rover = new Rover(10, 20, Direction.North);
rover.Print(); // Rover is at (10, 20) facing North

At first, I thought this was similar to the record syntax for holding onto data

public record Rover(int X, int Y, Direction Direction);

And it turns out, it is! This feature is known as a primary constructor, and when used with classes, it gives you some flexibility in how you access those inputs.

For example, in our second implementation of Rover, we're directly using x, y, and direction in the Print method.

However, let's say that we didn't want to use those parameters directly (or we need to set some state based on those inputs); then we could do the following.

public class Rover(int x, int y, Direction direction)
{
    private readonly bool _isFacingRightDirection = direction == Direction.North;
    public void Print()
    {
        if (_isFacingRightDirection)
        {
            Console.WriteLine("Rover is facing the correct direction!");
        }
        Console.WriteLine($"Rover is at ({x}, {y}) facing {direction}");
    }
}

After playing around with this for a bit, I can see how this feature would be beneficial for classes that only store their constructor arguments for later use.

Records arguably accomplish that better for pure data. But when a type needs behavior alongside its stored inputs, classes are the more natural fit, and primary constructors do provide better organization on that front.

That being said, I'm not 100% sure why primary constructors needed to be added to the language, as this opens up multiple ways of setting up constructors. I'm all for giving developers choices, but this seems ripe for bikeshedding, where teams have to decide which approach to stick with.

Today I Learned: Destructure Objects in Function Signatures

When modeling types, one thing to keep in mind is not to leverage primitive types for your domain. This comes up when we use a primitive type (like a string) to represent a core domain concept (like a Social Security number or a phone number).

Here's an example where it can become problematic:

// Definition for Customer
type Customer = {
  firstName: string,
  lastName: string,
  email: string,
  phoneNumber: string
}

// Function to send an email to customer about a new sale
async function sendEmailToCustomer(c:Customer): Promise<void> {
  const content = "Look at these deals!";

  // Uh oh, we're trying to send an email to a phone number...
  await sendEmail(c.phoneNumber, content);
}

async function sendEmail(email:string, content:string): Promise<void> {
  // logic to send email
}

There's a bug in this code: we're trying to send an email to a phone number. Unfortunately, this code type-checks and compiles, so we have to lean on other techniques (automated testing or code reviews) to discover the bug.

Since it's better to find issues earlier in the process, we can turn this into a compilation error by introducing a new type for Email, since not all strings should be treated equally.

One approach we can do is to create a tagged union like the following:

type Email = {
  label:"Email",
  value:string
}
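
To cut down on the ceremony of building these objects by hand, a small (hypothetical) constructor function helps:

const toEmail = (value: string): Email => ({ label: "Email", value });

const email = toEmail("Cameron@domain.com"); // { label: "Email", value: "Cameron@domain.com" }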

With this in place, we can change our sendEmail function to leverage the new Email type.

function sendEmail(email:Email, content:string): Promise<void> {
  // logic to send email
}

Now, we get a compilation error when we try passing in a phoneNumber.

Compilation error that we can't pass a string to an email

One downside to this approach is that to get the value from the Email type, you need to access its value property. This can be a bit harder to read and keep track of.

function sendEmail(email:Email, content:string): Promise<void> {
  const address = email.value;
  // logic to send email
}

Leveraging Object Destructuring in Functions

One technique to avoid this is to use destructuring to pull out the individual properties. This allows us to "throw away" the properties we don't need and hold onto the ones we care about. For example, let's say we wanted only the phoneNumber from a Customer. We could get that with an assignment like the following:

const customer: Customer = {
  firstName: "Cameron",
  lastName: "Presley",
  phoneNumber: "555-5555",
  email: {label:"Email", value:"Cameron@domain.com"}
}

const {phoneNumber} = customer; // phoneNumber will be "555-5555"

This works fine for assignments, but it'd be nice to have the same thing at the function level. Thankfully, we can do that like so:

// value is the property from Email, we don't have the label to deal with
function sendEmail({value}:Email, content:string): Promise<void> {
  const address = value; // note that we don't have to do .value here
  // logic to send email
}

If you find yourself using domain types like this, then this is a handy tool to have in your toolbox.

Today I Learned - Effective Pairing with Mob.sh

As someone who enjoys leveraging technology and teaching, I'm always interested in ways to simplify the teaching process.

For example, when I'm teaching someone a new skill, I follow the "show one, do one, lead one" approach and my tool of choice for the longest time was LiveShare by Microsoft.

Using VS LiveShare

I think this extension is pretty slick as it allows you to have multiple collaborators, the latency is quite low, and it's built into both Visual Studio Code (VS Code) and Visual Studio.

Drawbacks to LiveShare

Editor Lock-In

First, participants have to be using Visual Studio or VS Code. Since there's support for VS Code, this isn't quite the blocker it could be. However, let's say I want to work with a team on a Java application. They're more likely to be using IntelliJ or Eclipse as their editor, and I don't want someone to have to change their editor just to collaborate.

Security Concerns

Second, there are some security considerations to be aware of.

Given the nature of LiveShare, collaborators either connect to your machine (peer-to-peer) or go through a relay in Azure. Companies that are sensitive about where traffic is routed won't allow the Azure relay option, and given the issues with URL creation (see the next section), the peer-to-peer connection isn't much better.

To start a session, LiveShare generates a URL that the owner shares with their collaborators. As of today, there's no way to limit who can access that link. The owner has some moderation tools to block people, but there's no way to stop someone from joining who doesn't have the right kind of email address, for example.

Introducing Mob.sh

While pairing with a colleague, he introduced me to an alternative tool, mob.sh.

At first, I was a bit skeptical of this tooling, as I enjoyed the ease of use I got with LiveShare. However, after a few sessions, I found that this tool solves the problems I was using LiveShare for just as well, if not better.

How It Works

At a high level, mob.sh is a command line tool that is a wrapper around basic git commands.

Because of this design choice, it doesn't matter what editor a participant has; as long as the code in question is under git source control, the tooling works.

Let's explore how a pair, Adam and Brittany, would use this tool for work.

Adam and Brittany Start Pairing

Adam is looking to solve a logic issue in an AWS Lambda and could use Brittany's guidance since he's new to that domain.

Adam creates a new feature branch, fixing-logic-issue, and starts a new mobbing session.

git switch -c fixing-logic-issue
mob start --create
# --create is needed because fixing-logic-issue is not on the server yet

Under the hood, mob.sh has created a new branch off of fixing-logic-issue called mob/fixing-logic-issue. While Adam is making changes, they're going to occur on the mob/fixing-logic-issue branch.

Because the pair is working remotely, Adam shares his screen so that they're on the same page.

While on this branch, Adam writes a failing unit test that exposes the logic issue he's running into. From here, he signals that Brittany is up by running mob next.

mob next

By running this command, mob.sh adds and commits all the changes made on this branch and pushes them up to the server. Once this command completes, it's Brittany's turn to lead.

Once Brittany sees the mob next command complete, she checks out the fixing-logic-issue branch and picks up the next portion of the work by running mob start.

git pull # To get fixing-logic-issue branch
git switch fixing-logic-issue
mob start

Because she was on the fixing-logic-issue branch, mob.sh saw that a mob/fixing-logic-issue branch already existed, so that branch is checked out.

Based on the test, Brittany shows Adam where the failure is occurring and they write up a fix that passes the failing test.

Though there are more changes to be done, Brittany has a meeting to attend, so she ends the session by running mob done, committing, and then pushing the changes.

mob done
git commit -m "Fixed first logic bug"
git push

By running the mob done command, all the changes that were on mob/fixing-logic-issue are applied to the fixing-logic-issue branch. From here, Brittany can commit the changes and push them back to the server.

Wrapping Up

If you're looking to expand your pairing/mobbing toolkit, I recommend giving mob.sh a try. Not only is the initial investment small, but I find the tooling natural to pick up after a few tries, and since it's a wrapper around git, it reduces the amount of learning needed before getting started.

Running Effective Experiments With the Team

As a leader, you're always on the lookout for new tools and approaches to help the team be more effective.

But what happens when you have an idea? How do you introduce it to the team and get buy-in? How do you encourage others to propose ideas as well? (Remember, your job isn't to have all the ideas, but to encourage and choose the best ones.)

Let's say the idea works; what happens next? What if it failed; what do you do then? How do you share your lessons with others?

In this post, I'll walk you through my approach for running experiments with the team and how to answer these questions. Like any other advice, I've found success using this process, but you might find that you'll need to tweak or adjust for your team.

Working In the Open

When it comes to the team, I'm a big proponent of working in the open. Not only does this reduce the number of questions from my leader about what we're doing, it also empowers others to chime in when they see something off or see the team going down the wrong path.

With this philosophy in mind, I document our experiments in the team wiki. Now, I know we should favor people over processes; however, I've found immense value in taking the 10 minutes to document, as it gets everyone on the same page, and when we look back at these experiments later, we have the context behind them.

To me, this is no different than a scientist writing down their experiments for later reference.

Defining an Experiment

As you might have guessed, I'm a big fan of using the scientific method for engineering work and especially so when it comes to experiments. As such, I capture the following info:

Purple liquid being injected into one of many test tubes
Seriously, if we're not taking notes, what kind of scientists are we?
Photo by Louis Reed on Unsplash
  • Context - Why are we doing this? What inspired the experiment or what problem are we trying to solve?
  • Hypothesis - What change are we proposing and what outcome are we looking for?
  • Implementation - How are we going to run this experiment?
  • Duration - How long are we going to run this experiment for?
  • (Optional) Immediate Failure Criteria - Is there anything that could happen during this experiment that would cause us to stop it immediately?

For those looking for a template, you can find a markdown version in my Leadership Toolkit on GitHub.

Scheduling the Retrospective

With the experiment documented, I send a meeting request for the day after the experiment is scheduled to end. The goal of this meeting is to reflect on the experiment and decide whether to adopt the changes or stop.

Leading the Team

After sending this meeting request, my job is to help the team implement the experiment and coach/encourage as needed. Since it's a process change, it might take a bit for the team to adjust, so showing patience and understanding is critical here.

While we're going through the experiment, I note any changes I'm noticing. For example, if we're running an experiment to have asynchronous stand-ups, I take notes on how I feel about the team getting updates and how they're communicating with each other.

Depending on what comes up in our one-on-ones, I might even use this as a starter question.

Retrospective

Once the experiment has run its course, it's time to reflect and decide as a team whether to adopt the changes or reject them.

To prepare, I'd recommend getting the right people in the room and setting the context.

During the retro, the team should be doing the majority of the talking. Your role is to seed the conversation and make sure everyone gets their opinions out. I like to capture these notes on a board so that the team has clear visibility on what worked and didn't work.

Once the notes have been added to the board, it's time for the team to decide to adopt the changes or not. During this step, I remind the team that this process isn't set in stone and if we want to tweak it in a future experiment, that's normal and encouraged.

At this point, I update the experiment write-up that we did earlier with the team decision and the logic behind the decision. This provides an easy way of sharing our lessons with others.

Sharing Outcomes With Others

One cool thing about leading teams is that no two teams are the same. Between the personalities, skills, company culture, and motivations, what works for one team won't work for another team (and the other way around).

Because of this, it's critical to share your results with your leader and your peers. This way, they can see what you did, what worked, what didn't, and hopefully get inspired to run their own experiments with their teams.

If the team paid a learning tax for an experiment, why wouldn't we share those results with others so that they can learn from our experiences? They might be able to make suggestions to turn a failure into a success or to ask questions about how we dealt with an issue.

The group being successful is your success, so don't hoard knowledge; share it with others!

With the write-up completed, you can start simply by sending a link to the group. A better approach is to have a standing agenda item in your team lead meeting where leaders can talk about experiments that have been run recently and their outcomes.

Common Mistakes

When I've worked with leaders to introduce experiments, it can be a lot to take in because it's a different way of thinking. This is especially true if leaders are not in a psychologically safe environment or have prior experiences that weren't successful.

I can't guarantee that you'll run experiments flawlessly, however, if you avoid these common mistakes, your odds of success will be higher.

Not Time Boxing the Experiment

One of the key features of an experiment is that it only runs for a set period of time, so if you find that it's not working, you haven't permanently impacted the team.

If you have an experiment that runs into perpetuity, that's not an experiment anymore; that's a process change, and it shouldn't go through this workflow, because experiments can be abandoned but process changes typically can't.

Treating Experiments as Foregone Conclusions

At some point, you're going to get a directive from your leader that you don't agree with, but you need to commit to the idea anyway. You know the idea isn't going to go over with the team, so you think framing it as an experiment can soften the blow.

DON'T DO THIS!

Jimmy Fallon saying It's Totally Not Worth It

Really, don't do this!

Experiments are just that: experiments. They are not a vehicle for making unpopular changes. If you use experiments to slip in these types of changes, the team will learn that "experiment" is code for "not a great idea," and they'll stop using the process.

Remember, experiments are ideas that you and the team come up with to make things better, not directives from the top coming down.

Now, you could use an experiment to figure out a way to carry out the directive. A good leader tells you where we have to go, but not necessarily how to get there. The experiment could be to figure out how to get there, but not what the destination should be.

Running Multiple Experiments

When getting a new team or after identifying multiple areas that a team could improve in, it's going to be tempting to want to implement multiple changes at once.

Resist the urge.

Remember, an experiment is, by definition, a process change. So the more experiments you run, the more process changes are happening, which in turn puts more stress on the team to remember all the changes.

In addition to all the process changes, you might find that one experiment futzes with another experiment and you may not get clear results.

Let's say we had two experiments going on at the same time: asynchronous stand-ups and spending Tuesday afternoons on independent learning. During your one-on-ones, you get feedback that it's a bit odd not knowing what other team members are working on.

What's driving that? Is it the async stand-ups? Or is it the dedicated learning time? Could it be both? You can't be sure.

Another way to think about this is to think about debugging a program. If something's not working, do you change 5 things at once? No, you're going to change one thing, re-run, and see what happens.

Same thing for experiments.

But Cameron! This team is a hot mess and could stand to improve in so many areas, what should I do then?

Instead of running all the experiments at once, the team should decide which experiment would have the biggest payoff and pursue that one. Remember, you're not playing the short game; you're in it for the long haul, so you'll have time to make those changes.