A junior, a mid-level and a senior dev walk into a bar and order an ALTER TABLE

A story about the enormously destructive power of this thing

Photo by Dan Gold on Unsplash

Gather ’round and let me tell you about that time my teammates and I took down one of the platform’s most critical services. For the sake of narrative, I’ll call my coworkers S (senior) and M (mid-level), but I’ve changed the dates and some of the technical details in an effort to keep things as anonymous as possible.

T’was a lovely Thursday afternoon at the end of July 2016. The weather was milder than it had been in the days before, and my teammates and I had just moved into a new space in the office. Life was mostly good. The week before had passed without the presence of our most senior dev, S, but M and I had managed to complete some tasks on our own.

Those changes included some pretty heavy stuff (new database fields, new endpoints), so we decided to wait for S’s review before merging them.

Database changes and migration scripts

In the meantime, another dev reviewed the changes and mentioned that the operations in our migration scripts could be a lot of work for the database server and might keep it locked for a while. So he suggested some alternatives.

If you don’t know what a database migration is, it’s a series of SQL scripts that, when executed sequentially, create the whole database schema from scratch (tables, columns, default values, constraints, etc.). Usually they run on every deployment, but only if there’s something new to apply, that is, a new migration file.
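To make that concrete, a migration file is usually just a small, numbered SQL script. Something like this entirely made-up example (the table and column names are invented for illustration, not the ones from the story):

-- 0042_add_status_to_orders.sql
ALTER TABLE orders ADD COLUMN status VARCHAR(20) NOT NULL DEFAULT 'pending';

The migration tool keeps track of which files have already been applied, so on each deployment it only runs the new ones.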

We didn’t want to run our change as a migration, because then it would be executed by our continuous integration system, completely out of our control. We had to run it manually. So first we sat down with a replica of the production database and started playing around with queries, trying to find the most efficient alternative.

This was a fun stretch, the three of us trying to guess what would happen when we ran each line of the script separately. Would it take long? Or run instantly? And why? Where should we set the default value to keep new entries from breaking things?
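The usual safer pattern, sketched here with invented names and assuming a PostgreSQL-style engine of that era (where adding a column with a default and NOT NULL in a single ALTER rewrites the whole table under a lock), goes something like this:

-- 1. Add the column as nullable with no default: a quick metadata-only change.
ALTER TABLE orders ADD COLUMN status VARCHAR(20);

-- 2. Set the default so new rows get a value from now on.
ALTER TABLE orders ALTER COLUMN status SET DEFAULT 'pending';

-- 3. Backfill existing rows in small batches instead of one huge, locking UPDATE.
UPDATE orders SET status = 'pending' WHERE status IS NULL AND id BETWEEN 1 AND 100000;
-- ...repeat for the remaining id ranges...

-- 4. Only once everything is filled in, enforce NOT NULL.
ALTER TABLE orders ALTER COLUMN status SET NOT NULL;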

Showtime!

It was 5 PM and we were finally happy with our query. It did everything in the right order and didn’t lock anything for very long. We sent a ticket to the infrastructure team to run it directly on our servers, since obviously we didn’t have credentials for that. You’ll see exactly why in just a moment.

Infra was really diligent about this. They’re a busy bunch, so it’s okay if they take a while to get to things. But not this time: this time the ticket was closed before I could blink twice.

Everything seemed OK. It was almost the end of the day, so we split off to our own desks and finished up some tasks.

A few minutes later, a member of another team came by and asked me if we were changing something on that feature, because it was throwing errors. It hit me like an ice bucket challenge:

Oh no, I mean, yes… we f***ed it up.

Can you guess what happened? I know it’s harder to spot omissions than mistakes, but I think you might be able to figure out what we needed to do (and didn’t) without knowing the service, the feature, or the query.

If you do, scroll to the bottom and leave a comment; I’ll trust you’re not peeping below this photo.

Photo on Unsplash

Alright, if you have no idea what could have happened, that’s OK too. I feel I should have known, because this had happened locally during development like a million times. And I had started a very similar feature of my own that very day and had been dealing with the same error.

Ready for it?
