When Good Design Goes Bad: The Kong Performance Bottleneck with Cassandra

Most of the evil in this world is done by people with good intentions.

T.S. Eliot

I decided to start a series where I talk about how good design choices can result in bad experiences. I do my best to take an objective look at various architectural decisions and engineering problems, and at how to avoid some of the most common pitfalls.

At work, we heavily rely on Kong for our API Gateway. This piece of infrastructure handles all of the ingress and egress for our primary production Kubernetes cluster. It serves multiple functions, and handles about 6,000 to 10,000 requests per second. We use Cassandra for data storage, as it is the recommended data store for Kong at high volume. In two years, our Cassandra cluster has grown significantly: from a single data center with 6 nodes, to 2 data centers with 24 nodes each.

At the time of writing, we are using Kong v2.0.1. Based on the changelog, there have been enhancements since, so your experience may vary.

We encountered an issue where one of our services, once it hit 150 requests per second, would throw Kong into an unstable state. This particular service uses OAuth2 tokens, but the tokens are not generated by Kong. They are generated by another system and saved to Kong by our service via the official OAuth 2.0 plugin’s token migration API. Subsequent requests use the token to access other services protected by Kong.

Saving a token has an average latency of 5 seconds. For a regular user, this is imperceptible (since the process is asynchronous), but on days with heavy traffic, this 5-second latency can stretch to as much as 30 seconds, which is enough to hamper the experience of our users (especially the savvy ones).

Once we hit the 150 requests per second mark on that service, we started seeing a progression of problems:

  1. Our service endpoints that pass through Kong start throwing 404s.
  2. We would see an increase in invalid JSON responses due to a null value.
  3. Finally, we would see timeouts coming from the /oauth2_tokens endpoint. In the Kong logs, we would see connections to our Cassandra nodes timing out.
  4. Multiple service outages (ouch).

And on bad days, it would overload some of our Cassandra nodes, triggering a restart of the node. At that point, we had no choice but to issue a rolling restart of all Kong nodes.

Other symptoms included increasing CPU and memory utilization on the Kong nodes over time, to the point where we threw 16 cores at each node just to avoid maxing out on CPU. Our Cassandra data centers ran on 8-core machines with 64GB of RAM and NVMe drives for maximum performance. Even so, CPU utilization across all nodes sat at 90-95%, with a typical 15-minute load average of 40 per node.

Despite the capacity upgrades to both Kong and Cassandra, we saw little to no incremental improvement. At some point, it got worse. Due to the constant failures of our Cassandra nodes, we tried to improve overall reliability by increasing the replication factor from 3 to 7, and the total number of nodes to 40. With a consistency level of QUORUM, we should have been able to survive 3 replica failures. Instead of better results, we still encountered issues at the same RPS.
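The quorum arithmetic here can be sanity-checked in a few lines. This is just a sketch of Cassandra's consistency formula (majority of replicas), not anything from its implementation:

```python
def quorum(replication_factor: int) -> int:
    # Cassandra's QUORUM consistency requires a majority of replicas
    # to acknowledge a read or write.
    return replication_factor // 2 + 1

rf = 7
needed = quorum(rf)        # 4 replicas must answer
tolerated = rf - needed    # up to 3 replica failures survivable
print(needed, tolerated)   # 4 3
```

So on paper, RF=7 with QUORUM buys you tolerance for 3 down replicas per partition. It just doesn't help when every query fans out to the whole cluster anyway.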

Something was fundamentally wrong. We needed to go deeper. And so we cloned the source code of Kong to see what was going on.

And we found our culprit…

Architecture of Kong in a nutshell. And the source of all our headaches.

As it turns out, Kong uses a data access layer that maps queries to either Cassandra or PostgreSQL in a one-to-one fashion. While the intention is good, this became its fundamental flaw. A Good Design Gone Bad.

To understand why, we need to look deeper.

The very first public version of Kong was built with Cassandra as its data store. Support for PostgreSQL came in version 0.8.0, more than a year after it was first launched. To achieve this, a data access layer was implemented.

From a design standpoint, a data access layer lets an application keep its core logic and its data store logic loosely coupled. This gives the flexibility to use different data storage solutions while retaining a common data model. Most modern web frameworks have this concept baked in. A good example is Hibernate, an ORM framework designed to let you work with different SQL databases using the same data model. Hibernate is used on top of other frameworks, like Spring. Hibernate works because it allows developers to work with objects without caring too much about the underlying SQL, or about the optimizations needed for a specific database like MySQL or PostgreSQL.

Kong’s data access layer, on the surface, follows this same principle. It allows developers to work with the same data model regardless of whether they are using Cassandra or PostgreSQL. While similar in concept to Hibernate, the two are fundamentally different in one respect: the type of data store they manage. Hibernate’s focus is to abstract relational databases only (while accounting for the intricacies of each), while Kong’s is to do a simple 1:1 mapping between a relational and a NoSQL database.

Directly mapping between a SQL and a NoSQL data store is a no-no. The two have different use cases and performance characteristics, regardless of how similar they look.

Looks like a duck. Quacks like a duck. IT’S A BUS!

Cassandra, being a wide-column store, shares some structural similarities with an SQL database. You can take practically any table from a relational database, with all of its indices (minus the foreign keys), and represent it in Cassandra in an almost 1:1 manner. The reverse also applies.

In fact, you could even say that PostgreSQL and Cassandra are distant relatives, due to the similarities in supported data types that you would not normally see in other Relational Databases (e.g. both support JSON and arrays as column types).

Unfortunately, the similarities end here.

Our issues with Kong and Cassandra stem from how the data was modeled, and how the indices are used today.

You see, when you declare an index in a relational database, there is no real difference between indexing a primary key, a regular column, or a unique key. All of them use the indexing algorithm you specify. In Cassandra, an index on a non-primary-key column is called a secondary index. Functionally, it acts much like a relational database index. However, when you query by a secondary index (without specifying the partition key), Cassandra must query every node to find the data. This is not a problem for indexes with low cardinality, but for a table with high cardinality (e.g. the oauth2_tokens table), looking up an access token without the partition key puts unnecessary strain on the entire cluster.
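To see why this hurts at scale, here is a toy model comparing how many nodes a partition-key read touches versus a secondary-index read. The node counts match our cluster, but the replica placement is a simplified stand-in for Cassandra's token ring, not its actual code:

```python
import hashlib

NODES = 40  # total nodes in our expanded cluster
RF = 7      # replication factor after the reliability "fix"

def replicas_for(partition_key: str) -> set:
    """Nodes that own a partition: hash the key onto the ring and take
    RF consecutive nodes (simplified token-ring placement)."""
    start = int(hashlib.md5(partition_key.encode()).hexdigest(), 16) % NODES
    return {(start + i) % NODES for i in range(RF)}

def nodes_touched_by_primary_key_read(token_id: str) -> int:
    # A read by partition key is routed only to that key's replicas.
    return len(replicas_for(token_id))

def nodes_touched_by_secondary_index_read() -> int:
    # A read by secondary index (e.g. WHERE access_token = ...) carries
    # no partition key, so the coordinator fans out to every node.
    return NODES

print(nodes_touched_by_primary_key_read("token-123"))  # 7
print(nodes_touched_by_secondary_index_read())         # 40
```

Every token lookup was effectively a 40-node scatter-gather, which is why adding nodes made nothing better: each new node was one more machine every query had to visit.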

There have been several write-ups about this. I recommend reading through Marcus Cavalcanti’s horror story.

The Root Cause and Fix

In the end, because Kong performs an almost 1:1 query mapping between Cassandra and PostgreSQL, the generated queries are highly performant for a relational database, but ill-suited for Cassandra.

Once we figured this out, we had 3 options:

  1. Denormalize the Kong tables – Cassandra works best with a denormalized schema, and this would let us structure the data so that we avoid the secondary index problem altogether. However, it would require a significant overhaul of the common data structures, which means forking the project.
  2. Write a custom OAuth2 plugin optimized for Cassandra – this would isolate the needed changes to OAuth2 only. However, writing or forking the existing OAuth2 plugin would require us to duplicate core components and data structures, increasing overall complexity and testing effort (and time was not on our side).
  3. Switch to PostgreSQL – since the Cassandra and PostgreSQL schemas are a 99% match (minus a few fields), we could migrate the data with a custom script and have everything fully functional within a few minutes.

I wanted to go with option 1. But the clock was ticking for all of us. Option 3 became the obvious choice.

My team had built a tool for migrating data from one Cassandra cluster to another. We modified it to migrate data to PostgreSQL instead. In just a couple of hours, we migrated the data (except the tokens) and reconfigured Kong to use our PostgreSQL cluster without any fuss.
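The heart of such a script is the per-row mapping. A minimal sketch of that step, with hypothetical column names and a made-up Cassandra-only field to drop (this is not our actual tool, and not Kong's real schema):

```python
def row_to_insert(table: str, row: dict) -> tuple:
    """Map one Cassandra row (as a dict) to a parameterized PostgreSQL
    INSERT. Since the schemas are a near 1:1 match, the mapping is
    mostly a pass-through; Cassandra-only fields get dropped."""
    cassandra_only = {"ttl"}  # hypothetical field with no Postgres column
    cols = [c for c in row if c not in cassandra_only]
    placeholders = ", ".join(["%s"] * len(cols))
    sql = f"INSERT INTO {table} ({', '.join(cols)}) VALUES ({placeholders})"
    return sql, [row[c] for c in cols]
```

Stream rows out of Cassandra, run each through a mapper like this, and batch the resulting statements into PostgreSQL, and the bulk of the migration is done.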

The results speak for themselves.

Kong on Cassandra. Most of the traffic has a latency of > 4 seconds, with a consistent P99 of 10 seconds
Kong on PostgreSQL. Most of the traffic has a latency of 20ms, with the highest P99 at 380ms

Best of all, most of the problems went away. We still see issues at P99, which usually translates to 1 out of 10,000 requests. We have since been able to reduce this further to almost zero, but that is another article for another day.

The move to PostgreSQL also meant we could cut infrastructure costs significantly: from 40 servers down to 2, with far better performance and reliability.

The Lesson

There are two key items to note here.

First, when designing systems, using abstraction layers to hide the complexities of the lower layers can be a good thing, provided the abstraction accounts for those complexities to begin with. Hibernate, for example, deals with the complexities of the underlying database by requiring an appropriate dialect. Building an abstraction for its own sake often results in unforeseen consequences.

Second, when building products, simplicity is king. If you don’t need to do it, don’t. If there is peer or market pressure but it doesn’t make sense, don’t. The case presented here could have been avoided if Kong had stuck to its guns and optimized purely for Cassandra. Yes, running Cassandra is no walk in the park. But with today’s advancements, and a good selection of managed services like Instaclustr, Datastax, or AWS, that concern becomes moot.

The Need for Improved Ad Blocking for A Better Digital Experience

The ad industry thinks their clients are their customers. They think the companies who pay for the production are the ones they are supposed to serve. So the ads they produce make their clients happy…but infuriate the rest of us.

Simon Sinek

Nowadays, we are constantly bombarded with ads everywhere. It got worse when smartphones became popular. To top it off, companies want to track us so they can serve targeted offers (a.k.a. segment-of-one marketing). That is not necessarily bad, but as they say, all it takes is one rotten apple.

305 ads on one page is just plain excessive and irresponsible.

Out of sheer boredom lately, I started playing with an old Raspberry Pi 2 I had lying around. It used to serve as my Time Machine backup, but when the hard drive attached to it got corrupted, it just sat there gathering dust. I saw a friend of mine post stats from his Pi-hole showing how many ads it had blocked, and decided to see if I could implement the same on my home network.

If you’re not aware, a Raspberry Pi is a small computer roughly the size of a credit card. At $35, its main use case is education. Thanks to hardware improvements, it has gotten to the point where it can serve as a server, a desktop replacement (if you’re a casual user), or a retro gaming machine.

After getting Pi-hole running and adding a few more blocklists (thank goodness for Firebog), I turned off my main router’s DHCP so the Pi-hole could take over as both the DHCP server and the DNS server.
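Conceptually, what Pi-hole does at the DNS layer is simple: answer blocked names with an unroutable address, and forward everything else upstream. A minimal sketch, with made-up blocklist entries and a stand-in upstream resolver (Pi-hole itself is far more capable than this):

```python
BLOCKLIST = {"ads.example.com", "tracker.example.net"}  # hypothetical entries

def resolve(domain: str, upstream) -> str:
    """Pi-hole-style DNS sinkholing: blocked names resolve to 0.0.0.0,
    so the ad or tracker request goes nowhere; the rest are forwarded."""
    if domain.lower().rstrip(".") in BLOCKLIST:
        return "0.0.0.0"          # sinkhole the lookup
    return upstream(domain)       # normal lookup via the upstream resolver
```

Because every device on the network gets its DNS from the Pi-hole (courtesy of DHCP), this blocks ads on phones, TVs, and appliances that could never run a browser extension.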

After just a few hours, my ad and tracking footprint had been reduced by more than 25%.

No AdblockPlus? No problem!

Here is what I noticed:

  1. Ads and trackers are in your smart devices too. – My smart TV comes with Netflix and YouTube, and it used to bog down after several hours of constant use of either app, forcing us to hard-reset the TV. I always thought the TV manufacturer did a shitty job. Things changed when the Pi-hole started blocking the ads and trackers from both apps. My TV no longer needs a hard reset, and responsiveness improved.
  2. Old devices got faster. – I have a 3rd-gen iPad running iOS 9. It was horrendously slow after the last upgrade and got tossed aside for the past 4 years. All that changed when the ads and trackers were blocked. I can use Safari again. Firefox is still slow, but much more usable.
  3. You can identify which devices on your network have the most bloatware. – My mother-in-law uses an old Android phone, and it has so much crap on it that we thought the battery was worn out. After implementing Pi-hole on the network, I was able to stop traffic from both bloatware and malware. A nice side effect: the battery life improved.

Companies need to rethink their strategies for implementing ads and tracking. Every time an ad is loaded or a tracker is invoked, it eats into a user’s internet data plan (especially for mobile users) and precious device resources. In extreme cases, it can cause a device to stall, which most people mistake for bad hardware. Overall, this translates to bad user experiences and lost opportunities.

While most people do not have the technical know-how to implement DNS ad blocking in their homes or on their mobile devices, it is just a matter of time before someone turns this into a multi-million-dollar business. VPN providers are already poised to turn ad blocking into a premium service, which results in bandwidth savings for them and their customers. And with the push for higher levels of privacy, DNS ad blocking may become part of the new normal.

The Paradox of Perfection

Perfection is not attainable, but if we chase perfection we can catch excellence.

Vince Lombardi

I’m a goddamn perfectionist. I’ve been like this for as long as I can recall. I always demand the highest quality (translation: it’s never enough) of work from myself and other people, to the point that working with me can be very toxic. At times, I’d rather work by myself than with others because people have a hard time keeping up. It has toned down over the years, but I need to stay conscious of when it gets out of control.

Perfectionism carries a stigma, and I can’t blame people for labeling it badly.

Perfection is a myth. No matter how much you struggle, there will always be something better.

Perfectionists are procrastinators. In writing this next article, I actually wrote about four different topics before settling on this one (and the other four are still drafts).

Finally, perfectionists are hard on themselves. We self-criticize, thinking that everything we do is not good enough.

All these traits create a vicious cycle. It damages us internally due to the constant stress it generates. It damages relationships, because we tend to expect the same from friends and colleagues.

I am here to tell you that being a perfectionist is not as bad as people think. I embraced my perfectionist nature a long time ago, and I do my best to make it work for me. Rather than changing who I am, I supplement it, staying true to who I am as a person. Not everything is a disorder. We just look at things differently.

How does one embrace the paradox of perfectionism? Here are my methods that you can adopt:

Perfection is unattainable. You can only come close to it. CTTO

Accept that you cannot reach perfection. You can only come close to it. – Perfection and reality have an asymptotic relationship. A perfect square cannot be achieved without measuring at the subatomic level. Understand what is achievable with the knowledge and tools that you have, rather than stressing yourself over what you don’t have.

The bigger picture is as important as the details. – As perfectionists, we obsess with the details. T h e spa cing of thi s sent e n ce is p robab ly going to dri ve s ome of y ou nu ts (and my OC nature is trying to compel me to edit the damn thing). Good craftsmanship requires looking at every detail, but it should be cohesive and contribute to the overall goal. If you’re expending a lot of effort for little return, you may need to step back and look at the bigger picture.

Not everyone cares about the same things as you. And that’s OK. – In my first job, when a manual said to weigh 1.000 grams, I made sure I got 1.000 grams. It matters a lot to me, as that is part of my training as a chemist. It will not matter to other people, and that’s OK. It is important to be flexible when your demands are not met (and that’s 99.999% of the time). Doing so allows you to work better with people, and you can even go as far as scaling your life.

Be prepared to iterate. A lot. – Perfection is not possible the first time around. It requires dedication and perseverance. Embrace failure. It’s OK to not get it right the first few times. Strive for incremental improvements. This helps build a growth mindset.

Perfectionism should not be treated like some disorder. Being a perfectionist pushes you to become a better version of yourself every time, as long as you keep your feet on the ground.

I Settled on WordPress Despite Being a PHP Hater

Those who know me personally or have worked with me professionally know I am very opinionated when it comes to tech, and I don’t sugarcoat it. So it may come as a surprise that I launched this site on wordpress.com. Before you go bashing me for being a hypocrite, let me tell you my story about PHP, and why I made this shocking choice.

I have been programming since I was 14 years old. I started with BASIC. By 16 I was doing assembly (and it made me ask why the fuck I was torturing myself). At the age of 23, I joined a marketing company with a loyalty platform as its flagship product. It was my first official job in the IT industry, as a PHP developer.

Yes folks, I was a PHP developer.

In the one year I worked with PHP, I understood why many people love it and stick with it. It just works. You just need a good grasp of the flow, and it is “dead simple” to translate that into a working site. But in reality, it is far from simple. And it will only bring you closer to becoming brain dead.

I will not delve into my hatred for PHP any further, as this topic has been beaten to death and beyond.

Everything wrong with PHP in one photo. Source

So, I hate PHP. I do not trust systems built on PHP. I have rescued numerous compromised PHP-based systems (including sites built on WordPress). So why am I now trusting a site built on PHP, let alone WordPress? I am simply scaling my life.

Building a website isn’t easy, especially if it is exposed to the public. Over the past 10 years, the number of things to consider to ensure that a site is working and secure has exploded. At a high level, these are the things you need to consider:

  1. PHP Configuration and Tuning
  2. Web server tuning
  3. Database tuning
  4. Network topology
  5. Security
  6. Caching

…And the list goes on.

Now, I could have done all of this myself. After all, I have the working experience to deal with WordPress. Better yet, I could have revived my old blogging app. But I didn’t.

You see, the most valuable resource that we have is time. I could either invest time to build my new site (which is totally doable even with my crazy schedule), or go with an established platform, and have more time for my family. At this point in my life, I went with the latter.

From a tech perspective, I am not a PHP guy (despite my background). I don’t even want to touch it with a 10-foot pole. And most of my time would be devoted to security work, which is a never-ending race. I would rather let the real experts in the platform deal with this than do it myself.

From a business perspective, how much is your time worth? I could get a VPS from Digital Ocean at $10/month, spend 40 hours on the initial build of the site, and another 20 hours on security before writing my first post. That, or pay a subscription to wordpress.com for roughly the same price, spend the first 20 hours on initial setup, and then start publishing.

At the end of the day it all boils down to what brings value to you. Both approaches have their pros and cons, and I have used both in various situations. For this site, I chose to swallow my pride and value my time, rather than be the perfectionist tech guy who spends hundreds of hours tweaking.

A Lesson in Scaling Your Life

“…there’s no I in ‘team’. There is a me, though, if you jumble it up.”

Dr. Gregory House

For a huge part of my career, I have been put in situations where I needed to be the one guy who solves everyone’s problems. Many of my projects in the past were rescue projects. If you are a developer, you know that rescue projects are the shittiest kind of project you can get into. You have to deal with other people’s broken code. You are given a tight (and sometimes impossible) deadline. And you’re expected to be the expert.

While it may be the shittiest job any developer could get, it is also the most rewarding. Thanks to those experiences, I learned how things work at a level of detail most wouldn’t dare go to, in the shortest amount of time. Have you ever constructed an HTTP request by hand? Have you ever hit a bug caused by your favorite framework (I’m looking at you, Spring and Grails)? How about rescuing a malware-infested Linux server before your client goes out of business in 3 days?

Eventually, I became a one-man-army. I can start a project, design it, build the infrastructure, create the UX, write the software (complete with unit tests), and deploy it into production with little to no bugs. Without any help. Just me, my wits, perseverance, and a crap ton of coffee.

I was very proud of my achievements and what I can do. There was no Team. There was Me.

There is a big downside to this. As I moved into bigger roles, the amount of time I spent as a tech guy kept shrinking. I had to deal with the realities of running a business. From commercials to managing people to dealing with various stakeholders, my coding time was replaced by meetings, more meetings, and even MORE MEETINGS. I used to hate Monday. Now, I hate Friday (but that’s a story for another day).

The things that made me successful in this career became my biggest weakness.

The pressure to get things done at work. The responsibilities of being a husband and a father. I reached the point where I needed to scale myself. And thankfully, with experience, trial, and error, I’ve been able to do so. Little by little.

So, how does one scale? With help from people. Because of my one-man-army nature, moving from being the one providing help to being the one asking for help was a big transition. If you are not the type to ask for help, you will be surprised at how generous people can be. And by asking for the right kind of help, you create a multiplier effect: something that could have taken me months to get done can be built in a short amount of time.

Better yet, the effect compounds. When you ask people for help, they too will ask for help from others. This can turn a ripple in a pond into a tidal wave.

Thanks to this ripple effect, I have, in a way, been able to go back to my roots and do what I do best: dealing with tech. I still need to attend meetings every now and then. I still have administrative tasks to do. The difference: I can count on my team to do the right thing.

Hello World

Every programmer starts their journey by writing a program that prints “Hello World” on the screen.

While it looks so simple, it is considered a rite of passage. Even veteran programmers, when they start learning a new language, always start with a simple “Hello World” program.

Blogging isn’t new to me. I have done my fair share of writing in the past. And now, I’ve decided to go back to the journey again.

Why restart writing? Share the wealth. Not money, but knowledge. And have a few laughs while we’re at it.

So… come on down, stop on by.

Have a coffee and light.

To another programmer’s life.

(I’m pretty sure you sang it in your head)