
The Principle of Least Privilege

Explore the principle of least privilege and its critical role in securing APIs within distributed systems. Understand best practices for minimizing privileges to reduce risks, including avoiding root-level access, isolating applications, securing communication, and managing container vulnerabilities. This lesson equips you to design more secure software that limits damage from compromised credentials and malicious input.

Protecting APIs

The final entry in the Top 10 is also a newcomer to the list. The rise of REST and rich clients elevated APIs to a primary architectural concern. For some companies, the API is their entire product. It’s essential to make sure that APIs are not misused.

Security scanners have been slow to tackle APIs. In part, this is because there’s no standard metadata description about how an API should work. That makes it hard for a testing tool to glean any information about it. After all, if we can’t tell how it should work, how do we know when it’s broken?

To make things even harder, APIs are meant to be used by programs. Well, attack tools are also programs. If an attack tool presents the right credentials and access tokens, it’s indistinguishable from a legitimate user.

There are several keys to defense. The first is a kind of bulkheading (see Bulkheads). If one customer’s credentials are stolen, that’s bad. If the attacker can use those to get other customers’ data, that’s catastrophic. APIs must ensure that malicious requests cannot access data the original user would not be able to see. That sounds easy, but it’s trickier than we might think. For instance, our API absolutely cannot use hyperlinks as a security measure. In other words, our API may generate a link to a resource as a way to tell that resource that it has access. But we should assume that the client is going to click that link. It may issue 10,000 requests to figure out our URL templating pattern and then generate requests for every possible user ID. The upshot is that the API has to authorize the link on the way out and then reauthorize the request that comes back in.
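The "authorize on the way out, reauthorize on the way back in" idea can be sketched as a per-request ownership check. This is a minimal illustration with hypothetical names (`get_report`, an in-memory `REPORTS` store); the point is that the resource ID arriving in a request is never trusted on its own — every lookup is re-checked against the authenticated caller.

```python
class Forbidden(Exception):
    """Raised when a caller requests data outside their own account."""

# Hypothetical data store standing in for a real database.
REPORTS = {
    "r-100": {"owner": "alice", "body": "Q1 numbers"},
    "r-200": {"owner": "bob", "body": "Q2 numbers"},
}

def get_report(report_id: str, authenticated_user: str) -> dict:
    """Return a report only if the authenticated caller owns it."""
    report = REPORTS.get(report_id)
    if report is None or report["owner"] != authenticated_user:
        # Same failure for "missing" and "not yours," so an attacker
        # enumerating IDs can't learn which ones exist.
        raise Forbidden(report_id)
    return report
```

Even if a client scripts through every possible ID, each request lands back in this check against the authenticated principal, so stolen links confer no extra access.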

Second, your API should use the most secure means available to communicate. For public-facing APIs this means TLS. Be sure to configure it to reject protocol downgrades. Also keep the root certificate authority (CA) files up-to-date. Bad actors compromise certificates way more often than you might think. For business-to-business APIs, we might want to use bidirectional certificates so each end verifies the other.

Third, whatever data parser we use, be it JSON, YAML, XML, Transit, EDN, Avro, Protobufs, or Morse code, make sure the parser is hardened against malicious input. Use a generative testing library to feed it tons and tons of bogus input to make sure it rejects the input or fails in a safe way.
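The TLS settings above can be sketched with Python's standard `ssl` module. The calls are real standard-library APIs; the commented cert and key paths are placeholders for whatever your deployment actually uses.

```python
import ssl

# A hardened client-side TLS context: verify the server, and refuse
# any protocol downgrade below TLS 1.2.
ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.check_hostname = True
ctx.verify_mode = ssl.CERT_REQUIRED

# For business-to-business APIs, present our own certificate too, so
# each end verifies the other (mutual TLS). Paths are placeholders.
# ctx.load_cert_chain(certfile="client.pem", keyfile="client.key")
```

`create_default_context` already loads the system CA bundle, which is one more reason to keep those CA files patched and current.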

Fuzz-testing APIs is especially important because, by their nature, they respond as quickly as possible to as many requests as possible. That makes them savory targets for automated crackers.
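A dedicated generative-testing library (Hypothesis is one well-known example in Python) does this far better, but a minimal hand-rolled fuzz loop shows the contract we're after: the parser must either parse the input or raise a well-defined error type — never crash, hang, or fail in some undefined way.

```python
import json
import random
import string

def fuzz_json_parser(rounds: int = 1000, seed: int = 42) -> int:
    """Feed random strings to json.loads; count safe rejections.

    Any exception other than json.JSONDecodeError propagates and
    fails the run -- that's the point of the exercise.
    """
    rng = random.Random(seed)
    rejected = 0
    for _ in range(rounds):
        blob = "".join(
            rng.choice(string.printable)
            for _ in range(rng.randint(0, 40))
        )
        try:
            json.loads(blob)
        except json.JSONDecodeError:
            rejected += 1  # safe failure: a defined exception type
    return rejected
```

Swap in your own parser for `json.loads` and let this run with far more rounds (and smarter input generation) than the sketch shows.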

The principle of least privilege

The principle of least privilege mandates that a process should have the lowest level of privilege needed to accomplish its task. This never includes running as root (UNIX/Linux) or administrator (Windows). Anything application services need to do, they should do as nonadministrative users. I’ve seen Windows servers left logged in as an administrator for weeks at a time, with remote desktop access, because some ancient piece of vendor software required it. This particular package also was not able to run as a Windows service, so it was essentially just a Windows desktop application left running for a long time. That is not production ready!
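One cheap way to enforce this rule is to have the service refuse to start as root at all. This is a small sketch for a POSIX system; the function name and the idea of an explicit startup guard are illustrative, not a standard API.

```python
import os

def assert_not_root(euid=None):
    """Abort startup if the effective UID is root (0).

    Pass euid explicitly for testing; by default the real
    effective UID of the process is checked.
    """
    euid = os.geteuid() if euid is None else euid
    if euid == 0:
        raise SystemExit("refusing to run as root; use a service account")
```

Calling `assert_not_root()` first thing in `main()` turns an accidental `sudo ./service` into an immediate, obvious failure instead of a silent privilege problem.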

Root-level vulnerability

Software that runs as root is automatically a target. Any vulnerability in root-level software automatically becomes a critical issue. Once an attacker has cracked the shell to get root access, the only way to be sure the server is safe is to reformat and reinstall.

To further contain vulnerabilities, each major application should have its own user. The Apache user shouldn't have any access to the Postgres user's files, for example. About the only thing a UNIX application might need root privilege for is opening a socket on a port below 1024. Web servers often want to open port 80 by default. But a web server sitting behind a load balancer (see Load Balancing) can use any port.
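When a process genuinely must bind a low port, the classic pattern is to bind while still root, then shed root before serving any traffic. A sketch, assuming a POSIX host; the uid/gid would come from a dedicated service account such as `www`:

```python
import os
import socket

def bind_and_drop(port: int, uid: int, gid: int) -> socket.socket:
    """Bind a listening socket, then drop root privileges."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("0.0.0.0", port))  # needs root only if port < 1024
    sock.listen(128)
    if os.geteuid() == 0:
        os.setgid(gid)  # drop the group first, then the user,
        os.setuid(uid)  # or the setgid call itself needs root
    return sock
```

Of course, a server behind a load balancer can skip the whole dance by listening on an unprivileged port in the first place.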

Containers and least privilege

Containers provide a nice degree of isolation from each other. Instead of creating multiple application-specific users on the host operating system, we can package each application into its own container. Then the host kernel will keep the containerized applications out of each other's filesystems. That's helpful for reducing the containers' level of privilege. Be careful, though. People often start with a container image that includes most of an operating system. Some containerized applications run a whole init system inside the container, allowing multiple shells and processes. At that point, the container has its own fairly large attack surface. It must be secured. Sadly, patch management tools don't know how to deal with containers right now. As a result, a containerized application may still have operating system vulnerabilities that IT patched days or weeks ago.
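Both problems — a bloated base image and root inside the container — are addressed in the image definition itself. A sketch of a Dockerfile, assuming a hypothetical Python service (the file names and package versions are placeholders):

```dockerfile
# Start from a slim base rather than a full OS image.
FROM python:3.12-slim

# Create a dedicated, non-login service account inside the image.
RUN useradd --create-home --shell /usr/sbin/nologin appuser

WORKDIR /home/appuser/app
COPY --chown=appuser:appuser . .
RUN pip install --no-cache-dir -r requirements.txt

# Switch away from root before the process starts.
USER appuser
CMD ["python", "service.py"]
```

The `USER` directive means that even a compromised process inside the container isn't root, and the slim base leaves fewer packages for patch-management gaps to matter.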

Container image as perishable goods

The solution is to treat container images as perishable goods. We need an automated build process that creates new images from an upstream base and our local application code. Ideally this comes from our continuous integration pipeline. Be sure to configure timed builds for any application that isn’t still under active development, though.
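A timed build is just a second trigger on the same pipeline. A sketch, assuming GitHub Actions (the workflow name, image tag, and schedule are placeholders — the essential piece is the cron trigger, which rebuilds the image even when no code has changed, so upstream OS patches get picked up):

```yaml
name: rebuild-image
on:
  push:
    branches: [main]
  schedule:
    - cron: "0 3 * * 1"   # also rebuild every Monday at 03:00 UTC
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t myapp:latest .
```

With this in place, a dormant application's image still gets refreshed weekly from its patched upstream base instead of quietly going stale.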