Testing authorisation scenarios in ASP.NET Core Web APIs

Dylan Morley
Published in ASOS Tech Blog
6 min read · Nov 9, 2018


A common strategy for API authentication and authorisation is to use JWT bearer tokens in the Authorization header of requests. The APIs being designed will often have logic that depends on the claims in the JWT payload to make authorisation decisions. Because of this, during the integration testing phase, your code takes a dependency on a token issuer at both ends of the communication.

  • The test code needs to acquire a token from an issuer to send in the request header
  • The API needs to retrieve metadata from the token issuer to determine the signing keys it needs to validate the token

This can lead to an approach that involves lots of network calls to obtain the required information, slowing down the test process and increasing the risk of failures.

In order to maintain a fast feedback cycle, while allowing us to test as much of our API surface as possible, we can use TestHost along with some custom initialisation that will allow us to isolate these dependencies and maintain a secured API.

You can download the entire source for this article from GitHub.

Context

For the context of this article, we’re developing an ASP.NET Core API that has been secured using Microsoft.AspNetCore.Authentication.JwtBearer. We’ve decided to use Azure Active Directory as the token issuer and have locked down the API using the OIDC configuration from https://login.microsoftonline.com/common/.well-known/openid-configuration

One of the security requirements we need to implement is the ability to restrict access to particular resources to the users that the claims in their token define as the owners of the resource. For example, for a resource in the form https://the-api.com/customer/{customerId}/orders, I must have an identifier claim in my token that matches {customerId} in order to access it. To achieve this, we can use the policy-based authorisation feature of ASP.NET Core.
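To give a flavour of what that looks like, here is a minimal sketch of the requirement and handler described in the next section, assuming ASP.NET Core 2.x MVC; the exact claim handling in the sample project may differ:

using System;
using System.Security.Claims;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc.Filters;

public class SubjectMustMatchRouteParameterRequirement : IAuthorizationRequirement
{
}

public class SubjectMustMatchRouteParameterHandler
    : AuthorizationHandler<SubjectMustMatchRouteParameterRequirement>
{
    protected override Task HandleRequirementAsync(
        AuthorizationHandlerContext context,
        SubjectMustMatchRouteParameterRequirement requirement)
    {
        // The subject may surface as "sub" or as the mapped NameIdentifier claim,
        // depending on whether the default inbound claim type map has been cleared
        var subject = context.User.FindFirst("sub")?.Value
                      ?? context.User.FindFirst(ClaimTypes.NameIdentifier)?.Value;

        // In MVC, the resource is the filter context, which exposes the route values
        if (context.Resource is AuthorizationFilterContext mvcContext
            && mvcContext.RouteData.Values.TryGetValue("customerId", out var customerId)
            && string.Equals(subject, customerId?.ToString(), StringComparison.Ordinal))
        {
            context.Succeed(requirement);
        }

        return Task.CompletedTask;
    }
}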

As the engineer for the API, I want to test this scenario and I’ve chosen to write tests using BDDfy and NUnit — I now need to configure TestHost to isolate the dependencies so I can test without making any network calls.

The production code

The API is secured using the standard Microsoft libraries and a call to AddJwtBearer. We’ve created a custom policy named SubjectMustMatchRouteParameterRequirement that states that the sub claim in the JWT must match the customerId route parameter for a resource, and have included that in the initialisation.
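As a rough sketch of what that ConfigureServices wiring might look like (the policy name, audience value and MVC registration are assumptions rather than the sample project’s exact code):

using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.AspNetCore.Authorization;
using Microsoft.Extensions.DependencyInjection;

public void ConfigureServices(IServiceCollection services)
{
    services
        .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
        .AddJwtBearer(options =>
        {
            // Discovery document for the Azure AD common endpoint
            options.MetadataAddress =
                "https://login.microsoftonline.com/common/.well-known/openid-configuration";
            options.Audience = "https://the-api.com"; // hypothetical audience
        });

    services.AddAuthorization(options =>
    {
        // Hypothetical policy name wrapping the custom requirement
        options.AddPolicy("SubjectMustMatchRouteParameter", policy =>
            policy.Requirements.Add(new SubjectMustMatchRouteParameterRequirement()));
    });

    services.AddSingleton<IAuthorizationHandler, SubjectMustMatchRouteParameterHandler>();

    services.AddMvc();
}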

The controller we want to test uses an object that makes an HTTP call to an external dependency; it gets an order and returns it in the response. In this example, we also echo out the values of the ClaimsPrincipal from the User object.
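A sketch of the controller shape might look like the following; the route, policy name and the IOrderRetriever abstraction (standing in for the OrderRetriever dependency) are illustrative assumptions:

using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

[Authorize(Policy = "SubjectMustMatchRouteParameter")]
[Route("customer/{customerId}/orders")]
public class OrdersController : Controller
{
    private readonly IOrderRetriever _orderRetriever;

    public OrdersController(IOrderRetriever orderRetriever)
    {
        _orderRetriever = orderRetriever;
    }

    [HttpGet]
    public async Task<IActionResult> Get(string customerId)
    {
        // The retriever makes the outbound HTTP call to the external dependency
        var order = await _orderRetriever.GetOrderAsync(customerId);

        return Ok(new
        {
            Order = order,
            // Echo the claims so tests can assert on the authenticated principal
            Claims = User.Claims.Select(c => new { c.Type, c.Value })
        });
    }
}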

The test code

To enable in memory testing, we’re using the TestServer object from Microsoft.AspNetCore.TestHost, which we want to initialise with some customisations when our tests run. We want to reuse as much of the Startup class in the production code as possible, but override settings to allow us to mock out actual API calls and perform tests in memory.

For authentication using JWT, because a value has been provided for MetadataAddress as part of the call to AddJwtBearer, the authentication package is going to make two calls:

  1. Retrieve the OIDC configuration from the endpoint specified
  2. Parse the result of 1) and follow the link defined for jwks_uri to retrieve the public keys from the endpoint.

The key information returned from the second call is then used as part of the TokenValidationParameters during the Authentication process. We want to override this behaviour and have a static, in-memory representation of these responses.

In our test project, we’ll therefore call UseStartup with the production Startup class, but use ConfigureTestServices and PostConfigure to override a few options (a sketch of this setup follows the list below).

  • Using ConfigureTestServices, we provide a mocked version of the OrderRetriever class to remove our dependency on a real HTTP call
  • Using PostConfigure, we amend the settings for JwtBearerOptions to enable in memory testing
  • For TokenValidationParameters, we ignore signature validation errors and just return a token in the SignatureValidator delegate
  • For BackchannelHttpHandler, we set a custom MessageHandler named MockBackchannel
  • For MetadataAddress, we set it to inmemory.microsoft.com, to make it clear this is not making an external call
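Putting those options together, the test host initialisation might look roughly like this. This is a sketch only: StubOrderRetriever, the metadata path and the surrounding helper method are assumptions rather than the sample project’s exact code.

using System.IdentityModel.Tokens.Jwt;
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.TestHost;
using Microsoft.Extensions.DependencyInjection;

private static TestServer CreateTestServer()
{
    var builder = new WebHostBuilder()
        .UseStartup<Startup>()
        .ConfigureTestServices(services =>
        {
            // Swap the real OrderRetriever for a stub so no outbound HTTP call is made
            services.AddSingleton<IOrderRetriever, StubOrderRetriever>();

            services.PostConfigure<JwtBearerOptions>(JwtBearerDefaults.AuthenticationScheme, options =>
            {
                // Ignore signature validation errors and just return a readable token
                options.TokenValidationParameters.SignatureValidator =
                    (token, parameters) => new JwtSecurityToken(token);

                // Serve the OIDC metadata and signing keys from embedded resources
                options.BackchannelHttpHandler = new MockBackchannel();

                // A non-resolvable address makes it obvious no external call is made
                options.MetadataAddress =
                    "https://inmemory.microsoft.com/common/.well-known/openid-configuration";
            });
        });

    return new TestServer(builder);
}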

As MetadataAddress is set to a non-existent endpoint, we need to implement the correct responses from MockBackchannel. This is a pretty simple process as the real endpoints are publicly available — we can download the JSON from the endpoints and store them as files in the test project, which can then be set as embedded resources and the contents returned from calls to MockBackchannel.

NB: For consistency, I’ve amended the downloaded OIDC config and set all the endpoints to follow the inmemory.microsoft.com pattern.

As the requests are made by the Authentication package, they call into MockBackchannel, where we can inspect the URI and return the relevant content.
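A sketch of such a handler is shown below; the embedded resource names and the /discovery/keys path are assumptions based on the amended configuration:

using System;
using System.IO;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

public class MockBackchannel : HttpMessageHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // The OIDC discovery document, amended to use inmemory.microsoft.com endpoints
        if (request.RequestUri.AbsolutePath.EndsWith("/.well-known/openid-configuration"))
            return Task.FromResult(EmbeddedJsonResponse("openid-configuration.json"));

        // The jwks_uri defined in the discovery document
        if (request.RequestUri.AbsolutePath.EndsWith("/discovery/keys"))
            return Task.FromResult(EmbeddedJsonResponse("keys.json"));

        throw new NotSupportedException($"Unexpected back-channel request: {request.RequestUri}");
    }

    private static HttpResponseMessage EmbeddedJsonResponse(string resourceName)
    {
        var assembly = typeof(MockBackchannel).Assembly;
        var fullName = assembly.GetManifestResourceNames().Single(n => n.EndsWith(resourceName));

        using (var stream = assembly.GetManifestResourceStream(fullName))
        using (var reader = new StreamReader(stream))
        {
            return new HttpResponseMessage(HttpStatusCode.OK)
            {
                Content = new StringContent(reader.ReadToEnd(), Encoding.UTF8, "application/json")
            };
        }
    }
}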

The API can now retrieve OIDC configuration from a static set of files and makes no outbound calls, so we can test the API in a known good state.

Test Tokens

To communicate with secured endpoints, the test code needs to generate a token with the correct set of claims for the use-case. You’ll often find yourself needing to test a number of scenarios based on slightly different sets of claims, so we want the test project to be able to easily specify the token payload, allowing easy onboarding of new test cases as the requirements change.

A class using the builder pattern exists for this purpose (imaginatively named BearerTokenBuilder). Any of the tests in the project should be able to use it, so access is provided via a base class that our BDDfy tests inherit from. At initialisation time, the audience, issuer and certificate are set on the builder, so the tests only need to define the claims they require.

Since signature validation of the JWT has been disabled in the PostConfigure code of our test project, it doesn’t matter what is used for the signing certificate in BearerTokenBuilder. The test project uses a self-signed certificate, another project-embedded resource, which demonstrates loading and using certificates in test projects.
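A builder of this shape could look roughly like the following; the constructor arguments, defaults and algorithm choice are assumptions and the real class may differ:

using System;
using System.Collections.Generic;
using System.IdentityModel.Tokens.Jwt;
using System.Security.Claims;
using System.Security.Cryptography.X509Certificates;
using Microsoft.IdentityModel.Tokens;

public class BearerTokenBuilder
{
    private readonly string _issuer;
    private readonly string _audience;
    private readonly X509Certificate2 _signingCertificate;
    private readonly List<Claim> _claims = new List<Claim>();

    public BearerTokenBuilder(string issuer, string audience, X509Certificate2 signingCertificate)
    {
        _issuer = issuer;
        _audience = audience;
        _signingCertificate = signingCertificate;
    }

    public BearerTokenBuilder ForSubject(string subject) => WithClaim("sub", subject);

    public BearerTokenBuilder WithClaim(string type, string value)
    {
        _claims.Add(new Claim(type, value));
        return this;
    }

    public string BuildToken()
    {
        // Signature validation is disabled in the test host, so any certificate will do
        var credentials = new SigningCredentials(
            new X509SecurityKey(_signingCertificate), SecurityAlgorithms.RsaSha256);

        var token = new JwtSecurityToken(
            issuer: _issuer,
            audience: _audience,
            claims: _claims,
            notBefore: DateTime.UtcNow,
            expires: DateTime.UtcNow.AddHours(1),
            signingCredentials: credentials);

        return new JwtSecurityTokenHandler().WriteToken(token);
    }
}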

This gives us a simple means to generate a token with any claims we like, e.g.

var token = ComponentContext
    .TokenBuilder
    .ForSubject(customerId)
    .WithClaim("groups", "group1")
    .WithClaim("groups", "group2")
    .BuildToken();

BDDfy

Let’s tie it all together by creating a test of an endpoint that is secured using the [Authorize] attribute. This attribute will ensure that the token has passed the requirements defined in TokenValidationParameters and that the ClaimsPrincipal is in an authenticated state.
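For illustration, such a test might read something like this. The route, the ComponentTestBase base class, the Client property and the step names are hypothetical; BDDfy discovers the Given/When/Then steps from the method names:

using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using NUnit.Framework;
using TestStack.BDDfy;

[TestFixture]
public class WhenCallingASecuredEndpointWithAValidToken : ComponentTestBase
{
    private HttpResponseMessage _response;

    [Test]
    public void Execute() => this.BDDfy();

    private void GivenAValidBearerToken()
    {
        var token = ComponentContext.TokenBuilder
            .ForSubject("123456")
            .BuildToken();

        // Attach the generated token to the TestServer client
        ComponentContext.Client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", token);
    }

    private async Task WhenTheSecuredEndpointIsCalled()
    {
        _response = await ComponentContext.Client.GetAsync("/demo/secured");
    }

    private void ThenAnOkResultIsReturned()
    {
        Assert.That(_response.StatusCode, Is.EqualTo(HttpStatusCode.OK));
    }
}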

This gives us a simple, easy-to-understand Given/When/Then (GWT) test with easily maintained steps. To test the custom logic we’ve defined in SubjectMustMatchRouteParameterHandler, we follow a similar pattern. For this, we’re going to write two tests to verify that:

  1. A customer with a subject claim 123456 can access a resource in the form http://the-api-host/demo/route-based/123456
  2. A customer with a subject claim 12345 cannot access a resource in the form http://the-api-host/demo/route-based/654321 and a Forbidden result is returned

In the token building method, you can see that you can configure whatever combination of claims you need to fulfil your test cases; here, we’re setting the subject claim to the parameter passed into the method.
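A sketch of what that step might look like (the method and property names are illustrative, following the same pattern as the test above):

private void GivenABearerTokenForSubject(string subject)
{
    var token = ComponentContext
        .TokenBuilder
        .ForSubject(subject)
        .BuildToken();

    ComponentContext.Client.DefaultRequestHeaders.Authorization =
        new AuthenticationHeaderValue("Bearer", token);
}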

A bit more code, but a very similar pattern that would extend as more test cases are required. The TokenBuilder can create a payload to verify any of the test cases that depend on claims-based authorisation.

Summary

By using TestHost, isolating all dependencies and having a consistent way of building tokens, your engineering team will be able to build a set of tests that can cover the key functionality of APIs, allowing you to build new endpoints quickly and giving you the confidence to refactor. These can be executed locally from within the IDE or the command line using dotnet test. Using embedded resources where appropriate, your test code is not dependent on any particular machine resources and is therefore portable, giving predictable results wherever it executes.

Correct organisation of the code would allow you to reuse many of the cases in later stages, executing the same HTTP requests against a deployed component as part of integration testing, promoting reuse of code and keeping maintenance to a minimum. You’d just need a way to point the tests at the deployed host rather than the in-memory TestServer, and to acquire real tokens from the issuer.

Your test suite has everything it needs to run and can quickly execute as part of pull request validation, ensuring a good level of the API surface is tested before any code is merged into master and before any deployments have taken place.

Overall, this reduces the feedback loop and should allow you to go faster!

About Me

I’m Dylan Morley, one of the Principal Software Engineers with ASOS. I primarily work on the back-end commerce APIs that enable our shopping experience.

