Wednesday, February 26, 2014

API authentication considerations and best practices

I have been answering a few security questions on Stack Overflow and going through some APIs on programmableweb.com - and it keeps amazing me how often people get HTTP authorization wrong.

The typical questions I have noticed on SO are something like "I want to secure my API, have control of clients and require users to log in - but I don't want to use OAuth". Duh? Why not? Perhaps because OAuth is perceived as being too difficult to work with? Well, OAuth2 is definitely not difficult (OAuth1 is another story though).

But let's take a look at some of the existing practices:

API keys in URLs: one example is the Finnish National Gallery API (which I somehow stumbled upon). Here you are required to obtain an API key and put it into the URL in every request made to the API:

  http://kokoelmat.fng.fi/api/v2?apikey=********&q=A+III+2172


API keys in custom headers: one example is from DocuSign. They require you to encode your (developer) user name, password and API key in a JSON object and then send that in a custom header X-DocuSign-Authentication:

  GET /some-url
  X-DocuSign-Authentication: { "Username": "...", "Password": "...", "IntegratorKey": "..." }
  ... more ...


Signed URL/body parameters: one example is last.fm, which requires you to use your API key to obtain a session token and later use that token to sign requests, either using URL parameters or in the body of the HTTP request.

HTTP Basic authentication: GitHub supports authentication via the standard HTTP basic authentication mechanism where you supply your username and password Base64-encoded in the "Authorization" header.

OAuth1 and OAuth2: PhotoBucket uses OAuth1 and Basecamp uses OAuth2 (in addition to HTTP basic authentication).

I set out expecting to find many more strange authorization schemes, but it turned out that at least most of the big players use OAuth1 or OAuth2. That's great! Now we just need to get the word out to all the "dark matter developers" out there (as Scott Hanselman calls them).

Things to be aware of


1) API keys in URLs are very easily exposed to third parties by accident. If you copy the URL and mail it to somebody, you end up sending your private key along with it. For the same reason API keys may end up in log files here and there. Not good!

2) API keys in URLs change your resource URL such that it depends on who is using it. Instead of everybody referring to /some-items/1234 it will be /some-items/1234?key=abc for one user and /some-items/1234?key=qwe for another - and these two are completely different URLs even though they "only" differ in the query parameters. It is semantically the same as encoding the key in a path segment, as for instance /api/abc/some-items/1234 and /api/qwe/some-items/1234, which I don't expect anyone to think of as a good idea - right?

3) API keys in headers are better than API keys in URLs since they keep the URL stable for all users and are not exposed when sending links to others. The problem, though, is that you are still sending your secret credentials (the API key) over the wire, where they may be wiretapped by third parties. Another problem is that a client may accidentally connect to the wrong server (by misconfiguration or otherwise) and expose its credentials there.

4) Signed requests (done right!) should be preferred over sending the API key directly. The signature should not be part of the URL for the reasons stated above - and the same goes for sending the signature in the body, since different users would then see different bodies, which is unnecessary when you have a standard HTTP header for the purpose.

5) Proprietary methods for signing requests are prone to design errors, as it is easy to get the signature mechanism wrong - and thus accidentally make the signature technique easy to circumvent.

6) Proprietary methods for signing requests require client developers to understand yet another signature mechanism. They also make it less likely that existing client libraries can handle the crypto stuff. Both of these issues make client adoption of your API less likely. And you know what, dear server developer? Your client developers will call for support and it will end up on YOUR desk, reducing your ability to focus on coding the next great thing! So stick to well-documented standards - it will be less annoying for everybody, including yourself.
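To make "signing requests" concrete: such schemes typically compute an HMAC over some canonical representation of the request and send the result in a header. The sketch below is purely illustrative - the secret, canonical string and header name are all made up - and, as argued above, you should prefer a documented standard over rolling your own:

```python
import hashlib
import hmac

# Hypothetical shared secret and canonical request string - illustration only
secret = b"my-api-secret"
canonical = "GET\n/some-items/1234\n"

# Sign the canonical request with HMAC-SHA256
signature = hmac.new(secret, canonical.encode(), hashlib.sha256).hexdigest()

# The client would then send the signature in a header such as "X-Signature"
# instead of sending the API secret itself over the wire
```

The devil is in the details: which parts of the request go into the canonical string, how they are normalized, and how replay attacks are prevented - exactly the things proprietary schemes tend to get wrong.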

7) HTTP basic authentication is great for debugging and getting-started scenarios but should not be used in production. The problem is that client credentials are sent in clear text and are thus susceptible to accidental exposure, as mentioned before. Your API should support basic authentication, as it makes it possible to explore the API using a standard web browser - but it should be disabled in production.

8) It should be possible to revoke API keys in case they are compromised in some way.

What kind of technique would solve all of these issues? Well, OAuth2 is one standard solution; it uses the HTTP "Authorization" header (thus avoiding authentication data in the URL and body), it is a standard, and used correctly it protects clients from exposing their API keys to the server and over the wire.

Another solution is to use SSL/TLS with client certificates. With the proper HTTP client libraries this can be as easy as loading a certificate (one line of code) and assigning it to the HTTP request object (a second line of code). This approach cannot, however, authorize a combination of both client credentials and end user credentials.
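In Python, for instance, this boils down to loading the certificate into a TLS context and handing that context to the connection. A minimal sketch using only the standard library (the file paths are hypothetical, and the actual connection is left out):

```python
import ssl

# Create a TLS context with default verification settings
context = ssl.create_default_context()

# Load the client certificate and private key (hypothetical file paths)
# context.load_cert_chain(certfile="client.crt", keyfile="client.key")

# The context would then be passed to the HTTP client, e.g.
# http.client.HTTPSConnection("server.example.com", context=context)
```

The server verifies the certificate during the TLS handshake, so no credentials ever travel in the request itself.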

OAuth2


OAuth2 roles and terms


Before jumping into OAuth2 I had better explain some of the terms used when talking about it (mostly copied from the RFC at http://tools.ietf.org/html/rfc6749#section-1.1):

- Protected resources: the data you want to protect.

- Resource owner: An entity capable of granting access to a protected resource. The "entity" is often a person - the end user.

- Resource server: The server hosting the protected resources.

- Client: An application making protected resource requests on behalf of the resource owner. This can for instance be a mobile application, a website or a background integration process impersonating an existing end user.

- Client credentials: a pair of client ID and client secret. This could be your developer or application ID and API key.

- Resource owner password credentials: the typical user-name/password combination issued to an end user.

OAuth2 flows


OAuth2 has four different modes of operation (called flows) - going from the relatively complex 3-legged authorization flow to the very simple "client credentials" authorization flow. At least one of the flows should fit just about any public API ecosystem out there - and if not, it is possible to add new extension flows (as Google does, for instance, with its JWS implementation).

At its core OAuth2 has only two high level steps:

  1) Swap a set of user and/or client credentials for an access token (authorization step), and

  2) Use the access token to access the API resources.

That's it. It can be very simple. The difficult part of OAuth2 is the many ways a client can obtain an access token.

Here I will focus on a scenario with a trusted client that accepts user credentials and acts on the users' behalf. This can be very useful for many internal integration patterns where no humans are involved. I won't cover the scenario where an untrusted client (a third party website) needs to access the end user's resources using the 3-legged authorization flow.

Access tokens and bearer tokens


At a suitably high level OAuth2 has only two steps, as mentioned before: 1) authorization, and 2) resource access. The output of a successful authorization step is always an access token which must be used in subsequent requests to the resource server (the API).

In most cases the access token is a so-called "bearer token" - a token which the bearer can present to gain access to resources. Such a token has no built-in semantics and should be treated as nothing but an opaque string.

The bearer token is included in the HTTP Authorization header like this:

     GET /resource HTTP/1.1
     Host: server.example.com
     Authorization: Bearer SOME-TOKEN
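In code this is nothing more than setting one header. A minimal sketch using Python's standard library - the host and token are the placeholder values from the example above, and the actual network call is commented out:

```python
import urllib.request

# Access token obtained in the authorization step (placeholder value)
token = "SOME-TOKEN"

# Attach the bearer token in the standard Authorization header
req = urllib.request.Request(
    "https://server.example.com/resource",
    headers={"Authorization": f"Bearer {token}"},
)
# body = urllib.request.urlopen(req).read()
```

Every request to the protected API carries this header; nothing about the URL or body changes per user.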


Authorizing with client credentials only


This is just about the simplest flow possible (see http://tools.ietf.org/html/rfc6749#section-4.4): all the client has to do is send its client credentials (ID and secret) as if it were using HTTP basic authentication. In return the client receives an access token it can use in subsequent requests for the protected resources. Here is an example from the RFC:

     POST /token HTTP/1.1
     Host: server.example.com
     Authorization: Basic czZCaGRSa3F0MzpnWDFmQmF0M2JW
     Content-Type: application/x-www-form-urlencoded

     grant_type=client_credentials


The response could be:

     HTTP/1.1 200 OK
     Content-Type: application/json;charset=UTF-8
     Cache-Control: no-store
     Pragma: no-cache

     {
       "access_token":"2YotnFZFEjr1zCsicMWpAA",
       "token_type":"bearer"
     }


This flow allows the client to act on behalf of itself but it does not include any end user information.
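The token request above can be built in a few lines of Python using only the standard library. The client ID and secret below are the demo values encoded in the RFC's Authorization header, and the actual network call is commented out since the token endpoint is fictional:

```python
import base64
import urllib.request

# Demo client credentials matching the RFC example's Authorization header
client_id, client_secret = "s6BhdRkqt3", "gX1fBat3bV"

# HTTP basic authentication: Base64 of "id:secret"
basic = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()

req = urllib.request.Request(
    "https://server.example.com/token",
    data=b"grant_type=client_credentials",
    headers={
        "Authorization": f"Basic {basic}",
        "Content-Type": "application/x-www-form-urlencoded",
    },
)
# The JSON response would contain the access token:
# token = json.loads(urllib.request.urlopen(req).read())["access_token"]
```

Note that the Basic header computed here is exactly the one shown in the RFC request above.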


Authorizing with both client credentials (API key) and user credentials (password)


If a client needs to act on behalf of the end user (the resource owner) then it can use the "Resource Owner Password Credentials Grant". This flow is almost as simple as the previous one - all the client has to do is add the end user's credentials to the token request (the client credentials still go in the Authorization header). Here is an example from the RFC:

     POST /token HTTP/1.1
     Host: server.example.com
     Authorization: Basic czZCaGRSa3F0MzpnWDFmQmF0M2JW
     Content-Type: application/x-www-form-urlencoded

     grant_type=password&username=johndoe&password=A3ddj3w


The response could be:

     HTTP/1.1 200 OK
     Content-Type: application/json;charset=UTF-8
     Cache-Control: no-store
     Pragma: no-cache

     {
       "access_token":"2YotnFZFEjr1zCsicMWpAA",
       "token_type":"bearer"
     }


This flow allows the client to act on behalf of the end user. The client application must be trusted as the end user will have to pass his/her credentials to it.
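Building the form-encoded body for this grant is a one-liner. The sketch below reproduces the RFC example values using Python's standard library:

```python
from urllib.parse import urlencode

# Resource owner credentials from the RFC example
form = urlencode({
    "grant_type": "password",
    "username": "johndoe",
    "password": "A3ddj3w",
})
# This body is POSTed to the token endpoint together with the
# client's basic authentication header, exactly as in the previous flow
```

The only difference from the client credentials flow is this body - the token response is handled identically.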


Protecting credentials in transit


There is one big problem with the two simple flows mentioned above: both of them pass the client and end user credentials in clear text. For this reason OAuth2 makes it mandatory to use TLS/SSL to encrypt all requests. Nor do these flows cover scenarios with untrusted clients (such as a third party website that wants to access an end user's resources on a different server) - that scenario is covered by the "Authorization Code Grant", which I will not discuss here.

The use of TLS/SSL protects the credentials from eavesdropping and man-in-the-middle attacks but it does not protect against sending credentials to the wrong server - either by misconfiguration or because the server has been compromised.

This lack of protection is perhaps one of the biggest differences between OAuth1 and OAuth2; OAuth1 protects credentials by signing requests instead of sending the raw credentials. This requires a rather complex signing algorithm which has caused many headaches over time. OAuth2 trades the protection of credentials for simplicity, which makes it a lot easier to use - but susceptible to leaking credentials to rogue servers.

It is possible to protect credentials from this problem, but it requires clients to use some complex crypto to sign requests instead of sending the raw credentials (just like OAuth1 does). This is what Google does when it requires requests to be signed using JSON Web Signatures (JWS).
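For the curious: a JWS compact serialization is just three Base64url-encoded parts joined by dots - header, payload and signature. The sketch below signs with HMAC-SHA256 (HS256) for brevity, whereas Google's flow uses RSA signatures; the key and claims are made up:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # Base64url without padding, as required by the JWS spec
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
payload = b64url(json.dumps({"iss": "my-client-id"}).encode())

# The signature covers "header.payload"
signing_input = f"{header}.{payload}".encode()
signature = b64url(hmac.new(b"shared-secret", signing_input, hashlib.sha256).digest())

jws = f"{header}.{payload}.{signature}"
```

The server recomputes the signature from the shared key, so the key itself never travels over the wire - which is exactly the protection the plain flows lack.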

If you are using .NET then my Ramone HTTP library has support for OAuth2 with JWS as can be seen in this example: http://soabits.blogspot.com/2013/03/using-ramone-for-oauth2-authorization.html.


Protecting credentials in clients


One more word of caution: websites, mobile apps, desktop apps and similar public clients cannot protect their client credentials. Other applications may be able to (server to server integrations, for instance), but public clients, be it JavaScript, byte code or assembler, can always be downloaded, decompiled and scrutinized by various forensic tools, and in the end it is impossible to keep client secrets from being stolen. This is a fact we have to live with.

Client credentials can be useful for statistics. They also give the ability to revoke credentials in order to block rogue applications - but they cannot be trusted in general.

Further reading

Eran Hammer has a discussion about the drawbacks of OAuth2 at http://hueniverse.com/2012/07/26/oauth-2-0-and-the-road-to-hell/ and here is another discussion of the drawbacks of bearer tokens http://hueniverse.com/2010/09/29/oauth-bearer-tokens-are-a-terrible-idea/
