
Add built-in option for handling expiry latency

Created by: klippx

We are using oauth2 in our Api.

When we make a request we use

    class Api
      def get(*args)
        with_error_handling(args) { connection.get(*args) }
      end
    end

The implementation of connection constructs the request headers, setting 'Authorization' => "Bearer #{access_token}", where access_token is fetched dynamically:
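For context, this is roughly what that looks like; Faraday and the base_url accessor are assumptions for illustration, the relevant point being that access_token is read each time the connection (and therefore its headers) is built:

    require 'faraday'

    # Illustrative sketch of `connection`, not the real implementation.
    # Because it is not memoized, every call re-reads access_token.
    def connection
      Faraday.new(
        url: base_url,
        headers: { 'Authorization' => "Bearer #{access_token}" }
      )
    end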

    # Manages the access_token (login, refresh)
    #
    # @return [String]
    def access_token
      if !token_store.get
        get_new_token
      elsif token_store.get.expired?
        begin
          refresh_token
        rescue OAuth2::Error
          get_new_token
        end
      end

      token_store.get.token
    end

token_store is a Concurrent::MutexAtomicReference in which we store the OAuth2::AccessToken instance and read it back with get. As you can see, we rely on expired? here.
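For completeness, a hypothetical sketch of those pieces; the OAuth2 client setup and the helper names are illustrative, not our exact code:

    require 'concurrent'
    require 'oauth2'

    def oauth_client
      @oauth_client ||= OAuth2::Client.new(client_id, client_secret, site: token_site)
    end

    # Atomic storage for the current OAuth2::AccessToken.
    def token_store
      @token_store ||= Concurrent::MutexAtomicReference.new
    end

    # Fetches a brand-new token and stores it.
    def get_new_token
      token_store.set(oauth_client.client_credentials.get_token)
    end

    # Exchanges the current token for a fresh one.
    def refresh_token
      token_store.set(token_store.get.refresh!)
    end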

The implementation of with_error_handling checks the response status, and if it is a 401 it will try again:

    def with_error_handling(args = nil, retries = 1, &block)
      response = yield

      # On a 401, run the block again so the request (and its Authorization
      # header) is rebuilt with whatever access_token returns next time.
      if response.status == 401 && retries.positive?
        return with_error_handling(args, retries - 1, &block)
      end

      response
    end

The idea is that on retry we rebuild the headers, which calls access_token again; this time expired? should return true and we would refresh the token. But because of latency, and because the client computes the expiry as Time.now + expires_in, the token can look unexpired on the client side while it is already expired on the server side. The retry therefore reuses the same bearer token and gets another 401.
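A workaround we could apply on our side is to treat the token as expired slightly before its client-side expires_at; the helper name and the 30-second value below are our own assumptions, not part of oauth2:

    # Hypothetical client-side leeway check: consider the token expired a bit
    # early so request latency cannot push us past the real, server-side expiry.
    EXPIRY_LATENCY = 30 # seconds, an assumed value

    def effectively_expired?(token)
      token.expires? && Time.now.to_i >= (token.expires_at - EXPIRY_LATENCY)
    end

Our access_token method would then check effectively_expired?(token_store.get) instead of expired?, but that duplicates logic which arguably belongs in the token itself.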

We do not want to force a token refresh inside the with_error_handling block on every 401; we would rather the oauth2 client itself recognize that the token is probably expired.
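Something along these lines is the shape we have in mind; the expires_latency option name is illustrative of the proposal, not an existing setting:

    # Proposal sketch: let the token shrink its own client-side lifetime.
    token = OAuth2::AccessToken.new(
      oauth_client,
      raw_token,            # raw_token and oauth_client are assumed names
      expires_in: 3600,
      expires_latency: 30   # proposed option, not necessarily in the gem
    )

    token.expired? # => true once within 30 seconds of the real expiry

With that, with_error_handling could stay exactly as it is and the retry would naturally pick up a refreshed token.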