API Clients - Best Practice Guide
The following best practices will help ensure you have a reliable and secure integration with any external API.
Securing your API keys properly will reduce the likelihood of your credentials being hijacked.
Key scoping - Use separate keys for production, development, and testing environments. If you have different applications,
modules, or programs that use the same APIs, then use separate keys for those too. Scoping your keys reduces the impact
of a compromised key.
Restrict key properties - Restrict keys to their smallest possible use-case. For example, if a key is only needed for a
particular endpoint, then only allow the key access to that endpoint. This is often referred to as the principle of least privilege (PoLP).
Keep keys out of source code - Store your keys in environment variables or in files outside of your source code. This keeps
exposure of your keys low and is particularly important if you use public code management tools like GitHub.
Avoid shipping keys in client-side code at all; if you must, be sure to follow all of the other key protection measures, and
obfuscate or preferably encrypt your API key so it's not easily extracted from your client-side code.
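As a minimal sketch in Python (the variable name API_KEY is a placeholder; use whatever your deployment defines), reading the key from the environment at startup and failing fast if it's missing might look like:

```python
import os

def load_api_key(var_name: str = "API_KEY") -> str:
    """Read an API key from the environment; fail fast if it's missing."""
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(f"{var_name} environment variable is not set")
    return key
```

Failing at startup rather than on first use makes a missing or misconfigured key obvious immediately, instead of surfacing later as a confusing authentication error.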
Often overlooked is setting an appropriate timeout value on your HTTP client. On many platforms the default is 60 seconds
or more, which is far too long for most scenarios. Set your HTTP and TCP timeouts to values appropriate for the given API's characteristics.
Doing so ensures requests fail quickly in the event of a network outage or other connectivity issue.
If an API request does fail, should you retry it? That largely depends on how critical the response is to you: if it's
important, then retrying the request at least one more time is usually a good idea. You will need to think about how many times to retry and what delay
to use between retries.
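One common approach to the delay question is exponential backoff, doubling the wait after each failure. A sketch of a retry helper along those lines, where request_fn is a hypothetical zero-argument callable standing in for your actual API call:

```python
import time

def call_with_retries(request_fn, max_attempts: int = 3, base_delay: float = 0.5):
    """Retry a failing call with exponential backoff between attempts.

    request_fn is assumed to perform the API request and raise on failure.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return request_fn()
        except Exception:
            if attempt == max_attempts:
                raise  # out of attempts; let the caller decide what to do
            # Exponential backoff: 0.5s, 1s, 2s, ... for the defaults above
            time.sleep(base_delay * 2 ** (attempt - 1))
```

Only retry operations that are safe to repeat; a retried request that already succeeded server-side can cause duplicate effects unless the API is idempotent.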
This is a general approach to software design but applies well to API clients. Try to write your API integration in such a way that your
application can continue to function even if the API fails or returns unexpected data. Consider the various ways the API could
respond, such as different HTTP status codes, and handle each of them gracefully. It's also a good idea to incorporate a solid error reporting/alerting system.
Finally, if you're going to be making a large number of API requests, then connection pooling or connection reuse will be significantly more efficient than opening
a new HTTP connection for every single request. This is usually a feature of your HTTP client that you can configure and enable;
most mature HTTP client libraries provide it out of the box.