A nice specification for doing partial modifications through a REST API: this is something where it's always hard to decide which option is best.
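One well-known option in this space is JSON Merge Patch (RFC 7386): the client sends only the fields it wants to change, and `null` removes a field. A minimal sketch of the merge algorithm (my own illustration, not from the talks):

```python
def merge_patch(target, patch):
    """Apply an RFC 7386 JSON Merge Patch to a target document."""
    if not isinstance(patch, dict):
        # A non-object patch replaces the target entirely
        return patch
    if not isinstance(target, dict):
        target = {}
    result = dict(target)
    for key, value in patch.items():
        if value is None:
            # null means "remove this member"
            result.pop(key, None)
        else:
            result[key] = merge_patch(result.get(key), value)
    return result

user = {"name": "Ada", "email": "ada@example.com", "role": "admin"}
patched = merge_patch(user, {"email": None, "role": "editor"})
print(patched)  # {'name': 'Ada', 'role': 'editor'}
```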

JSON:API was the most mentioned specification at the conference: it's also the one we in the SeekVerify team chose to follow from now on.
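For reference, a hand-written example (the article/people data is invented) of the JSON:API shape: every resource carries `type`/`id`/`attributes`, relationships point to other resources by reference, and related data can be sideloaded in `included`:

```python
# A minimal JSON:API-style document for a single article (illustrative only)
document = {
    "data": {
        "type": "articles",
        "id": "1",
        "attributes": {"title": "Rate limiting 101"},
        "relationships": {
            "author": {"data": {"type": "people", "id": "9"}},
        },
    },
    "included": [
        {"type": "people", "id": "9", "attributes": {"name": "Sam"}},
    ],
    "links": {"self": "/articles/1"},
}

def resolve(doc, ref):
    """Find a sideloaded resource in 'included' by its type/id reference."""
    return next(
        (r for r in doc.get("included", [])
         if r["type"] == ref["type"] and r["id"] == ref["id"]),
        None,
    )

author_ref = document["data"]["relationships"]["author"]["data"]
print(resolve(document, author_ref)["attributes"]["name"])  # Sam
```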

Martin Fowler wrote about the Richardson Maturity Model: even if you are not using HATEOAS you could still be doing REST. There are different levels, so we can choose the one that fits our specific case best:

Filtering data protocol:
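As an illustration of one common convention (JSON:API reserves the `filter` query-parameter family for this; the parsing sketch is my own), filters arrive as `filter[field]=value` pairs:

```python
import re
from urllib.parse import parse_qsl

def parse_filters(query_string):
    """Extract JSON:API-style filter[field]=value pairs from a query string."""
    filters = {}
    for key, value in parse_qsl(query_string):
        match = re.fullmatch(r"filter\[(\w+)\]", key)
        if match:
            filters[match.group(1)] = value
    return filters

print(parse_filters("filter[status]=active&filter[role]=admin&page=2"))
# {'status': 'active', 'role': 'admin'}
```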

API Versioning

Is it good or not? Should it be done through headers or the path? We had more doubts after the conference than before…

I think the best option, if you can, is to avoid versioning. But in most cases you cannot, so you should at least keep the previous version alive for as little time as possible, and set a clear deadline even before starting to develop the new version, so you won't end up maintaining two API versions forever.

Header or path… Kin Lane argued for using the path, since it is much clearer for developers and is automatically recorded in the logs.
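To make the two options concrete, here is my own sketch of extracting the version each way (the `vnd.seekverify` media type is invented for the example):

```python
import re

def version_from_path(path):
    """Path-based versioning: '/v2/users' -> ('2', '/users')."""
    match = re.match(r"^/v(\d+)(/.*)$", path)
    return (match.group(1), match.group(2)) if match else (None, path)

def version_from_headers(headers):
    """Header-based versioning via a vendor media type,
    e.g. Accept: application/vnd.seekverify.v2+json (hypothetical name)."""
    accept = headers.get("Accept", "")
    match = re.search(r"vnd\.\w+\.v(\d+)\+json", accept)
    return match.group(1) if match else None

print(version_from_path("/v2/users"))  # ('2', '/users')
print(version_from_headers({"Accept": "application/vnd.seekverify.v2+json"}))  # 2
```

The path variant shows up in every access log for free, which is exactly the argument for it.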


We definitely should use more hypermedia links. They are for machines what website links are for humans.

That way, API client behaviour can be decoupled from specific paths.

They are also a kind of documentation embedded in the API response itself: for a specific entity we know which actions we can perform, and in a list we know the path to get the next page of results.
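A HAL-style sketch of that pagination case (hand-written example): the client only knows link relation names like `next`, never path templates:

```python
# A HAL-style paginated response (illustrative data)
page = {
    "_links": {
        "self": {"href": "/orders?page=2"},
        "next": {"href": "/orders?page=3"},
        "prev": {"href": "/orders?page=1"},
    },
    "_embedded": {"orders": [{"id": 123, "status": "shipped"}]},
}

def href(resource, rel):
    """Follow a link by its relation name instead of hard-coding paths."""
    link = resource.get("_links", {}).get(rel)
    return link["href"] if link else None

print(href(page, "next"))  # /orders?page=3
```

If the server later moves pagination to cursor-based URLs, this client keeps working unchanged.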

For the entry point there is JSON Home, a specification for that response:

So a client that doesn't know our API structure can hit the root path and see all the options it has, which is pretty good.
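Roughly, a home document maps link relations to resources, so discovery works by relation name (this example follows the shape of the json-home draft from memory, so treat it as illustrative):

```python
# An illustrative JSON Home-style document for an API entry point
home = {
    "api": {"title": "Example API"},
    "resources": {
        "https://example.com/rel/widgets": {"href": "/widgets"},
        "https://example.com/rel/widget": {
            "hrefTemplate": "/widgets/{widget_id}",
            "hrefVars": {"widget_id": "https://example.com/param/widget_id"},
        },
    },
}

# A client discovers entry points by relation, not by hard-coded path:
print(home["resources"]["https://example.com/rel/widgets"]["href"])  # /widgets
```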

Hypermedia Representations

  • JSON:API: (most widely used)
  • HAL: (seems to be the simpler one)
  • Siren:
  • Collection+JSON:
  • UBER:
  • ALPS:
  • Hydra:
  • JSON-LD: (used by Google)
  • Mason:

Regarding the relation types used in the links, there is a specification with a lot of options:

Specifications (OpenAPI, Swagger…)

Creating the specification first is useful: you already have documentation, you can even test the API against it, and the cool thing is that everything is in a single place.
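To show what spec-first looks like, here is a minimal hand-written OpenAPI 3 fragment (the paths and schema are invented for illustration):

```yaml
openapi: 3.0.3
info:
  title: Example API
  version: "1.0"
paths:
  /users/{id}:
    get:
      parameters:
        - name: id
          in: path
          required: true
          schema: {type: string}
      responses:
        "200":
          description: A single user
          content:
            application/json:
              schema: {$ref: "#/components/schemas/User"}
components:
  schemas:
    User:
      type: object
      properties:
        id: {type: string}
        name: {type: string}
```

From a file like this you can generate documentation, mocks, and contract tests before writing any server code.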

Also, those specifications should live in the project, close to the code.

Online Swagger editor:

Online OpenAPI editor:

Tool to convert specification files to another format:

Tool that allows you to design the API and even create mocks, saving all the files in GitHub:

A lot of times we need to create a schema to represent some model (person, address…); we could reuse an existing one from:

API Clients

Developing clients for your own API has some benefits; for example, you get the client's point of view.

Also, when a version change is necessary it can be done easily: you just prepare a new version of the API client and notify your consumers to update their packages.
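A thin client might look like this sketch (all names are invented; the transport is injectable so the client can be tested without a network):

```python
import json
from urllib.request import urlopen

class SeekVerifyClient:
    """A thin API client sketch (hypothetical names, for illustration)."""

    def __init__(self, base_url, fetch=None):
        self.base_url = base_url.rstrip("/")
        # Injectable transport: real HTTP by default, a fake in tests
        self._fetch = fetch or self._http_get

    def _http_get(self, url):
        with urlopen(url) as resp:
            return resp.read().decode()

    def get_user(self, user_id):
        return json.loads(self._fetch(f"{self.base_url}/users/{user_id}"))

# Injecting a fake transport instead of doing real HTTP:
fake = lambda url: json.dumps({"id": "42", "url": url})
client = SeekVerifyClient("https://api.example.com", fetch=fake)
print(client.get_user("42")["url"])  # https://api.example.com/users/42
```

Bumping the client's major version is then the natural place to absorb breaking API changes for your consumers.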

Securing APIs

We saw some examples about how to secure our APIs.

All of them used an API Gateway as the entry point for all the services, and that gateway is the one in charge of asking a separate OAuth service to validate/translate (by-reference to by-value) the incoming tokens. That way, none of the services needs that code.
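The by-reference-to-by-value exchange can be sketched like this (a conceptual toy: the introspection step stands in for an RFC 7662 call to the OAuth server, and a real gateway would mint a signed JWT rather than base64-encoding claims):

```python
import base64
import json

def introspect(token, token_store):
    """Stand-in for the OAuth server's token introspection endpoint."""
    return token_store.get(token, {"active": False})

def gateway_forward(opaque_token, token_store):
    """Exchange an opaque (by-reference) token for a by-value one
    before forwarding the request to internal services."""
    claims = introspect(opaque_token, token_store)
    if not claims.get("active"):
        return None  # rejected at the edge; services never see bad tokens
    # Illustration only: encode the claims so services can read them directly
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    return {"Authorization": f"Bearer {payload}"}

store = {"abc123": {"active": True, "sub": "user-9", "scope": "read"}}
print(gateway_forward("abc123", store) is not None)  # True
print(gateway_forward("expired", store))             # None
```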

Two 45-minute talks weren't enough to learn exactly how to build a system like that, but it is definitely something worth learning.

Kong Gateway:

Tyk Gateway:


Simona Cotin from Microsoft gave a short introduction to serverless with NodeJS on Azure Functions:

I hadn't seen Azure Functions before, just AWS Lambda, but with VSCode and the Azure Functions plugin it is pretty easy to deploy functions.
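The programming model is essentially "a function per HTTP trigger". Stripped of the Azure SDK types, the handler logic reduces to something like this conceptual sketch (request/response shapes are my own simplification, not the real SDK):

```python
def handler(req):
    """An HTTP-trigger-style function: request dict in, response dict out.
    (Conceptual only; the real Azure Functions runtime wraps this in its
    own request/response objects.)"""
    name = req.get("params", {}).get("name") or "world"
    return {"status": 200, "body": f"Hello, {name}!"}

print(handler({"params": {"name": "Simona"}})["body"])  # Hello, Simona!
```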

It was also funny to see a Microsoft employee working on a MacBook, showing the email examples in Gmail, and developing with NodeJS; how different Microsoft has become…

She shared an interesting tool to compare the costs of functions across several providers: