Why We Love Using the jsonapi.org Methodology for Most APIs
We've been using the jsonapi.org methodology for a lot of our projects during the last few years, and it's helped to iron out so many issues we experienced in our attempts to find the perfect API architecture.

It's important for the client to be in control, and we've found that, especially for medium to large projects, there are really only two mainstream specifications that give that control to the client at the moment: jsonapi.org and GraphQL.

The GraphQL hype seems to be growing recently, especially with the Honeypot GraphQL documentary last year. We appreciate GraphQL for its aesthetics, especially the expressive query structure; however, we've been slow to adopt it in practice because, compared to jsonapi.org, it has a lot of drawbacks.

Firstly, GraphQL is conventionally served from a single endpoint, where all requests for all data, regardless of the modelling, are made. Secondly, requests to this endpoint are typically sent as POST requests, even for reading data, which would traditionally be a GET request.

Why does this matter?

Understanding the available resources

Since the endpoint URL convention in GraphQL is not representational, it's generally harder to understand all the available resources at a top level. It requires more documentation than, say, just a Postman collection, or at least requires the developer to dig deeper into the queries themselves to understand the architecture and the relationships between models. It also requires the developer to read through the documentation to see which properties can be used for filtering, as opposed to jsonapi.org, where you can just look at the URL parameters in a Postman collection.

With REST the URLs are, of course, representative of the resource, and with jsonapi.org the URL parameters for filtering, sparse fieldsets and includes give you an idea of the relationships between resources, and the kinds of fields that can be filtered, just by glancing at the URL and its parameters.
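
For example, a jsonapi.org request might look like this (the /articles resource, field names and filter are hypothetical, but the include, fields and filter parameters follow the spec):

```typescript
// A hypothetical jsonapi.org request (run inside an async function or ES module).
const url =
  "https://api.example.com/articles" +
  "?include=author,comments" +          // pull in related resources in one call
  "&fields[articles]=title,body" +      // sparse fieldset: only these fields
  "&filter[published]=true";            // filter strategy is server-defined

const response = await fetch(url, {
  headers: { Accept: "application/vnd.api+json" }, // the JSON:API media type
});
const doc = await response.json();      // { data: [...], included: [...] }
```

Even without documentation, the parameters alone suggest that articles have authors and comments, and that they can be filtered on published.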

Caching

GraphQL conventionally uses POST for all requests, so unlike REST it can't take advantage of default browser HTTP caching, which only applies to GET requests.

To cache GraphQL requests at the HTTP level, you'd have to implement caching in a service worker to cache all POST requests to the remote GraphQL URL. Since the URL is always the same, you'd have to hash the body of the request, which is the GraphQL query, and use it as part of the cache key.
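
As a rough sketch, assuming a hypothetical endpoint URL, a named cache and no invalidation strategy, that service worker might look something like this:

```typescript
// Minimal sketch of the service-worker workaround described above.
// The Cache API can't key on POST requests, so we hash the query body
// and cache against a synthetic GET-style URL instead.
declare const self: ServiceWorkerGlobalScope;

const GRAPHQL_URL = "https://api.example.com/graphql"; // hypothetical endpoint

async function hashBody(body: string): Promise<string> {
  const digest = await crypto.subtle.digest(
    "SHA-256",
    new TextEncoder().encode(body)
  );
  return [...new Uint8Array(digest)]
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

self.addEventListener("fetch", (event: FetchEvent) => {
  const { request } = event;
  if (request.method !== "POST" || request.url !== GRAPHQL_URL) return;

  event.respondWith(
    (async () => {
      const body = await request.clone().text();            // the GraphQL query
      const cacheKey = `${GRAPHQL_URL}?hash=${await hashBody(body)}`;
      const cache = await caches.open("graphql");

      const cached = await cache.match(cacheKey);
      if (cached) return cached;                             // serve from cache

      const response = await fetch(request);
      await cache.put(cacheKey, response.clone());           // store for next time
      return response;
    })()
  );
});
```

That's a lot of machinery to recreate something the browser gives you for free with a GET request.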

Anti Microservices

Microservice architecture is one of the best ways to achieve scale in a managed way and is fast becoming the norm for large applications. The beauty of having different URLs for different endpoints is that you can route traffic to the right service via Nginx path matching, which makes the architecture more scalable.

It's not as simple with a single endpoint, as you would need to implement a gateway 'middleman' that intercepts and reads each query, liaising with the different microservices to build a response.
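
To illustrate the difference, here's a stripped-down sketch of path-based routing; in reality this job usually sits with Nginx or a load balancer, and the service hostnames and paths below are made up. With a single GraphQL endpoint there is no path to match on, so a gateway has to parse each query to work out where it should go.

```typescript
// Minimal sketch: forward each resource path to the microservice that owns it.
import http from "node:http";

const routes: Record<string, string> = {
  "/articles": "http://articles-service:3000", // hypothetical internal hosts
  "/authors": "http://authors-service:3000",
};

http
  .createServer(async (req, res) => {
    const prefix = Object.keys(routes).find((p) => req.url?.startsWith(p));
    if (!prefix) {
      res.writeHead(404);
      res.end();
      return;
    }

    // Forward the request to the service that owns this resource.
    const upstream = await fetch(routes[prefix] + req.url, {
      method: req.method,
    });
    res.writeHead(upstream.status, {
      "content-type":
        upstream.headers.get("content-type") ?? "application/vnd.api+json",
    });
    res.end(Buffer.from(await upstream.arrayBuffer()));
  })
  .listen(8080);
```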

Error Handling

This is a nightmare waiting to happen. Whether you are an experienced developer or brand new, robust error handling around fetch calls is one of the most important aspects of scalable client-side code, and GraphQL doesn't care.

GraphQL typically responds to 'successful' requests (as in, the request hit the server and the server hasn't melted) with a 200 HTTP status code, but the body can still contain an array of errors. Granted, you could look for errors in the body and throw an exception with the error object, but client-side you then have to decipher what went wrong and show the appropriate messages to the user.
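
In practice that workaround looks something like this (the endpoint URL and messages are illustrative; the errors array in a 200 response is the GraphQL convention):

```typescript
// GraphQL: the status is 200 even when the query failed, so we have to
// dig into the body to find out what happened.
async function graphqlFetch<T>(query: string): Promise<T> {
  const response = await fetch("https://api.example.com/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  });

  const payload = await response.json();
  if (payload.errors?.length) {
    throw new Error(
      payload.errors.map((e: { message: string }) => e.message).join("; ")
    );
  }
  return payload.data as T;
}

// REST/jsonapi.org: the status code alone tells us the request failed.
async function jsonApiFetch<T>(url: string): Promise<T> {
  const response = await fetch(url, {
    headers: { Accept: "application/vnd.api+json" },
  });
  if (!response.ok) throw new Error(`Request failed with ${response.status}`);
  return (await response.json()).data as T;
}
```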

Not relying on HTTP error codes is a big enough reason to abandon the spec.

Speed

GraphQL is fast out of the box, but when you start working with multiple services and combining data requests, performance suffers. This downside hits hardest if you're attempting to build a microservice architecture. Combine it with the inability to cache responses easily, due to always using the POST verb, and you have a real problem on your hands in production.

The jsonapi.org spec solves these issues: you immediately capitalise on routing and caching to gain speed, simply because you are using different URLs and the GET verb for reading data.

Ok, so why do companies even bother with GraphQL?

Despite its drawbacks, we're not blind to the issues GraphQL solves: primarily bloated responses, with REST APIs returning all of a resource's data even when it isn't required, and its ability to merge several resources into one request, which cuts the number of requests needed to populate the client-side view.

It can be used well, but like all technologies, it has its place. If you're Facebook and have a news feed, it's a perfect utility. For most situations, it's verbose, clunky and needs much more configuration than a specification like jsonapi.org.

When dealing with projects with many services that need to scale well, we will remain RESTful and follow the jsonapi.org architecture, as it solves most of the problems with traditional REST and, contrary to current trends, hasn't been overshadowed by GraphQL.
