Let’s say you have an API which you want to make available to multiple consumers. For example, your clients are a mobile application and the web. What do you use for communication between server-side and client-side developers? You may write documentation in plain text, use some spreadsheets, or whatever you prefer. It’s ok while it works. But it’s good to know that there is a pretty standard way for this kind of thing — OpenAPI, previously known as Swagger. You also have other options, such as enforced data schemas (gRPC, Thrift) or even GraphQL. But that’s a topic for another blog post, I believe.

So, we are going with OpenAPI for now, good. The next question is how you write and update such documentation. Obviously you can do it manually. The problem is that by definition an OpenAPI specification is one giant YAML (or JSON) file which contains all your endpoints, data types, parameters, bodies, etc. It’s quite challenging to manage such a huge structured text in a single file.


As a result, there are many different attempts to autogenerate the specification from the code. Usually they are based on special comments attached to HTTP endpoints in your favourite framework, or something like that. The good part of this approach is that the documentation becomes embedded into your code. The bad part is that the comments spread all over the place and make regular code reading much harder, at least for me.

An alternative approach is to generate the specification from tests. The huge benefit of this option is that your specification not only defines things but also checks them: an endpoint appears in the document only if its test has passed.

Sadly, any autogeneration is based on some DSL defined on top of the YAML syntax we already have in the OpenAPI specification. What does this mean in practice? First of all, you’ll need to learn both OpenAPI and this fancy DSL in order to use it and to solve possible conversion problems. Usually the authors of such libraries try to stay as close to the spec as they can, but it’s quite hard, and the result may not feel native in the language you are using. For example, in Ruby it’s common to use snake_cased identifiers, but OpenAPI prefers camelCase. Let the fight begin!

Also, any DSL on top of another language is always one step behind the language itself. When Swagger became OpenAPI, it took some time for the DSL libraries to catch up and support the new standard. Some of them, of course, never changed because they are poorly supported at the moment, or even abandoned.

So, my personal preference is to use the OpenAPI spec directly: write and update it manually, but split it into files and folders according to the structure of the final document.

Here is an example layout (the folder names mirror the top-level keys of an OpenAPI document; the root file name is illustrative):

    openapi.yaml   # defines the root element with info and servers nodes
    components/    # shared components (schemas, responses, request bodies, etc.)
    paths/         # available paths to endpoints, one file per verb

Such a folder layout allows you to find the appropriate file pretty easily when you are editing the documentation. It’s mapped 1:1 to the structure defined in the OpenAPI spec. At the same time, you keep using plain YAML files without any intermediate layers.

Nice, but how can you compile the final spec out of it? By script, of course! 😉 Let’s do a little bit of magic. The spec sources live in their own folder, as in the tree above; the building script sits next to them, and the assembled version is written to a separate target directory. We are going to produce it in both YAML and JSON formats.

The first step is to read the root file of the specification. It defines our starting point: a hash with the basic information about the service.
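A minimal sketch of this step in Ruby (the source folder and the `openapi.yaml` file name are assumptions of this sketch, not something the article prescribes):

```ruby
require 'yaml'

# Reads the root file of the specification into a Hash. It carries the
# openapi version plus the info and servers nodes every document needs.
# The file name "openapi.yaml" is an assumption for this sketch.
def load_root(source_dir)
  YAML.load_file(File.join(source_dir, 'openapi.yaml'))
end
```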

The next step is to add the shared components into the hash. In this folder the path to a file reflects the path to the node in the final document, so we just need to iterate over all YAML files in all subfolders, read each file’s content, split its path into the appropriate nested keys, and merge the content in the end. For example, a schema definition stored in the components folder should become available under the corresponding nested keys in our final hash.
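Here is how that iteration might look — a sketch assuming the shared definitions live under a `components/` folder whose layout mirrors the document:

```ruby
require 'yaml'
require 'pathname'

# Merges every YAML file under "<source_dir>/components" into the document.
# A file's relative path becomes a chain of nested keys, so a file at
# components/schemas/pet.yaml ends up under
# document['components']['schemas']['pet'].
def merge_components(document, source_dir)
  Dir.glob(File.join(source_dir, 'components', '**', '*.yaml')).each do |file|
    # The path relative to the source folder, without the extension,
    # split into its segments.
    keys = Pathname.new(file)
                   .relative_path_from(Pathname.new(source_dir))
                   .sub_ext('')
                   .each_filename.to_a
    *parents, leaf = keys
    # Walk (creating on demand) the nested hashes, then attach the content.
    node = parents.inject(document) { |hash, key| hash[key] ||= {} }
    node[leaf] = YAML.load_file(file)
  end
  document
end
```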

Dealing with the content of the paths folder is a little bit different — even simpler in a way. Here the path to a file reflects the actual path to the endpoint in our API. So instead of building new nested keys, we keep the path joined and use it as a single key, with the parsed content of the file itself as its value.

There is one exception here, though: the root path. In our folder structure it is represented by a directory with a dedicated name, and in the final document it obviously has to end up under “/”.
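A sketch covering both rules, assuming the endpoint files live under `paths/` with one file per HTTP verb, and that the directory standing in for the root path is named `root` (that name is my assumption):

```ruby
require 'yaml'
require 'pathname'

# Collects endpoint definitions from "<source_dir>/paths". The directory
# chain forms the URL and the file name is the HTTP verb, so a file at
# paths/pets/get.yaml becomes document['paths']['/pets']['get'].
# The directory name "root" standing in for "/" is an assumption.
def merge_paths(document, source_dir)
  paths_dir = Pathname.new(File.join(source_dir, 'paths'))
  document['paths'] ||= {}
  Dir.glob(File.join(source_dir, 'paths', '**', '*.yaml')).each do |file|
    *segments, verb_file = Pathname.new(file)
                                   .relative_path_from(paths_dir)
                                   .each_filename.to_a
    verb = File.basename(verb_file, '.yaml')
    # The exception: the "root" directory maps to the root path "/".
    url = segments == ['root'] ? '/' : "/#{segments.join('/')}"
    (document['paths'][url] ||= {})[verb] = YAML.load_file(file)
  end
  document
end
```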

Good. So, we are basically done. Let’s write our assembled specification to disk.
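For instance (the target directory name is again just a convention of this sketch):

```ruby
require 'yaml'
require 'json'
require 'fileutils'

# Serializes the assembled document twice: once as YAML and once as
# pretty-printed JSON, creating the target directory if needed.
def write_spec(document, target_dir)
  FileUtils.mkdir_p(target_dir)
  File.write(File.join(target_dir, 'openapi.yaml'), document.to_yaml)
  File.write(File.join(target_dir, 'openapi.json'), JSON.pretty_generate(document))
end
```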

The output lands in the target directory, which we make sure exists at the beginning of the script. Now you can take the result and paste it into, for example, Swagger Editor to check its validity and see a preview.

The last thing I want to share today is a way to publish the result. In my case it’s a Docker container. On Docker Hub there is an official build of Swagger UI. So, what we can do is take it as a base image and put our assembled spec in the right place.

Put this into a Dockerfile in the root directory of our documentation and build it as usual. The result is going to be served on port 8080.
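The Dockerfile can be as small as this sketch. The official `swaggerapi/swagger-ui` image reads the `SWAGGER_JSON` environment variable to locate the spec it serves; the `build/openapi.json` path is an assumption carried over from the earlier steps:

```dockerfile
FROM swaggerapi/swagger-ui
# Copy the assembled spec into the image and point Swagger UI at it.
COPY build/openapi.json /spec/openapi.json
ENV SWAGGER_JSON=/spec/openapi.json
```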

Here we are. The final version of the script can be found here. Done. You are awesome!

