RSS

API Definitions News

These are the news items I've curated in my monitoring of the API space that have some relevance to the API definition conversation, and that I wanted to include in my research. I'm using all of these links to better understand how the space is defining not just its APIs, but its schema, and the other moving parts of API operations.

What Are Your Enterprise API Capabilities?

I spend a lot of time helping enterprise organizations discover their APIs. All of the organizations I talk to have trouble knowing where all of their APIs are–even the most organized of them. Development and IT groups have just been moving too fast over the last decade to know where all of their web services and APIs are, resulting in large organizations not fully understanding what all of their capabilities are, even when those capabilities are actively operated, and drive existing web or mobile applications.

Each individual API within the enterprise represents a single capability–the ability to accomplish a specific enterprise task that is valuable to the business. While each individual engineer might be aware of the capabilities present on their team, without comprehensive, organization-wide API discovery, the extent of the enterprise capabilities is rarely known. If architects, business leadership, and other stakeholders can't browse, list, search, and quickly get access to all of the APIs that exist, the enterprise's capabilities can't be quantified or articulated as part of regular business operations.

In 2018, the capabilities of any individual API are articulated by its machine readable definition–most likely OpenAPI, but possibly API Blueprint, RAML, or another specification. For these definitions to speak to not just the technical capabilities of each individual API, but also the business capabilities, they will have to be complete, utilizing a higher level, strategic set of tags that label and organize each API into a meaningful set of business capabilities that best describe what each API delivers. This provides a sort of business capabilities taxonomy that can be applied to each API's definition and used across the rest of the API lifecycle, but most importantly as part of API discovery, and the enterprise digital product catalog.
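
To illustrate what this might look like in practice, here is a minimal sketch of an OpenAPI path carrying both technical and business capability tags–the API name, path, and taxonomy terms here are all hypothetical:

    swagger: "2.0"
    info:
      title: Customer Onboarding API
      version: 1.0.0
    paths:
      /accounts:
        post:
          summary: Open a new customer account
          tags:
            - Accounts                 # technical resource tag
            - Customer Acquisition     # business capability from the enterprise taxonomy
            - Know Your Customer       # regulatory capability from the enterprise taxonomy

With a shared taxonomy applied this way, a catalog can roll every API up into a browsable list of business capabilities, instead of just a list of endpoints.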

One of the first things I ask any enterprise organization I'm working with upon arriving is, "do you know where all of your APIs are?" The answer is always no. Many will have a web services or API catalog, but it is almost always out of date, and not used religiously across all groups. Even when there are OpenAPI definitions present in a catalog, they rarely contain the metadata needed to truly understand the capabilities of each API. This leaves developer and IT operations existing as black holes when it comes to enterprise capabilities, sucking up resources, but letting very little light out about what is happening on the inside. It becomes very difficult for developers, architects, and business users to articulate what their enterprise capabilities are, and they often end up reinventing the wheel when it comes to what the enterprise delivers on the ground each day.


The Layers Of Completeness For An OpenAPI Definition

Everyone wants their OpenAPIs to be complete, but what that really means depends on who you are, what your knowledge of OpenAPI is, and your motivation for having an OpenAPI in the first place. I wanted to take a crack at articulating a complete (enough) definition for the OpenAPIs I create, based upon what I need them to do.

Info & Base - Give the basic information I need to understand who is behind the API, and where I can access it.

Paths - Provide an entry for every path that is available for an API; each one should be included in this definition.

Parameters - Provide a complete list of all path, query, and header parameters that can be used as part of an API. https://gist.github.com/kinlane/29d0247d6ff4aaa39db4dc793df4a2f9

Descriptions - Flesh out descriptions for all of the paths and parameters, helping describe what an API does.

Enums - Publish a list of all the enumerated values that are possible for each parameter used as part of an API. https://gist.github.com/kinlane/444731f0214cab5efcc3ae77011823ba

Definitions - Document the underlying schema being returned by creating a JSON schema definition for the API.

Responses - Associate the definition for the API with the path using a response reference, connecting the dots regarding what will be returned.

Tags - Tag each path with a meaningful set of tags, describing what resources are available in short, concise terms and phrases.

Contacts - Provide contact information for whoever can answer questions about an API, and provide a URL to any support resources.

Create Security Definitions - Define the security for accessing the API, providing details on how each API request will be authenticated.

Apply Security Definitions - Apply the security definition to each individual path, associating common security definitions across all paths.

Complete(enough) - That should give us a complete (enough) API description, sketched out below.
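
Pulled together, a complete (enough) OpenAPI might look something like this minimal sketch–a hypothetical products API, using OpenAPI 2.0 syntax, with each of the layers above represented:

    swagger: "2.0"
    info:
      title: Acme Products API
      description: Search and retrieve products from the Acme catalog.
      version: 1.0.0
      contact:
        name: Acme API Team
        url: https://developer.acme-example.com/support
        email: api@acme-example.com
    host: api.acme-example.com
    basePath: /v1
    schemes:
      - https
    securityDefinitions:
      apiKey:
        type: apiKey
        name: x-api-key
        in: header
    paths:
      /products:
        get:
          summary: Search products
          description: Returns a paginated list of products matching the query.
          tags:
            - Products
          security:
            - apiKey: []
          parameters:
            - name: q
              in: query
              description: The keyword or phrase to search products by.
              type: string
            - name: status
              in: query
              description: Limit results to products with a specific status.
              type: string
              enum:
                - active
                - discontinued
          responses:
            '200':
              description: A list of matching products.
              schema:
                type: array
                items:
                  $ref: '#/definitions/product'
    definitions:
      product:
        type: object
        properties:
          id:
            type: string
          name:
            type: string
          status:
            type: string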

Obviously there is more we can do to make an OpenAPI even more complete and precise as a business contract, hopefully speaking to both developers and business people. Having OpenAPI definitions is important, and having them be up to date, complete (enough), and useful is even more important. OpenAPIs provide much more than documentation for an API. They provide all the technical details an API consumer will need to successfully work with an API.

While there are obvious payoffs for having an OpenAPI, like being able to publish documentation and generate code libraries, there are many other uses, like loading it into Postman, Stoplight, and many other API services and tooling that help developers understand what an API does, and reduce friction when they integrate with it and have to maintain their applications. Having an OpenAPI available is becoming a default mode of operation, and something every API provider should have.


A Quick Manual Way To Create An OpenAPI From A GET API Request

I have numerous tools that help me create OpenAPIs from the APIs I stumble across each day. Ideally I'm crawling, scraping, harvesting, and auto-generating OpenAPIs, but inevitably the process gets a little manual. To help reduce friction in these manual processes, I try to have a variety of services, tools, and scripts I can use to make my life easier when it comes to creating a machine readable definition of an API I am using.

One way I'll create an OpenAPI from a simple GET API request, providing me with a machine readable definition of the surface area of that API, is using Postman. When you have the URL copied onto your clipboard, open up Postman, and paste the URL with all the query parameters present.

You'll have to save your API request, and add it to a collection, but then you can choose to share the collection, and retrieve the URL to this specific request's Postman Collection.

This gives you a machine readable definition of the surface area of this particular API, defining the host, baseURL, path, and parameters, but it doesn't give me any detail about the underlying schema being returned. To begin crafting the schema for the underlying definition of the API, and connect it to the response for my API definition, I'll need an OpenAPI–which I can create from my Postman Collection using API Transformer from APIMATIC.

After pasting the URL for the Postman Collection into the API Transformer form, you can generate an OpenAPI from the definition. Now you have an OpenAPI, except it is missing the underlying schema. To fill that in, I grab the response from my last request, and convert it into JSON Schema using JSONSchema.net. I just grab the properties section of the result, as the definitions portion at the bottom of the OpenAPI specification is just JSON Schema.

I can merge my JSON Schema with my OpenAPI, adding it to the definitions collection at the bottom. With a little more love, adding a more coherent title and description, and fluffing up some of the summaries, descriptions, tags, etc., I now have a fairly robust profile of this particular API. Ideally, this is something the API provider would do, but in the absence of an OpenAPI or Postman Collection, this is a pretty quick and dirty way to produce an OpenAPI and Postman Collection from a simple GET API request–and the formula works for other types of API requests as well–leaving me with a machine readable definition for an API I will be integrating with.
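
As a sketch of that final merge, here is the shape of things using a hypothetical episodes API–the properties generated by JSONSchema.net land in the definitions collection, and get referenced from the response:

    {
      "paths": {
        "/episodes": {
          "get": {
            "responses": {
              "200": {
                "description": "A list of episodes.",
                "schema": { "$ref": "#/definitions/episode" }
              }
            }
          }
        }
      },
      "definitions": {
        "episode": {
          "type": "object",
          "properties": {
            "id": { "type": "integer" },
            "name": { "type": "string" },
            "airdate": { "type": "string" }
          }
        }
      }
    }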

There are definitely other ways of doing this, like scraping API documentation, or processing .HAR files generated from a proxy, but I think this is an approach that anyone, even a non-developer, can accomplish. I did my version in JSON, but the same process will work for YAML, making the resulting definition a little more human readable, while still maintaining its machine readability. I like documenting these little processes so that my readers can put them to use, but it also provides a nice definition that I can use to remember how I get things done–my memory isn't what it used to be.


Understanding The Event-Driven API Infrastructure Opportunity That Exists Across The API Landscape

I am at the Kong Summit in San Francisco all day tomorrow. I'm going to be speaking about my research into the event-driven architectural layers I've been mapping out across the API space, looking for the opportunity to augment existing APIs with push technology like webhooks, and streaming technology like SSE, as well as to pipe data in and out of Kafka, fill data lakes, and train machine learning models. I'll be sharing what I'm finding from some of the more mature API providers when it comes to their investment in event-driven infrastructure, focusing in on Twilio, SendGrid, Stripe, Slack, and GitHub.

As I profile APIs for inclusion in my API Stack research, and in the API Gallery, I create an APIs.json, OpenAPI, Postman Collection(s), and sometimes an AsyncAPI definition for each API. All of my API catalogs and API discovery collections use APIs.json + OpenAPI by default. One of the things I profile in each of my APIs.json indexes is the usage of webhooks as part of API operations. You can see collections of them that I've published to the API Gallery, aggregating many different approaches in what I consider to be the 101 of event-driven architecture, built on top of existing request and response HTTP API infrastructure. This allows me to better understand how people are doing webhooks, and to begin sketching out plans for a more event-driven approach to delivering resources, and managing activity on any platform that is scaling.

While studying APIs at this level you begin to see patterns across how providers are doing what they are doing, even amidst a lack of standards for things like webhooks. API providers emulate each other; it is how much of the API space has evolved in the last decade. You see patterns like how leading API providers are defining their event types–naming, describing, and allowing API consumers to subscribe to a variety of events, and receive webhook pings or pushes of data, as well as other types of notifications. This helps establish a vocabulary for defining the most meaningful events that are occurring across an API platform, and then provides an event-driven framework for subscribing to pushes of data when something occurs, as well as sustained API connections in the form of server-sent events (SSE), HTTP long polling, and other long running HTTP connections.
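
None of this is standardized yet, but to illustrate the pattern, here is a hypothetical sketch of the event type vocabulary and webhook subscription approach you see emerging across these providers:

    # Hypothetical event type vocabulary for a platform
    events:
      - name: invoice.created
        description: A new invoice was generated for a customer.
      - name: invoice.paid
        description: An invoice was successfully paid.
      - name: customer.updated
        description: A customer profile was changed.

    # A consumer subscribes a webhook to one or more event types
    subscription:
      url: https://example.com/webhooks/inbox
      events:
        - invoice.paid
        - customer.updated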

As I said, webhooks are the 101 of event-driven technology, and as API providers evolve in their journey you begin to see investment in the 201 level solutions like SSE, WebSub, and more formal approaches to delivering resources as real time streams and publish / subscribe solutions. Then you see platforms begin to mature and evolve into other 301 and beyond courses, with AMQP, Kafka, and oftentimes other Apache projects. Sure, some API providers begin their journey here, but many API providers have to ease into the world of event-driven architecture, getting their feet wet with managing their request and response API infrastructure, and slowly evolving with webhooks. Then as API operations harden, mature, and become more easily managed, API providers can confidently begin evolving into using more sophisticated approaches to delivering data where it needs to be, when it is needed.

From what I've gathered, the more mature API providers, who are further along in their API journey, have invested in some key areas, which has allowed them to continue investing in some other key ways:

  • Defined Resources - These API providers have their APIs well defined, with master planned designs for their suite of services, possessing machine readable definitions like OpenAPI, Postman Collections, and AsyncAPI.
  • Request / Response - They have fine tuned their approach to delivering their HTTP based request and response infrastructure, leaving it well defined.
  • Known Event Types - They have a handle on what is changing, and what the most important events are for API providers, as well as API consumers.
  • Push Technology - They have begun investing in webhooks, and other push technology, to make sure their API infrastructure is a two-way street, and they can easily push data out based upon any event.
  • Query Language - They understand the value of investing in a coherent querying strategy for their infrastructure, one that can work seamlessly with the defining, triggering, and overall management of event-driven infrastructure.
  • Stream Technology - They have a solid understanding of what data changes most frequently, as well as the topics people are most interested in, and augment push technology with streaming subscriptions that consumers can tap into.

At this point in their journey, these API providers are successfully operating a full suite of event-driven solutions that can be tapped internally, and externally by partners and other 3rd party developers. They are probably already investing in Kafka and other Apache projects, and getting more sophisticated with their event-driven API orchestration. Request and response API infrastructure is well documented with OpenAPI, and groups are looking at event-driven specifications like AsyncAPI to continue to ensure all resources, messages, events, topics, and other moving parts are well defined.

I'll be showcasing the event-driven approaches of Twilio, SendGrid, Stripe, Slack, and GitHub at the Kong Summit tomorrow. I'll also be looking at streaming approaches by Twitter, Slack, SalesForce, and Xignite–which is just the tip of the event-driven API architecture opportunity I'm seeing across the existing API landscape. After mapping out several hundred API providers, and over 30K API paths using OpenAPI, and then augmenting and extending what is possible using AsyncAPI, you begin to see the event-driven opportunity that already exists out there. When you look at how API pioneers are investing in their event-driven approaches, it is easy to get a glimpse of what all API providers will be doing in 3-5 years, once they are further along in their API journey, and have continued to mature their approach to moving their valuable bits and bytes around using the web.


Please Refer The Engineer From Your API Team To This Story

I reach out to API providers on a regular basis, asking them if they have an OpenAPI or Postman Collection available behind the scenes. I am adding these machine readable API definitions to my index of APIs that I monitor, while also publishing them out to my API Stack research, the API Gallery, and APIs.io, working to get them published in the Postman Network, and syndicating them as part of my wider work as an OpenAPI member. However, even beyond my own personal needs for API providers to have a machine readable definition of their API, and helping them get more syndication and exposure for their API, having a definition present significantly reduces friction when on-boarding with their APIs at almost every stop along a developer's API integration journey.

One of the API providers I reached out to recently responded with this: "I spoke with one of our engineers and he asked me to refer you to https://developer.[company].com/". Ok. First, I spent over 30 minutes there just the other day, learning about what you do, reading through documentation, and thinking about what was possible–which I referenced in my email. At this point I'm guessing that the engineer in question doesn't know what an OpenAPI or Postman Collection is, doesn't understand the impact these specifications are having on the wider API ecosystem, and lastly, I'm guessing they don't have any idea who I am (ego taking control). All of which provides me with the signals I need to make an assessment of where any API is in their overall journey. It demonstrates to me that they have a long ways to go when it comes to understanding the wider API landscape in which they are operating, and they are too busy to really come out of their engineering box and help their API consumers truly be successful in integrating with their platform.

I see this a lot. It isn't that I expect everyone to understand what OpenAPI and Postman Collections are, or even know who I am. However, I do expect people doing APIs to come out of their boxes a little bit, and be willing to maybe Google a topic before responding to a question, or maybe Google the name of the person they are responding to. I don't use a gmail.com address to communicate, I am using apievangelist.com, and if you are using a solution like Clearbit, or another business intelligence solution, you should always be retrieving some basic details about who you are communicating with, before you ever respond. That is, you do all of this kind of stuff if you are truly serious about operating your API, helping your API consumers be more successful, and taking the time to provide them with the resources they need along the way–things like an OpenAPI, or Postman Collections.

Ok, so why was this response so inadequate?

  • No API Team Present - It shows me that your company doesn't have any humans there to support the humans that will be using your API. My email went from general support, to a backend engineer who doesn't care about who I am, or about what I need. This is a sign of what the future will hold if I actually bake their API into my applications–I don't need my questions lost between support and engineering, with no dedicated API team to talk to.
  • No Business Intelligence - It shows me that your company has put zero thought into the API business model, on-boarding, and support process. Which means you do not have a feedback loop established for your platform, and your API will always be deficient of the nutrients it needs to grow. Always make sure you conduct a lookup based upon the domain or Twitter handle of your consumers, to get the context you need to understand who you are talking to.
  • Stuck In Your Bubble - You aren’t aware of the wider API community, and the impact OpenAPI, and Postman are having on the on-boarding, documentation, and other stops along the API lifecycle. Which means you probably aren’t going to keep your platform evolving with where things are headed.

Ok, so why should you have an OpenAPI and Postman Collection?

  • Reduce Onboarding Friction - As a developer I won't always have the time to spend absorbing your documentation. Let me import your OpenAPI or Postman Collection into my client tooling of choice, register for a key, and begin making API calls in seconds or minutes. Make learning about your API a hands-on experience, something I'm not going to get from your static documentation.
  • Interactive API Documentation - Having a machine readable definition for your API allows you to easily keep your documentation up to date, and make it a more interactive experience. Rather than just reading your API documentation, I should be able to make calls, see responses, errors, and other elements I will need to truly understand what you do. There are plenty of open source interactive API documentation solutions that are driven by OpenAPI and Postman, but you’d know this if you were aware of the wider landscape.
  • Generate SDKs, and Other Code - Please do not make me hand code the integration with each of your API endpoints, crafting each request and response manually. Allow me to autogenerate the most mundane aspects of integration, allowing an OpenAPI or Postman Collection to act as the integration contract.
  • Discovery - Please don’t expect your potential consumers to always know about your company, and regularly return to your developer.[company].com portal. Please make your APIs portable so that they can be published in any directory, catalog, gallery, marketplace, and platform that I’m already using, and frequent as part of my daily activities. If you are in my Postman Client, I’m more likely to remember that you exist in my busy world.

These are just a few of the basics of why this type of response to my question was inadequate, and why you'd want to have OpenAPI and Postman Collections available. My experience on-boarding will be similar to that of other developers, it just happens that the applications I'm developing are out of the normal range of web and mobile applications you have probably been thinking about when publishing your API. But this is why we do APIs, to reach the long tail of users, and encourage innovation around our platforms. I just stepped up and gave 30 minutes of my time (now 60 minutes with this story) to learning about your platform, and pointing me to your developer.[company].com page was all you could muster in return?

Just like other developers will, if I can’t onboard with your API without friction, and I can’t tell if there is anyone home, and willing to give me the time of day when I have questions, I’m going to move on. There are other platforms that will accommodate me. The other downside of your response, and me moving on to another platform, is that now I’m not going to write about your API on my blog. Oh well? After eight years of blogging on APIs, and getting 5-10K page views per day, I can write about a topic or industry, and usually dominate the SEO landscape for that API search term(s) (ego still has control). But…I am moving on, no story to be told here. The best part of my job is there are always stories to be told somewhere else, and I get to just move on, and avoid the friction wherever possible when learning how to put APIs to work.

I just needed this single link, to this story, to send back in response before I moved on!


Provide Your API Developers With A Forkable Example of API Documentation In Action

The other day I wrote about how teams should be documenting their APIs when they have both legacy and new APIs. I wanted to keep the conversation thread going with an example of one possible API documentation implementation. The best way to deliver API documentation guidance in any organization is to provide a forkable, downloadable example of whatever you are talking about. To help illustrate what I am talking about, I wanted to take one documentation solution, and publish it as a GitHub repository.

I chose to go with a simple OpenAPI 3.0 defined API contract, driving Swagger UI API documentation, hosted using GitHub Pages, and managed as a GitHub repository. In my story about how teams should be documenting their APIs, I provided several API definition formats, and API documentation options–for this walk-through I wanted to narrow it down to a single combination, providing the minimum(alist) viable option possible using OpenAPI 3.0 and Swagger UI. Of course, any federal agency implementing such a solution should wrap the documentation with their own branding, similar to the City Pairs API prototype out of GSA, which originated over at CFPB.

I used the VA Facilities API definition from the developer.va.gov portal for this sample. Mostly because it was ready to go, and relevant to the VA efforts, but also because it was using OpenAPI 3.0–I think it is worth making sure all API documentation moving forward is supporting the latest version of OpenAPI. The API documentation is here, the OpenAPI definition is here, and the GitHub repository is here, showing what is possible. There are plenty of other things I'd like to see in a baseline API documentation template, but this provides a good first draft for a true minimum viable definition.
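
For reference, here is a stripped down sketch of the kind of OpenAPI 3.0 contract that sits at the center of a repository like this–a hypothetical facilities resource, not the actual VA definition:

    openapi: 3.0.0
    info:
      title: Facilities API
      description: Provides access to a directory of facilities.
      version: 1.0.0
    servers:
      - url: https://api.example.gov/v1
    paths:
      /facilities:
        get:
          summary: List facilities
          tags:
            - Facilities
          responses:
            '200':
              description: A list of facilities.
              content:
                application/json:
                  schema:
                    type: array
                    items:
                      $ref: '#/components/schemas/Facility'
    components:
      schemas:
        Facility:
          type: object
          properties:
            id:
              type: string
            name:
              type: string

Swagger UI simply points at this file, so keeping the contract up to date keeps the documentation up to date.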

The goal with this project is to provide a basic seed that any team could use. Next, I will add in some other building blocks, and implement ReDoc, DapperDox, and WSDLDoc versions, providing four separate documentation examples that developers can fork and use to document the APIs they are working on. In my opinion, one or more API documentation templates like this should be available for teams to fork or download and implement within any organization. All API governance guidance like this should have the text describing the policy, as well as one or many examples of the policy being delivered. Hopefully this project shows an example of this in action.


May Contain Nuts: The Case for API Labeling by Erik Wilde (@dret), API Academy (@apiacademy)

We are getting closer to the 9th edition of APIStrat happening in Nashville, TN this September 24th through 26th. The schedule for the conference is up, along with the first lineup of keynote speakers, and my drumbeat of stories about the event continues here on the blog. Next up in our session lineup is "May Contain Nuts: The Case for API Labeling" by Erik Wilde (@dret), API Academy (@apiacademy) on September 25th.

Here is Erik's background to set the stage for his session:

Erik is a frequent speaker at both industry and academia events. In his current role at the API Academy, his work revolves around API strategy, design, and management, and how to help organizations with their digital transformation. Based on his extensive background in Web architecture and technologies, Erik combines deep expertise in protocols and representations with insights into API practices at today’s organizations.

Before joining API Academy and working in the API space full-time, Erik spent time at Siemens and EMC, in both cases looking at ways APIs could be used for their internal service ecosystems, as well as at better ways for customers to use services and products. Before that, Erik spent most of his life in academia, working at UC Berkeley and ETH Zürich. Erik received his Ph.D. in computer science from ETH Zürich, and his diploma in computer science from TU Berlin.

Erik knows his stuff, and can be found on the road with the CA API Academy, making this stop in Nashville, TN a pretty special opportunity. You can register for the event here, and there are still sponsorship opportunities available. Don't miss out on APIStrat this year–it is going to be a good time in Nashville as we continue the conversation we started back in 2012 with the initial edition of the API industry event in New York City.

I am looking forward to seeing you all in Nashville next month!


How Should Teams Be Documenting Their APIs When You Have Both Legacy And New APIs?

I'm continuing my work to help the Department of Veterans Affairs (VA) move forward their API strategy. One area I'm happy to help the federal agency with is just being available to answer questions, which I also find makes for great stories here on the blog–helping other federal agencies learn along the way. One question I got from the agency recently is regarding how teams should be documenting their APIs, taking into consideration that many of them are supporting legacy services like SOAP.

From my vantage point, minimum viable API documentation should always include a machine readable definition, and some autogenerated documentation within a portal at a known location. If it is a SOAP service, WSDL is the format. If it is REST, OpenAPI (fka Swagger) is the format. If it is XML-RPC, you can bend OpenAPI to work. If it is GraphQL, it should come with its own definitions. All of these machine readable definitions should exist within a known location, and be used as the central definition for the documentation user interface. Documentation should not be hand generated anymore, given the wealth of open source API documentation solutions available.

Each service should have its own GitHub/BitBucket/GitLab repository with the following:

  • README - Providing a concise title and description for the service, as well as links to all documentation, definitions, and other resources.
  • Definitions - Machine readable API definitions for the API's underlying schema, and the surface area of the API.
  • Documentation - Autogenerated documentation for the API, driven by its machine readable definition.

Depending on the type of API being deployed and managed, there should be one or more of these definition formats in place:

  • Web Services Description Language (WSDL) - The XML-based interface definition used for describing the functionality offered by the service.
  • OpenAPI - The YAML or JSON based OpenAPI specification format managed by the OpenAPI Initiative as part of the Linux Foundation.
  • JSON Schema - The vocabulary that allows for the annotation and validation of the schema for the service being offered–it is part of the OpenAPI specification as well.
  • Postman Collections - JSON based specification format created and maintained by the Postman client and development environment.
  • API Blueprint - The markdown based API specification format created and maintained by the Apiary API design environment, now owned by Oracle.
  • RAML - The YAML based API specification format created and maintained by Mulesoft.

Ideally, OpenAPI / JSON Schema is established as the primary format for defining the contract for each API, but teams should also be able to stick with what they were given (legacy), and run with the tools they’ve already purchased (RAML & API Blueprint), and convert between specifications using API Transformer.

API documentation should be published to its GitHub/GitLab/BitBucket repository, and hosted using one of the services' static project site solutions, using one of the following open source documentation options:

  • Swagger UI - Open source API documentation driven by OpenAPI.
  • ReDoc - Open source API documentation driven by OpenAPI.
  • RAML - Open source API documentation driven by RAML.
  • DapperDox - Open source API documentation providing rich, out-of-the-box rendering of your OpenAPI specifications, seamlessly combined with your GitHub flavoured Markdown documentation, guides and diagrams.

There are other open source solutions available for auto-generating API documentation using the core API's definition, but these represent the leading solutions out there. Depending on the solution being used to deploy or manage an API, there might be built-in, ready to go options for deploying documentation based upon the OpenAPI, WSDL, or RAML, using AWS API Gateway, Mulesoft, or another vendor solution already in place to support API operations.

Even with all this effort, a repository with a machine readable API definition and autogenerated documentation still doesn't provide enough of a baseline for API teams to follow. Each API's documentation should possess the following within those building blocks (see the sketch after this list):

  • Title and Description - Provide the concise description of what an API does from the README, and make sure it is baked into the API's definition.
  • Base URL - Have the base URL, or variable representation for a base URL present in API definitions.
  • Base Path - Provide any base path that is constant across paths available for any single API.
  • Content Types - List what content types an API accepts and returns as part of its operations.
  • Paths - List all available paths for an API, with summary and descriptions, making sure the entire surface area of an API is documented.
  • Parameters - Provide details on the header, path, and query parameters used for each API path being documented.
  • Body - Provide details on the schema for the body of each API path that accepts a body as part of its operations.
  • Responses - Provide HTTP status code and reference to the schema being returned for each path.
  • Examples - Provide example requests and responses for each API path being documented.
  • Schema - Document all schema being used as part of requests and responses for all API paths being documented.
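
As a sketch of where each of those building blocks lives in an OpenAPI 3.0 contract, using a hypothetical orders API:

    openapi: 3.0.0
    info:
      title: Orders API                    # title and description, matching the README
      description: Create and manage orders.
      version: 1.0.0
    servers:
      - url: https://api.example.com/v2    # base URL and base path
    paths:
      /orders:
        post:
          summary: Create an order
          parameters:
            - name: x-request-id           # header parameter
              in: header
              description: Unique identifier for the request.
              schema:
                type: string
          requestBody:                     # body, with content type and schema
            content:
              application/json:
                schema:
                  $ref: '#/components/schemas/order'
          responses:
            '201':                         # HTTP status code
              description: The order that was created.
              content:
                application/json:
                  schema:
                    $ref: '#/components/schemas/order'
                  example:                 # example response
                    id: "1234"
                    status: pending
    components:
      schemas:
        order:                             # schema shared across requests and responses
          type: object
          properties:
            id:
              type: string
            status:
              type: string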

If EVERY API possesses its own repository, and a README to get going, guiding all API consumers to complete, up to date, and informative documentation that is auto-generated, a significant amount of friction during the on-boarding process can be eliminated. Additionally, friction at the time of hand-off for any service from one team to another, or one vendor to another, will be significantly reduced–with all relevant documentation available within the project's repository.

API documentation delivered in this way provides a single known location for any human to go when putting an API to work. It also provides a single known location to find a machine readable definition that can be used to on-board using an API client like Postman, PAW, or Insomnia. The API definition provides the contract for the API documentation, but it also provides what is needed across other stops along the API lifecycle, like monitoring, testing, SDK generation, security, and client integration–reducing the friction across many stops along the API journey.

This should provide a baseline for API documentation across teams, no matter how big or small the API, or how new or old the API is. Each API should have documentation available in a consistent, and usable way, providing a human and programmatic way of understanding what an API does, that can be used to on-board and maintain integrations with each application. The days of PDF and static API documentation are over, and the baseline for each API's documentation always involves having a machine readable contract at the core, and managing the documentation as part of the pipeline used to deploy and manage the rest of the API lifecycle.


The Importance Of Postman API Environment Files

I’m a big fan of Postman, and the power of their development environment, as well as their Postman Collection format. I think their approach to not just integrating with APIs, but also enabling the development and delivery of APIs has shifted the conversation around APIs in the last couple of years–not too many API service providers accomplish this in my experience. There are several dimensions to what Postman does that I think are pushing the API conversation forward, but one that has been capturing my attention lately are Postman Environment Files.

Using Postman, you can manage many different environments used for working with APIs, and if you are a pro or enterprise customer, you can export a file that represents an environment, making each of these environment definitions more portable and collaborative. Managing the variety of environments for the hundreds of APIs I use is one of the biggest pain points I have. Postman has significantly helped me get a handle on the tokens and keys I use across the internal, as well as partner and public APIs that I depend on each day to operate API Evangelist.

Postman environments allow me to define environments within the Postman application, and then share them as part of the pro / enterprise team experience. You can also manage your environments through the Postman API, if you need to more deeply integrate with your operations. The Postman Environment File makes all of this portable and shareable across teams. It is one of the things that makes Postman Collections more valuable to some users, in specific contexts, because of that run time aspect to what they do. Postman lets you communicate effectively around the APIs you are deploying and integrating with, and solves relevant pain points, like API environment management, that can stand in the way of integration.
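
As a simplified sketch of what one of these exported files carries–the keys, names, and values here are all hypothetical:

    {
      "name": "Acme Production",
      "_postman_variable_scope": "environment",
      "values": [
        {
          "key": "baseUrl",
          "value": "https://api.acme-example.com/v1",
          "enabled": true
        },
        {
          "key": "apiKey",
          "value": "REPLACE-WITH-YOUR-KEY",
          "enabled": true
        }
      ]
    }

A Postman Collection then references these values as {{baseUrl}} and {{apiKey}} variables, letting the same collection run against development, staging, or production by just switching environments.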

There aren't many features of API service providers I get very excited about, but the potential of Postman as an environment management solution is significant. If Postman is able to establish itself as the broker of credentials at the API environment level, it will give them a significant advantage over other service providers. With the size of their developer base, having visibility at the environment level puts their finger on the pulse of what is going on in the API economy, from both an API provider and consumer perspective, with Postman Environment Files acting as a sort of key, or currency, that has to exist before any API transaction can be executed. And, as the number of APIs we depend on increases, the importance of having a strategy (and solution) for managing our environments will grow exponentially–putting Postman in a pretty sweet position.


Any Way You Want It: Extending Swagger UI for Fun and Profit by Kyle Shockey (@kyshoc) of SmartBear Software (@SmartBear) At @APIStrat In Nashville

We are getting closer to the 9th edition of APIStrat happening in Nashville, TN this September 24th through 26th. The schedule for the conference is up, along with the first lineup of keynote speakers, and my drumbeat of stories about the event continues here on the blog. Next up in our session lineup is “Any Way You Want It: Extending Swagger UI for Fun and Profit” by Kyle Shockey (@kyshoc) of SmartBear Software (@SmartBear) on September 25th.

Here is Kyle’s abstract for the session:

Your APIs are tailored to your needs - shouldn’t your tools be as well? In this talk, we’ll explore how Swagger UI 3 makes it easier than ever to create custom functionality, and common use cases for the power that the UI’s plugin system provides.

Learn how to:

  • Create plugins that extend existing features and define new functionality
  • Integrate Swagger UI seamlessly by defining a custom layout
  • Package and share plugins that can be reused by the community (or your organization)

Swagger UI has changed the conversation around how we document our APIs, and being able to extend the interface is an important part of keeping the API documentation conversation evolving, and APIStrat is where this type of discussion is happening. You can register for the event here, and there are still sponsorship opportunities available. Don’t miss out on APIStrat this year–it is going to be a good time in Nashville as we continue the conversation we started back in 2012 with the initial edition of the API industry event in New York City.

I am looking forward to seeing you all in Nashville next month!


Searching For APIs That Possess Relevant Company Information

I'm evolving the search for the Streamdata.io API Gallery I've been working on lately. I'm looking to move beyond the basic keyword search of the API name and description, as well as the API path, summary, and description, to also searching parameters in a meaningful way. Each of the APIs in the Streamdata.io API Gallery has an OpenAPI definition–it is how I render each of the individual API paths using Jekyll and GitHub Pages. These parameters give me another dimension of data which I can index, and use as a facet in my API gallery search.

I am developing different sets of vocabulary to help me search against the parameters used across APIs, with one of them being focused on company related information. I'm trying to find APIs that provide the ability to add, update, and search against company related data and content, and execute algorithms that help make sense of company resources. There is no perfect way to search for API parameters that touch on company resources, but right now I'm looking for a handful of fields: company, organization, business, enterprise, agency, ticker, corporate, and employer. I return APIs that have a parameter with any of those words in the path or summary, weighting them differently if the word is in the description or tags for each API path.
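
As a sketch of how I'm thinking about this, here is what one of these vocabularies might look like as a simple definition–the weights are hypothetical, and something I'm still tuning:

    # Hypothetical company vocabulary used to score API parameters
    vocabulary: company
    words:
      - company
      - organization
      - business
      - enterprise
      - agency
      - ticker
      - corporate
      - employer
    weights:
      path: 3
      summary: 3
      description: 2
      tags: 1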

Next, I'm also tagging each API path that has a URL field, because this will allow me to connect the dots to a company, organization, or other entity via the domain. That is all I'm trying to do: connect the dots using the parameter structure of an API. I find that there is an important story being told at the API design layer, and API search and discovery is how we are going to bring this story out. Connecting the dots at the corporate level is just one of many interesting stories out there, just waiting to be told, pushing forward the conversation around how we understand the corporate digital landscape, and what resources are available.

You can do a basic API search at the bottom of the Streamdata.io API Gallery main page. I do not have my parameter search available publicly yet. I want to spend more time refining my vocabularies, and also look at searching the request and response bodies for each path–I'm guessing this won't be as straightforward as parameters have been. Right now I'm immersed in understanding the words we use to design our APIs, and craft our API documentation. It is fascinating to see how people describe their resources, and how they think (or don't think) about making these resources available to other people. OpenAPI definitions provide a fascinating way to look at how APIs are opening up access to company information, establishing the digital vocabulary for how we exchange data and content, and apply algorithms to help us better understand the business world around us.


It Is Hard To Go API Define First

Last year I started saying API define first, instead of API design first, in response to many of the conversations out there about designing, then mocking, and eventually deploying your APIs into a production environment. I agree that you should design and iterate before writing code, but I feel like we should be defining our APIs even before we get to the API design phase. Without the proper definitions on the table, our design phase is going to be a lot more deficient in standards, common patterns, goals, and objectives, making it important to invest some energy in defining what is happening first–then iterating on the API definitions throughout the API lifecycle, not just design.

I prefer to have a handful of API definitions drafted before I move on to the API design phase:

  • Title - A simple, concise title for my API.
  • Description - A simple, concise description for my API.
  • JSON Schema - A set of JSON schema for my APIs.
  • OpenAPI - An OpenAPI for the surface area of my API.
  • Assertions - A list of what my API should be delivering.
  • Standards - What standards are being applied with this API.
  • Patterns - What common web patterns will be used with this API.
  • Goals - What are the goals for this particular API.

I like having all of this in a GitHub repository before I get to work actually designing my APIs. It provides me with the base set of definitions I need to be as effective as I can in my API design phase. Of course, each of these definitions will be iterated upon, added to, and evolved as part of the API design phase, and beyond. The goal is to just get a base set of building blocks on the workbench, properly setting the tone for what my API will be doing, grounding my API work early on in the API lifecycle, in a consistent way that I can apply across many different APIs.
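
As a sketch of what this seed looks like, here is a hypothetical manifest for one of these repositories, pulling the definitions above together in a single place–the names and values are all illustrative:

    # Hypothetical seed manifest for a new API repository
    title: Company Directory API
    description: Provides search access to a directory of companies.
    definitions:
      json-schema: schema/company.json
      openapi: openapi/company-directory.yaml
    assertions:
      - Every response returns JSON.
      - Every path requires an API key.
    standards:
      - HTTP/1.1
      - JSON Schema
    patterns:
      - Pagination using page and per_page query parameters.
    goals:
      - Provide a simple, consistent way to search companies.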

The problem with all of this is that it is easier said than done. I still like to hand code my APIs. It is something I've been doing for 20 years, and it is a habit that is hard to kick. When designing an API, oftentimes I do not know what is possible, and I need to hack on the solution for a while. I need to hack on and massage some data or content, or push forward my algorithm a little. All of this has to happen before I can articulate what the interface will look like. Sure, I might have some basic RESTful notions about what the API paths will be, and the schema I've gathered will drive some of the conversation, but I still need to hack together a little goodness before I can design.

This is ok. With some APIs I will be able to define and then design without ever touching any code, while with others I will still have to prototype at least a function to prove the concept behind the API. Once I have the proof of concept, then I can start crafting a sensible interface using OpenAPI, then mock, and work with the concept a little more within an API design phase. Ultimately, I do not think there is any RIGHT WAY to develop an API. I think there are healthier, and less healthy ways. I think there are more hardened, and proven ways, but I also think there should be experimental, and even legacy ways of doing things. My goal is to always make sure the process is as sensible and pragmatic as it can be, while meeting the immediate, and long term business goals of my company, as well as my partners.


Describing Your API with OpenAPI 3.0 by Anthony Eden (@aeden), DNSimple (@dnsimple) At @APIStrat In Nashville

We are getting closer to the 9th edition of APIStrat happening in Nashville, TN this September 24th through 26th. The schedule for the conference is up, along with the first lineup of keynote speakers, and my drumbeat of stories about the event continues here on the blog. Next up in our session lineup is “Describing Your API with OpenAPI 3.0” by Anthony Eden (@aeden), DNSimple (@dnsimple) on September 25th.

Here is Anthony's abstract for the session:

For the last 10 years, DNSimple has operated a comprehensive web API for buying, connecting, and operating domain names. After hearing about OpenAPI at APIStrat 2017, we decided to describe the DNSimple API using the OpenAPI v3 specification - this is the story of why we did it, how we did it, and where we are today.

By the end of this presentation you will have the tools you'll need to evaluate your own API and decide if implementing OpenAPI makes sense for you, and if so, how you can get started. You'll have a better understanding of the tools available to you to help write your OpenAPI 3 definition, as well as the basics of how to write your own definition for your APIs.

We are all still working to make the switch from OpenAPI 2.0 to 3.0, and with APIStrat being owned and operated by the OpenAPI Initiative, it will definitely be the place to have face to face discussions that influence the road map for the API specification. You can register for the event here, and there are still sponsorship opportunities available. Don’t miss out on APIStrat this year–it is going to be a good time in Nashville as we continue the conversation we started back in 2012 with the initial edition of the API industry event in New York City.

I am looking forward to seeing you all in Nashville next month!


Working With My OpenAPI Definitions In An API Editor Helps Stabilize Them

I'm deploying three new APIs right now, using a new experimental serverless approach I'm evolving. One is a location API, another provides API access to companies, and the third involves working with patents. I will be evolving these three simple web APIs to meet the specific needs of some applications I'm building, but then I will also be selling retail and wholesale access to each API once they've matured enough. With all three of these APIs, I began with a simple JSON schema from the data source, which I used to generate three rough OpenAPI definitions that will act as the contract seeds for my three services.

Once I had three separate OpenAPI contracts for the services I was delivering, I wanted to spend some time hand designing each of the APIs before importing them into AWS API Gateway, generating Lambda functions, loading them into Postman, and using them to support other stops along the API lifecycle. I still use a localized version of Swagger Editor for my OpenAPI design space, but I'm working to migrate to OpenAPI-GUI as soon as I can. I still very much enjoy the side by side design experience in Swagger Editor, but I want to push forward the GUI side of the conversation, while still retaining quick access to the raw OpenAPI for editing.

One of the reasons why I still use Swagger Editor is because of the schema validation it does behind the scenes. Which is one of the reasons I need to learn more about Speccy, as it is going to help me decouple validation from my editor, and allow me to use it as part of my wider governance strategy, not just at design time. However, for now I am highly dependent on my OpenAPI editor helping me standardize and stabilize my OpenAPI definitions, before I use them along other stops along the API lifecycle. These three APIs I'm developing are going straight to deployment, because they are simple datasets, where I'm the only consumer (for now), but I still need to make sure my API contract is solid before I move to other stops along the API lifecycle.

Right now, loading up an OpenAPI in Swagger Editor is the best sanity check I have. Not just making sure everything validates, but also making sure it is all coherent, and renders into something that will make sense to anyone reviewing the contract. Once I've spent some time polishing the rough corners of an OpenAPI, adding summaries, descriptions, tags, and other detail, I feel like I can begin using it to generate mocks, deploy in a gateway, and begin managing access to each API, as well as the documentation, testing, monitoring, and other stops using the OpenAPI contract. This makes this manual stop in the evolution of my APIs a pretty critical one for helping me stabilize each API's definition before I move on. Eventually, I'd like to automate the validation and governance of my APIs at scale, but for now I'm happy just getting a handle on it as part of this API design stop along my lifecycle.


We Need Your Help Moving The AsyncAPI Specification Forward

We need your help moving the AsyncAPI specification forward. Ok, first, what is the AsyncAPI specification? "The AsyncAPI Specification is a project used to describe and document Asynchronous APIs. The AsyncAPI Specification defines a set of files required to describe such an API. These files can then be used to create utilities, such as documentation, integration and/or testing tools." AsyncAPI is a sister specification to OpenAPI, but instead of describing the request and response HTTP API landscape, AsyncAPI describes the message, topic, event, and streaming API landscape across both HTTP and TCP. It is how we are going to continue to ensure there are machine readable descriptions of this portion of the API landscape, for use in tooling and services.
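
For a sense of what the specification looks like, here is a minimal sketch loosely based on the project's streetlights example–a single topic, with the message payload defined at the bottom, mirroring how OpenAPI handles schema:

    asyncapi: '1.2.0'
    info:
      title: Streetlights API
      version: '1.0.0'
    topics:
      light.measured:
        publish:
          $ref: '#/components/messages/lightMeasured'
    components:
      messages:
        lightMeasured:
          summary: Inform about environmental lighting conditions.
          payload:
            type: object
            properties:
              lumens:
                type: integer
                description: Light intensity measured in lumens.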

My friend Fran Mendez (@fmvilas) is the creator and maintainer of the specification, and he is doing way too much of the work on this important specification by himself–he needs our help. Here is Fran's request for contributions:

AsyncAPI is an open source project that’s currently maintained by me, with no company or funds behind. More and more companies are using AsyncAPI and the work needed is becoming too much work for a single person working in his spare time. E.g., for each release of the specification, tooling and documentation should be updated. One could argue that I should be dedicating full time to the project, but it’s in this point where it’s too much for spare time and very little to get enough money to live. I want to keep everything for free, because I firmly believe that engineering must be democratized. Also, don’t get me wrong, this is not a complaint. I’m going to continue running the project either with or without contributors, because I love it. This is just a call-out to you, the AsyncAPI lover. I’d be very grateful if you could lend a hand, or even raise your hand and become a co-maintainer. Up to you 😊

On the other hand, I only have good words for all of you who use and/or contribute to the project. Without you, it would be just another crazy idea from another crazy developer 😄

Thank you very much! 🙌

– Fran Mendez

When it comes to contributing to the AsyncAPI, Fran has laid out some pretty clear ways in which he needs our help, providing a range of options for you to pitch in and help, depending on what your skills are, and the bandwidth you have in your day.

1. The specification - There is always work to do in the spec. It goes from fixing typos to writing and reviewing new proposals. I try to keep releases small, to give time to tooling authors to update their software. If you want to start contributing, take a look at https://github.com/asyncapi/asyncapi/issues, pick one, and start working on it. It's always a good idea to leave a comment in the issue saying that you're going to work on it, just so other people know about it.

2. Tooling - As developers, this is sometimes the most straightforward way to contribute, adding features to the existing tools, or creating new ones if needed. Examples of tools are:

  • Code generators (multiple languages):
    • https://github.com/asyncapi/generator
    • https://github.com/asyncapi/node-codegen (going to be deprecated soon in favor of https://github.com/asyncapi/generator)
  • Documentation generators (multiple formats):
    • https://github.com/asyncapi/generator
    • https://github.com/asyncapi/docgen (going to be deprecated soon in favor of https://github.com/asyncapi/generator)
    • https://github.com/Mermade/widdershins
    • https://github.com/asyncapi/asyncapi-node
    • https://github.com/asyncapi/editor
  • Validation CLI tool (nobody implemented it yet)
  • API mocking (nobody implemented it yet)
  • API gateways (nobody implemented it yet)

As always, usually the best way to contribute is to pick an issue and chat about it before you create a pull request.

3. Evangelizing - Sometimes the best way to help a project like AsyncAPI is to simply talk about it. It can be inside your company, at a technology meetup, or speaking at a conference. I'll be happy to help with whatever material you need to create, or with arguments to convince your colleagues that using AsyncAPI is a good idea 😊

4. Documentation - Oh, documentation! We're trying to convince people that documenting your message-driven APIs is a good idea, but we lack documentation, especially in tooling. This is often a task nobody wants to do, but the best way to get great knowledge about a technology is to write documentation about it. It doesn't need to be rewriting the whole documentation from scratch, but just identifying the questions you had when you started using it, and documenting them.

5. Tutorials - We learn by example. It's a fact. Write tutorials on how to use AsyncAPI in your blog, on Medium, etc. As always, count on me if you need ideas or help while writing or reviewing.

6. Stories - You have a blog and write about the technology you use? Writing about success stories, how-to's, etc., really helps people find the project, and decide whether they should bet on AsyncAPI or not.

7. Podcasts/Videos - You have a YouTube channel or your own podcast? Talk about AsyncAPI. Tutorials, interviews, informal chats, discussions, panels, etc. I'll be happy to help with any material you need, or with finding the right person for your interview.

I’m going to take the liberty and add an 8th option, because I’m so straightforward when it comes to this game, and I know where Fran needs help.

8. Money - AsyncAPI needs investment to help push forward, allowing Fran to carve out time, work on tooling, and pay for travel expenses when it comes to attending events and getting the word out about what it does. There is no legal entity set up for AsyncAPI, but I'm sure with the right partner(s) behind it, we can make something happen. Step up.

AsyncAPI is important. We all need to jump in and help. I've been investing as many cycles as I can in learning about the specification, and telling stories about why it is important. I've been working hard to learn more about it so I can contribute to the roadmap. I'm using it as one of the key definition formats driving my Streamdata.io API Gallery work, which is all driven using APIs.json and OpenAPI, and provides Postman Collections, as well as AsyncAPI definitions when a message, topic, event, or streaming API is present. AsyncAPI is where OpenAPI (Swagger) was in 2011/2012, and with more investment, and a couple more years of adoption and maturing, it will be just as important for working with the evolving API landscape as OpenAPI and Postman Collections are.

If you want to get involved with AsyncAPI, feel free to reach out to me. I’m happy to help you get up to speed on why it is so important. I’m happy to help you understand how it can be applied, and where it fits in with your API infrastructure. You are also welcome to just dive in, as Fran has done an amazing job of making sure everything is available in the Github organization for the project, where you can submit pull requests, and issues regarding whatever you are working on and contributing. Thanks for your help in making AsyncAPI evolve, and something that will continue to help us understand, quantify, and communicate about the diverse API landscape.


TVMaze Uses HAL For Their API Media Type

One of the layers of the API universe where I come across an increased number of hypermedia APIs is in the movie, television, and entertainment space. It is a domain where a more flowing API experience makes a lot of sense, and the extra investment in link relations will pay off. One example of this I recently came across was over at TVMaze, who has a pretty robust hypermedia API, where they opted for using HAL as their media type.

Like any good hypermedia API should, TVMaze begins with its root URL: http://api.tvmaze.com, and provides a robust set of endpoints from there:

  • Search
  • Schedule
  • Shows
  • People
  • Updates
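
To give a sense of how a client navigates an API like this, here is a minimal sketch in Python, assuming the root resource returns a HAL document with the standard _links object. The exact payload and relation names TVMaze uses may differ:

```python
import requests

# Fetch the root resource (URL from above). In HAL, the available link
# relations live under the reserved "_links" key, each with an "href"
# the client can follow -- no hardcoded paths needed.
root = requests.get("http://api.tvmaze.com").json()

for rel, link in root.get("_links", {}).items():
    print(rel, "->", link.get("href"))

# A client follows relations by name instead of constructing URLs itself,
# which is what lets the provider evolve paths without breaking consumers.
# The "shows" relation here is hypothetical, for illustration only.
shows_link = root.get("_links", {}).get("shows")
if shows_link:
    shows = requests.get(shows_link["href"]).json()
```

This is the whole point of link relations: the client only ever needs to know the root URL and the relation names, and the rest of the navigation is driven by the API itself.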

The TVMaze API isn’t an overly complex hypermedia API. I think it is simple, elegant, and shows how you can use link relations to establish a more meaningful experience for API consumers. It lets the API client do the heavy lifting of navigating the large, ever-changing catalog of television shows, schedules, and the people involved with each production.

There hasn’t been enough showcasing of the hypermedia APIs available out there. Usually once a year I remember to give the subject some attention, or when I come across interesting ones like TVMaze. Hypermedia isn’t just an academic idea anymore; it has gotten traction in a number of sectors, and I keep seeing signs of growth and adoption. I don’t think it will be the API solution most hypermedia believers envisioned it to be, but I do think it is a viable tool in our API toolbox, and for the right projects it makes a lot of sense.


If A Search For Swagger or OpenAPI Does Not Yield Results I Try For A Postman Collection Next

While profiling any company, a couple of the Google searches I will execute right away are for “[Company Name] Swagger” and “[Company Name] OpenAPI”, hoping that a provider is progressive enough to have published an OpenAPI definition–saving me hours of work understanding what their API does. I’ve added a third search to my toolbox: if these other two searches do not yield results, I search for “[Company Name] Postman”, revealing whether or not a company has published a Postman Collection for their API–another sign of a progressive, outward thinking API provider in my book.

A machine readable definition for an API tells me more about what a company, organization, institution, or government agency does than anything else I can dig up on their website or social media profiles. An OpenAPI definition or Postman Collection is a much more honest view of what an organization does than the marketing blah blah that is often available on a website. That makes machine readable definitions something I look for almost immediately, and I prioritize profiling, reviewing, and understanding the entities that have one over those that do not. I only have so much time in a day, and I will prioritize an entity with an OpenAPI or Postman Collection over those without.

The presence of an OpenAPI and / or Postman Collection isn’t just about believing in the tooling benefits these definitions provide. It is about API providers thinking externally about their API consumers. I’ve met a lot of API providers who are dismissive of these machine readable definitions as trends, which demonstrates they aren’t paying attention to the wider API space, and aren’t thinking about how they can make their API consumers’ lives easier–they are focused on doing what they do. In my experience these API programs tend not to grow as fast, or focus on the needs of their integrators and consumers, and often get shut down after they don’t get the results they thought they’d see. APIs are all about having that outward focus, and the presence of an OpenAPI and Postman Collection is a sign that a provider is looking outward.

While I’m heavily invested in OpenAPI (I am a member), I’m also invested in Postman. More importantly, I’m invested in supporting well defined APIs that provide solutions to developers. When an API has an OpenAPI for delivering mocks, documentation, testing, monitoring, and other solutions, and they provide a Postman Collection that allows you to get up and running making API calls in seconds or minutes, instead of hours or days–it is an API I want to know more about. These searches have become the deciding factor between whether I will continue profiling and reviewing an API, or just flag it for future consideration and move on to the next API in the queue. I can’t keep up with the number of APIs I have in my queue, and it is signals like this that help me prioritize my world, and get my work done on a regular basis.


People Do Not Use Tags In Their OpenAPI Definitions

I import and work with a number of OpenAPI definitions that I come across in the wild. When I come across a version 1.2, 2.0, or 3.0 OpenAPI, I import it into my API monitoring system for publishing as part of my research. After the initial import of any OpenAPI definition, the first thing I look for is consistency in the naming of paths, and the availability of summaries, descriptions, and tags. The naming conventions used in paths are all over the place, some cleaner than others. Most have a summary, fewer have descriptions, but I’d say about 80% of them do not have any tags available for each API path.

Tags for each API path are essential to labeling the value a resource delivers. I’m surprised that API providers don’t see the need for applying these tags. I’m guessing it is because they don’t have to work with many external APIs, and really haven’t put much thought into other people working with their OpenAPI definition beyond it just driving their own documentation. Many people still see OpenAPI as simply a driver of API documentation on their portal, and not as an API discovery, or complete lifecycle solution that is portable beyond their platform. They aren’t considering how tags applied to each API resource will help others index, categorize, and organize APIs based upon the value each one delivers.

I have a couple of algorithms that help me parse the path, summary, and description to generate tags for each path, but it is something I’d love for API providers to think more deeply about. It goes beyond just the resources available via each path, and the tags should reflect the overall value an API delivers. If it is a product, event, messaging, or other resource, I can extract a tag from the path, but the path doesn’t always provide a full picture, and I regularly find myself adding more tags to each API (if I have the time). This means that many of the APIs I’m profiling, and adding to my API Stack, API Gallery, and other work, aren’t as complete with metadata as they possibly could be. It is something API providers should be more aware of, and helping define as part of their hand crafting, or auto-generation of OpenAPI definitions.
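
To illustrate the kind of algorithm I am describing, here is a minimal sketch in Python (not my production code) that derives candidate tags from the static path segments of an OpenAPI definition loaded with PyYAML:

```python
import yaml

def candidate_tags(openapi_file):
    """Derive rough candidate tags from each path's static segments."""
    with open(openapi_file) as f:
        spec = yaml.safe_load(f)

    tags = {}
    for path in spec.get("paths", {}):
        # Keep static segments, skipping parameters like {id}, and treat
        # each remaining segment as a candidate tag for the path.
        segments = [s for s in path.split("/") if s and not s.startswith("{")]
        tags[path] = sorted({s.lower() for s in segments})
    return tags

# Example output: {'/products/{id}/events': ['events', 'products'], ...}
print(candidate_tags("openapi.yaml"))  # file name is hypothetical
```

As I said, the path only gets you part of the way there, and a human still needs to layer in the business value each API delivers.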

It is important for API providers to see their OpenAPI definitions as more than just a localized, static feature of their platforms, and as a portable definition that will be used by 3rd party API service providers, as well as their API consumers. They should be linking their OpenAPI prominently from their API documentation, not hiding it behind the JavaScript voodoo that generates their docs. They should be making sure their OpenAPI definitions are as complete as they possibly can be, with as much metadata as possible, describing the value each API delivers. They should be loading their OpenAPI definitions up into a variety of API design, documentation, discovery, testing, and other tooling to see what they look like and how they behave. API providers will find that tags are beginning to be used for much more than just the grouping of paths in API documentation–they are how gateways organize resources, how management solutions define monetization and billing, and what API discovery solutions use to drive their API search, to point out just a couple of the ways in which they are used.

Tag your APIs as part of your OpenAPI definitions! I know that many API providers are still auto-generating them from a system, but once you have published the latest copy, make sure you load it up in one of the leading API design tools, and give it that last little bit of polish. Think of it as the last bit of API editorial workflow that ensures your API definitions speak to the widest possible audience, and are as coherent as they possibly can be. Your API definitions tell a story about the resources you are making available, and the tags provide a much more precise way to programmatically interpret what your APIs actually deliver. Without them, APIs might not properly show up in search engine and Github searches, or render coherently in other API services and tooling. OpenAPI tags are an essential part of defining and organizing your API resources–give them the attention they deserve.


Using OpenAPI And JSON PATCH To Articulate Changes For Your API Road Map

I’m doing a lot of thinking regarding how JSON PATCH can be applied because of my work with Streamdata.io. When you proxy an existing JSON API with Streamdata.io, after the initial response, every update sent over the wire is articulated as a JSON PATCH update, showing only what has changed. It is a useful way to show what has changed with any JSON API response, while being very efficient about what you transmit with each update, reducing polling, and taking advantage of HTTP caching.

As I’m writing an OpenAPI diff solution, helping me understand the differences between OpenAPI definitions I’m importing, and what has changed over time, I can’t help but think that JSON PATCH would be a great way to articulate changes to the surface area of an API over time–that is, if everyone loyally used OpenAPI as their API contract. Providing an OpenAPI diff using JSON PATCH would be a great way to articulate an API road map, and tooling could be developed around it to help API providers publish their road map to their portal, and push out communications with API consumers. It would help everyone understand exactly what is changing in a way that could be integrated into existing services, tooling, and systems–making change management a more real time, “pipelinable” (making this word up) affair.
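
As a rough illustration of the concept, the Python jsonpatch package can diff two iterations of an OpenAPI definition and emit the RFC 6902 operations describing what changed. A minimal sketch, with the file names being hypothetical:

```python
import json
import jsonpatch

# Load two iterations of the same API contract.
with open("openapi-v1.json") as f:
    old = json.load(f)
with open("openapi-v2.json") as f:
    new = json.load(f)

# make_patch() produces a JSON PATCH document describing exactly what
# was added, removed, or replaced between the two contracts -- a machine
# readable change log entry, or road map item.
patch = jsonpatch.make_patch(old, new)
print(patch.to_string())
```

Each operation in the resulting patch is a potential road map or change log entry that tooling could route to portals, notifications, and CI/CD pipelines.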

I feel like this could help API providers better understand and articulate what might be breaking changes. There could be tooling and services that help quantify the scope of changes during the road map planning process, and teams could submit OpenAPI definitions before they ever get to work writing code, helping them better see how changes to the API contract will impact the road map. Then the same tooling and services could be used to articulate the road map to consumers, as the road map becomes approved, developed, and ultimately rolled out. With each OpenAPI JSON PATCH moving from road map to change log, keeping all stakeholders up to speed on what is happening across all API resources they depend on–documenting everything along the way.

I am going to think more about this as I evolve my own API lifecycle. How I can iterate a version of my OpenAPI definitions, evaluate the differences, and articulate each update using JSON PATCH. Since more of my API lifecycle is machine readable, I’m guessing I’m going to be able to use this approach beyond just the surface area of my API. I’m going to be able to use it to articulate the changes in my API pricing and plans, as well as licensing, terms of service, and other evolving elements of my operations. It is a concept that will take some serious simmering on the back burners of my platform, but a concept I haven’t been able to shake. So I might as well craft some stories about the approach, and see what I can move forward as I continue to define, design, and iterate on the APIs that drive my platform and API research forward.


Not Liking OpenAPI (fka Swagger) When You Have No Idea What It Does

People love to hate in the API space. Ok, I guess it’s not exclusive to the API space, but it is a significant aspect of the community. I receive a regular amount of people hating on my work, for no reason at all. I also see people doing it to others in the API space on a regular basis. It always makes me sad to see, and I have always worked to be as nice as I can to counteract the male negativity and competitive tone that often exists. While I feel bad for the people on the receiving end of all of this, I often feel bad for the people on the giving end as well, as they are often not the most informed and up to speed folks, and seem to enjoy opening their mouths before they understand what is happening.

One thing I notice regularly is that these same people like to bash on OpenAPI (fka Swagger). I regularly see people (still) say how bad of an idea it is, and how it has done nothing for the API space. One common thread I see with these folks, which prevents me from saying anything to them, is that it is clear they really don’t have an informed view of what OpenAPI is. Most people spend a few minutes looking at it, maybe read a few blog posts, and then establish their opinions about what it is, or what it isn’t. I regularly find people who are using it as part of their work who don’t actually understand the scope of the specification and tooling, so when someone is being vocal about it and doesn’t actually use it, it is usually clear pretty quickly how uninformed they are about the specification, tooling, and scope of the community.

I’ve been tracking on it since 2011, and I still have trouble finding OpenAPI specifications, and grasping all of the ways it is being used. When you are a sideline pundit, you are most likely seeing about 1-2% of what OpenAPI does–I am a full time pundit in the game and I see about 60%. The first sign that someone isn’t up to speed is that they still call it Swagger. The second sign is they often refer to it as documentation. Thirdly, they often refer to code generation with Swagger as a failure. All three of these views date someone’s understanding to about a 2013 level. If someone is forming assumptions and opinions, and making business decisions about OpenAPI, and being public about it, I’d hate to see what the rest of their technology views look like. In the end, I don’t even feel like picking on them, or challenging their assumptions, because their regular world is probably already kicking their ass on a regular basis–no assistance is needed.

I do not feel OpenAPI is the magical solution to fix all the challenges of the API space, but it does help reduce friction at almost every stop along the API lifecycle. In my experience, 98% of the people who are hating on it do not have a clue what OpenAPI is, or what it does. I used to challenge folks, and try to educate them. Over the years I’ve converted a lot of folks from skeptics to believers, but in 2018, I think I’m done. If someone is openly criticizing it, I’m guessing it is more about their relationship to tech, and their lack of awareness of delivering APIs at scale, and they probably exist in a pretty entrenched position because of their existing view of the landscape–they don’t need me piling on. However, if people aren’t aware of the landscape, and ask questions about how OpenAPI works, I’m always more than happy to help open their eyes to how the API definition is serving almost every stop along the API lifecycle from design to deprecation, and everything in between.


The Importance Of OpenAPI Tooling

In my world, OpenAPI is always a primary actor, and the tooling and services that put it to work are always secondary. However, I’d say that 80% of the people I talk with are the opposite, putting OpenAPI tooling in the primary role, and the OpenAPI specification in a secondary one. This is the primary reason that many still see Swagger tooling as the value, and haven’t made the switch to the concept of OpenAPI, or understood the separation between the specification and the tooling.

Another way in which you can see the importance of OpenAPI tooling is the slow migration of users from OpenAPI 2.0 to 3.0. Many folks I’ve talked to about OpenAPI 3.0 tell me that they haven’t made the jump because of the lack of tooling available for the latest version of the specification. This isn’t always about the external services and tooling that support OpenAPI 3.0; it is also about the internal tooling that supports it. It demonstrates the importance of tooling when it comes to the evolution and adoption of OpenAPI, and the need for the OAI community to keep investing in the development and evangelism of tooling for the latest version.

I am going to work to invest more time into rounding up OpenAPI tooling, and getting to know the developers behind it, as I prepare for APIStrat in Nashville, TN. I’m also going to invest in my own migration to OpenAPI 3.0. The reason I haven’t evolved isn’t a lack of tooling, it is a lack of time, and the cognitive load involved with thinking in new ways. I fully grasp the differences between 2.0 and 3.0, but I just don’t have the intuitive knowledge of 3.0 that I do for 2.0. I’ve spent hundreds of hours developing around 2.0, and I just don’t have the time in my schedule to make a similar investment in 3.0–soon!

If you need to get up to speed on the latest when it comes to OpenAPI 3.0 tooling I recommend checking out OpenAPI.Tools from Matt Trask (@matthewtrask) and Crashy McCiderface (aka Phil Sturgeon) (@philsturgeon). It is the best source of OpenAPI tooling out there right now. If you are still struggling with the migration from 2.0 to 3.0, or would like to see a specific solution developed on top of OpenAPI 3.0, I’d love to hear from you. I’m working to help shape the evolution of the OpenAPI tooling conversation, as well as tell stories about what tools are available, or should be available, and how they can be put to work on the ground at companies, organizations, institutions, and government agencies.


Opportunity For OpenAPI-Driven Open Source Testing, Performance, Security, And Other Modules

I’ve been on five separate government related projects lately where finding modular, OpenAPI-driven open source tooling has been a top priority. All of these projects are microservice-focused and OpenAPI-driven, and the teams are investing significant amounts of time looking for open source tools that will help with design governance, monitoring, testing, and security, and that interact with the Jenkins pipeline. All of it is about helping government agencies find success as their API journey picks up speed, and the number of APIs grows exponentially.

Selling to the federal government can be a long journey in itself. They can’t always use the SaaS solutions many of us fire up to get the job done in our startup or enterprise lives. Increasingly, government agencies are depending on open source solutions to help them move projects forward. Every agency I’m working with is using OpenAPI (Swagger) to drive their API lifecycle. While not all have gone design (define) first, they are using OpenAPI definitions as the contract for mocking, documentation, testing, monitoring, and security. The teams I’m working with are investing a lot of energy looking for, vetting, and testing out different open source modules on Github–with varying degrees of success.
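
As one small example of the kind of modular, pipeline-friendly tooling these teams are looking for, a contract validation step can be a short Python script run from Jenkins. A sketch, assuming the openapi-spec-validator package (the exact entry point varies by version):

```python
import sys
import yaml
from openapi_spec_validator import validate_spec  # entry point may vary by version

# Fail the pipeline build if the contract is not a valid OpenAPI document.
with open(sys.argv[1]) as f:
    spec = yaml.safe_load(f)

try:
    validate_spec(spec)
except Exception as err:
    print(f"OpenAPI validation failed: {err}")
    sys.exit(1)

print("OpenAPI contract is valid.")
```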

Ideally, there would be an OpenAPI-driven marketplace, or federated set of marketplaces like OpenAPI.Tools. I’ve had one for a while, but haven’t kept it up to date–I will invest some time / resources into it soon. My definition of an OpenAPI tool marketplace is that it is OpenAPI-driven, and open source. I’m fine with there being other marketplaces of OpenAPI-driven services, but I want a way to get at just the actively maintained open source tools. When it comes to serving government this is an important, and meaningful distinction. I’d also like to encourage many of the project owners to ensure there is CI/CD integration, as well as make sure their projects are actively supported, and that they are willing to entertain commercial implementations.

While there won’t always be direct commercial opportunities for open source tooling owners to engage with federal agencies, there will be through contractors and subcontractors. Working for federal agencies is a maze of forms and hoop jumping, but working with contractors can be pretty straightforward if you find the right ones. I don’t think you will get rich developing OpenAPI-driven tooling that serves the API lifecycle, but I think with the right solutions, support, and team behind them, you can make a decent living developing them. Especially as the lifecycle expands, and the number of services being delivered grows, the need for specialized, OpenAPI-driven tools to apply across the API lifecycle is only going to increase. Making it something I’ll be writing more stories about as I hear more from the API trenches.

I’m going to try and spend time working with Phil Sturgeon (@philsturgeon) and Matt Trask (@matthewtrask) on OpenAPI.Tools, as well as give my own toolbox some love. If you have an open source OpenAPI-driven tool you’d like to get some attention, feel free to ping me, and make sure it’s part of OpenAPI.Tools. Also, if you have a directory, catalog, or marketplace of tools you’d like to showcase, ping me as well–I’m all about supporting diversity of choice in the space. I have multiple federal agencies’ ears right now when it comes to delivering along the API lifecycle, and I’m happy to point agencies and their contractors to specific tools, if it makes sense. Like I said, there won’t always be direct revenue opportunities, but these are implementations that will undoubtedly lead to commercial opportunities in the form of consulting, advising, and development work with the contractors and subcontractors who are delivering on federal agency contracts.


Looking At 20 Microservices In Concert

I checked out the Github repositories for twenty microservices belonging to one of my clients recently, looking to understand what is being accomplished across all these services as they work independently toward a single collective objective. I’m being contracted to come in blindly and provide feedback on the design of the APIs being exposed across the services, and to help provide guidance on their API lifecycle, as well as eventually API governance when things have matured to that level. Right now we are addressing pretty fundamental definition and design issues, but hopefully we’ll eventually graduate to the next level.

A complete and up to date README for each microservice is essential to understanding what is going on with a service, and a robust OpenAPI definition is critical to breaking down the details of what each API delivers. When you aren’t part of each service’s development team it can be difficult to understand what each service does, but with an up to date README and OpenAPI, you can get up to speed pretty quickly. If a service is well documented via its README, its API is well designed, and its surface area is reflected in its OpenAPI, you can go from not knowing what a service does to understanding its value within minutes, hopefully, not hours.

When each service possesses an OpenAPI, it becomes possible to evaluate what they deliver at scale. You can take all the APIs, with their paths, headers, parameters, and schema, and organize them in different ways so that you can begin to paint a picture of what they deliver in aggregate. Bringing all the disparate services back together to perform together in a sort of monolith concert, while still acknowledging they all do their own thing independently. Allowing us to look at how many different services can be used in concert to deliver a single application, or potentially a variety of application instances. Thinking critically about each independent service, but more importantly how they all work together.
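
A minimal sketch of what I mean by evaluating services at scale, assuming each service keeps its contract at a predictable location like services/*/openapi.yaml (the layout here is hypothetical):

```python
import glob
from collections import Counter

import yaml

# Aggregate paths and tags across every service's OpenAPI definition,
# so the collection can be evaluated as a whole, not one repo at a time.
tag_counts = Counter()
total_paths = 0

for contract in glob.glob("services/*/openapi.yaml"):
    with open(contract) as f:
        spec = yaml.safe_load(f)
    for path, operations in spec.get("paths", {}).items():
        total_paths += 1
        for operation in operations.values():
            if isinstance(operation, dict):  # skip path-level parameter lists
                tag_counts.update(operation.get("tags", []))

print(f"{total_paths} paths across all services")
print("capabilities by tag:", tag_counts.most_common(10))
```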

I feel like many groups are still struggling with decomposing their monolithic systems into separate services, and while some are doing so in a domain-driven way, few are beginning to invest in understanding how they move forward with services in concert to deliver on application needs. Many of the groups I’m working with are so focused on decomposing and tearing down, they aren’t thinking too critically about how they will make all of this begin to work together again. I see monolith systems working like a massive church organ, which takes a lot of maintenance, and requires a single (or a handful of) knowledgeable operators to play. Microservices are much more like an orchestra, where every individual player has a role, but they play in concert, directed by a conductor. I feel like most groups I’m talking with are just beginning the process of hiring a conductor, and have a bunch of musicians roaming around–not quite ready to play any significant productions quite yet.


API Discovery is for Internal or External Services

The topic of API discovery has been picking up momentum in 2018. It is something I’ve worked on for years, but with the number of microservices emerging out there, it is something I’m seeing become a concern amongst providers. I am also seeing more potential vendor chatter, looking to provide more services and tooling to help alleviate API discovery pain. Even with all this movement, there is still a lot of education and discussion needed on the subject to help bring people up to speed on what API discovery is.

The most common view of API discovery is when you need to find an API for developing an application. You have a need for a resource in your application, and you need to look across your internal and partner resources to find what you are looking for. Beyond that, you will need to search for publicly available API resources, using Google, Github, ProgrammableWeb, and other common ways to find popular APIs. This is definitely the most prominent perspective when it comes to API discovery, but it isn’t the only dimension of the problem. There are several dimensions to this stop along the API lifecycle that I’d like to flesh out further, so that I can better articulate them across the conversations I am having.

Another area that gets lumped in with API discovery is the concept of service discovery, or how your APIs will find the backend services they use to make the magic happen. Service discovery focuses on the initial discovery, connectivity, routing, and circuit breaker patterns involved with making sure an API is able to communicate with any service it depends on. With the growth of microservices, a number of solutions like Consul have emerged, and cloud providers like AWS are evolving their own service discovery mechanisms. It provides one dimension of the API discovery conversation, but it is different from, and often confused with, front-end API discovery, and how developers and applications find services.
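
For contrast with the front-end view of discovery, here is what the backend service discovery side can look like against Consul’s HTTP catalog. A minimal sketch, assuming a local agent on the default port, with the service name being hypothetical:

```python
import requests

# Ask the local Consul agent for healthy instances of a service. The
# client gets back the addresses and ports it needs to route requests.
resp = requests.get(
    "http://localhost:8500/v1/health/service/billing-api",
    params={"passing": "true"},
)

for entry in resp.json():
    svc = entry["Service"]
    print(svc["Address"], svc["Port"])
```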

One of the least discussed areas of API discovery, but one that is picking up momentum, is finding APIs while you are developing APIs, to make sure you aren’t building something that has already been developed. I come across many organizations who have duplicate and overlapping APIs that do similar things, due to a lack of communication and a central directory of APIs. I’m getting asked by more groups about how they can conduct API discovery by default across organizations, sniffing out APIs from log files, on Github, and in other channels in use by existing development teams. Many groups just haven’t been good at documenting and communicating around what has been developed, and begin new projects without seeing what already exists–something that will only become a greater problem as the number of microservices grows.
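
Sniffing out APIs from log files can start as simply as scanning web server access logs for paths that look like API calls. A crude sketch, with the log file name and path heuristics being assumptions:

```python
import re
from collections import Counter

# Pull request paths out of a common-format access log and count anything
# that looks like an API call (heuristic: path starts with /api/ or /v1/,
# /v2/, and so on).
line_re = re.compile(r'"(?:GET|POST|PUT|PATCH|DELETE) (\S+)')
api_re = re.compile(r"^/(api|v\d+)/")

seen = Counter()
with open("access.log") as f:
    for line in f:
        match = line_re.search(line)
        if match and api_re.match(match.group(1)):
            seen[match.group(1).split("?")[0]] += 1  # drop query strings

print(seen.most_common(20))
```

Crude as it is, a script like this surfaces undocumented endpoints that never made it into a catalog, which is exactly the gap these groups are trying to close.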

The other dimension of API discovery I’m seeing emerge is discovery in the service of governance. Understanding what APIs exist across teams so that definitions, schema, and other elements can be aggregated, measured, secured, and governed. EVERY organization I work with is unaware of all the data sources, web services, and APIs that exist across their teams. Few want to admit it, but it is a reality. The reality is that you can’t govern or secure what you don’t know you have. Things get developed so rapidly, and baked into web, mobile, desktop, network, and device applications so regularly, that you just can’t see everything. Before companies, organizations, institutions, and government agencies are going to be able to govern anything, they are going to have to begin addressing the API discovery problem that exists across their teams.

API discovery is a discipline that is well over a decade old. It is one I’ve been actively working on for over 5 years. It is only now getting the discussion it needs, because it is a growing concern. It will become a major concern with each passing day of the microservice evolution. People are jumping on the microservices bandwagon without any coherent way to organize schema, vocabulary, or API definitions. Let alone any strategy for indexing, cataloging, sharing, communicating, and registering services. I’m continuing my work on APIs.json, and the API Stack, as well as pushing forward my usage of OpenAPI, Postman, and AsyncAPI, which all contribute to API discovery. I’m going to continue thinking about how we can publish open source directories, catalogs, and search engines, and even do some automated scanning of logs and other ways to conduct discovery in the background. Eventually, we will begin to find more solutions that work–it will just take time.


Making the OpenAPI Contract Friendlier For Developers and Business Stakeholders

I was in a conference session about an API design tool today, and someone asked if you could get at the OpenAPI definition behind the solution. They said yes, but quickly added that the definition is boring and that you don’t want to be in there, you want to be in the interface. I get that service providers want you to focus on their interface, but we shouldn’t be burying, or abstracting away, the API contract. We should always be educating people about it, and bringing it front and center in any service, tooling, or conversation.

Technology folks burying or devaluing the OpenAPI definition with business users is common, but I also see technology folks doing it to each other. Reducing OpenAPI to just another machine readable artifact alongside the other components of delivering API infrastructure today. I think this begins with people not understanding what OpenAPI is, but it is sustained by people’s views of what is technological magic that should remain in the hands of the wizards, and what should be accessible to a wider audience. If you limit who has access and knowledge, you can usually maintain a higher level of control–so they use your interface, in the case of a vendor, or they come to you to develop and build an API, in the case of a developer.

There is nothing in a YAML OpenAPI definition that business users won’t be able to understand. OpenAPIs aren’t any more boring than a Word document or spreadsheet. If you are a stakeholder in the service, you should be able to read, understand, and engage with the OpenAPI contract. If we teach people to be afraid of OpenAPI definitions we are repeating the past, and maintaining the canyon that can exist between business and IT/developer groups. If you are in the business of burying the OpenAPI definition, I’m guessing you don’t understand the portable API lifecycle potential of this API contract, and simply see it as a config, documentation, or other technical artifact. Or you are just in the business of maintaining control and power by being the gatekeeper for the API contract, similar to how we see database people defend their domain.

Please do not devalue or hide away the OpenAPI contract. It isn’t your secret sauce. It isn’t boring. It isn’t too technical. It is the contract for how a service will work, and it speaks to both business and technical groups. It is the contract that all the services and tools you will use along the API lifecycle will understand. It is fine to have the OpenAPI right behind the scenes, but always provide a button, link, or other way to quickly see the latest version, and definitely do not scare people away from it, or devalue it when you are talking. If you are doing APIs, you should be encouraging, and investing in, everyone being able to have a conversation around the API contract behind any service you are putting forward.


If you think there is a link I should have listed here feel free to tweet it at me, or submit as a Github issue. Even though I do this full time, I'm still a one person show, and I miss quite a bit, and depend on my network to help me know what is going on.