RSS

API Definitions News

These are the news items I've curated in my monitoring of the API space that have some relevance to the API definition conversation, and that I wanted to include in my research. I'm using all of these links to better understand how the space is defining not just their APIs, but their schema, and other moving parts of their API operations.

Describing Your API with OpenAPI 3.0 by Anthony Eden (@aeden), DNSimple (@dnsimple) At @APIStrat In Nashville

We are getting closer to the 9th edition of APIStrat happening in Nashville, TN this September 24th through 26th. The schedule for the conference is up, along with the first lineup of keynote speakers, and my drumbeat of stories about the event continues here on the blog. Next up in our session lineup is “Describing Your API with OpenAPI 3.0” by Anthony Eden (@aeden), DNSimple (@dnsimple) on September 25th.

Here is Anthony’s abstract for the session:

For the last 10 years, DNSimple has operated a comprehensive web API for buying, connecting, and operating domain names. After hearing about OpenAPI at APIStrat 2017, we decided to describe the DNSimple API using the OpenAPI v3 specification - this is the story of why we did it, how we did it, and where we are today.

By the end of this presentation you will have the tools you’ll need to evaluate your own API and decide if implementing OpenAPI makes sense for you, and if so, how you can get started. You’ll have a better understanding of the tools available to you to help write your OpenAPI 3 definition, as well as the basics of how to write your own definition for your APIs.

We are all still working to make the switch from OpenAPI 2.0 to 3.0, and with APIStrat being owned and operated by the OpenAPI Initiative, it will definitely be the place to have face to face discussions that influence the road map for the API specification. You can register for the event here, and there are still sponsorship opportunities available. Don’t miss out on APIStrat this year–it is going to be a good time in Nashville as we continue the conversation we started back in 2012 with the initial edition of the API industry event in New York City.

I am looking forward to seeing you all in Nashville next month!


Working With My OpenAPI Definitions In An API Editor Helps Stabilize Them

I’m deploying three new APIs right now, using a new experimental serverless approach I’m evolving. One is a location API, another provides API access to companies, and the third involves working with patents. I will be evolving these three simple web APIs to meet the specific needs of some applications I’m building, but then I will also be selling retail and wholesale access to each API once they’ve matured enough. With all three of these APIs, I began with a simple JSON schema from the data source, which I used to generate three rough OpenAPI definitions that will act as the contract seed for my three services.

Once I had three separate OpenAPI contracts for the services I was delivering, I wanted to spend some time hand-designing each of the APIs before I import them into AWS API Gateway, generate Lambda functions, load them into Postman, and use them to support other stops along the API lifecycle. I still use a localized version of Swagger Editor for my OpenAPI design space, but I’m working to migrate to OpenAPI-GUI as soon as I can. I still very much enjoy the side-by-side design experience in Swagger Editor, but I want to push forward the GUI side of the conversation, while still retaining quick access to the raw OpenAPI for editing.

One of the reasons why I still use Swagger Editor is the schema validation it does behind the scenes. Which is one of the reasons I need to learn more about Speccy, as it is going to help me decouple validation from my editor, and allow me to use it as part of my wider governance strategy, not just at design time. However, for now I am highly dependent on my OpenAPI editor helping me standardize and stabilize my OpenAPI definitions before I use them at other stops along the API lifecycle. These three APIs I’m developing are going straight to deployment, because they are simple datasets, where I’m the only consumer (for now), but I still need to make sure my API contract is solid before I move to other stops along the API lifecycle.

Right now, loading up an OpenAPI in Swagger Editor is the best sanity check I have. Not just making sure everything validates, but also making sure it is all coherent, and renders into something that will make sense to anyone reviewing the contract. Once I’ve spent some time polishing the rough corners of an OpenAPI, adding summaries, descriptions, tags, and other detail, I feel like I can begin using it to generate mocks, deploy in a gateway, and begin managing the access to each API, as well as the documentation, testing, monitoring, and other stops using the OpenAPI contract. This makes this manual stop in the evolution of my APIs a pretty critical one for helping me stabilize each API’s definition before I move on. Eventually, I’d like to automate the validation and governance of my APIs at scale, but for now I’m happy just getting a handle on it as part of this API design stop along my life cycle.
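To give a sense of what that polish looks like, here is a minimal sketch of one of these contracts after an editing pass. The paths, names, and schema are hypothetical stand-ins for my location API, not the actual definition:

```yaml
openapi: 3.0.0
info:
  title: Location API
  description: Simple read-only access to location data.
  version: 1.0.0
paths:
  /locations:
    get:
      summary: Search locations
      description: Returns a list of locations, filterable by country.
      operationId: searchLocations
      tags:
        - Locations
      parameters:
        - name: country
          in: query
          description: Two-letter ISO country code to filter by.
          schema:
            type: string
      responses:
        '200':
          description: A list of matching locations.
          content:
            application/json:
              schema:
                type: array
                items:
                  $ref: '#/components/schemas/Location'
components:
  schemas:
    Location:
      type: object
      properties:
        id:
          type: string
        name:
          type: string
        country:
          type: string
```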


We Need Your Help Moving The AsyncAPI Specification Forward

We need your help moving the AsyncAPI specification forward. Ok, first, what is the AsyncAPI specification? “The AsyncAPI Specification is a project used to describe and document Asynchronous APIs. The AsyncAPI Specification defines a set of files required to describe such an API. These files can then be used to create utilities, such as documentation, integration and/or testing tools.” AsyncAPI is a sister specification to OpenAPI, but instead of describing the request and response HTTP API landscape, AsyncAPI describes the message, topic, event, and streaming API landscape across both HTTP and TCP. It is how we are going to continue to ensure there are machine readable descriptions of this portion of the API landscape, for use in tooling and services.
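To make that concrete, here is a minimal sketch of what an AsyncAPI definition looks like, using the 1.x draft syntax as I understand it today, with a hypothetical broker and event:

```yaml
asyncapi: '1.0.0'
info:
  title: User Events API
  version: '1.0.0'
servers:
  - url: broker.example.com
    scheme: mqtt
topics:
  user.signedup:
    subscribe:
      $ref: '#/components/messages/userSignedUp'
components:
  messages:
    userSignedUp:
      summary: A user has signed up on the platform.
      payload:
        type: object
        properties:
          userId:
            type: string
          signedUpAt:
            type: string
            format: date-time
```

Just as an OpenAPI describes paths and responses, the AsyncAPI describes topics and the message payloads that flow across them.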

My friend Fran Mendez (@fmvilas) is the creator and maintainer of the specification, and he is doing way too much of the work on this important specification alone. He needs our help. Here is Fran’s request for contributions:

AsyncAPI is an open source project that’s currently maintained by me, with no company or funds behind. More and more companies are using AsyncAPI and the work needed is becoming too much work for a single person working in his spare time. E.g., for each release of the specification, tooling and documentation should be updated. One could argue that I should be dedicating full time to the project, but it’s in this point where it’s too much for spare time and very little to get enough money to live. I want to keep everything for free, because I firmly believe that engineering must be democratized. Also, don’t get me wrong, this is not a complaint. I’m going to continue running the project either with or without contributors, because I love it. This is just a call-out to you, the AsyncAPI lover. I’d be very grateful if you could lend a hand, or even raise your hand and become a co-maintainer. Up to you 😊

On the other hand, I only have good words for all of you who use and/or contribute to the project. Without you, it would be just another crazy idea from another crazy developer 😄

Thank you very much! 🙌

– Fran Mendez

When it comes to contributing to AsyncAPI, Fran has laid out some pretty clear ways in which he needs our help, providing a range of options for you to pitch in, depending on what your skills are, and the bandwidth you have in your day.

1. The specification: There is always work to do in the spec. It goes from fixing typos to writing and reviewing new proposals. I try to keep releases small, to give time to tooling authors to update their software. If you want to start contributing, take a look at https://github.com/asyncapi/asyncapi/issues, pick one, and start working on it. It’s always a good idea to leave a comment in the issue saying that you’re going to work on it, just so other people know about it.

2. Tooling: As developers, this is sometimes the most straightforward way to contribute. Adding features to the existing tools or creating new ones if needed. Examples of tools are:

  • Code generators (multiple languages):
    • https://github.com/asyncapi/generator
    • https://github.com/asyncapi/node-codegen (going to be deprecated soon in favor of https://github.com/asyncapi/generator)
  • Documentation generators (multiple formats):
    • https://github.com/asyncapi/generator
    • https://github.com/asyncapi/docgen (going to be deprecated soon in favor of https://github.com/asyncapi/generator)
    • https://github.com/Mermade/widdershins
    • https://github.com/asyncapi/asyncapi-node
    • https://github.com/asyncapi/editor
  • Validation CLI tool (nobody implemented it yet)
  • API mocking (nobody implemented it yet)
  • API gateways (nobody implemented it yet)

As always, the best way to contribute is to pick an issue and chat about it before you create a pull request.

3. Evangelizing: Sometimes the best way to help a project like AsyncAPI is to simply talk about it. It can be inside your company, at a technology meetup, or speaking at a conference. I’ll be happy to help with whatever material you need to create, or with arguments to convince your colleagues that using AsyncAPI is a good idea 😊

4. Documentation: Oh, documentation! We’re trying to convince people that documenting your message-driven APIs is a good idea, but we lack documentation ourselves, especially for tooling. This is often a task nobody wants to do, but the best way to gain deep knowledge of a technology is to write documentation about it. It doesn’t need to be rewriting the whole documentation from scratch; just identify the questions you had when you started using it, and document them.

5. Tutorials: We learn by example. It’s a fact. Write tutorials on how to use AsyncAPI on your blog, Medium, etc. As always, count on me if you need ideas or help while writing or reviewing.

6. Stories: Do you have a blog where you write about the technology you use? Writing about success stories, how-to’s, etc., really helps people find the project and decide whether they should bet on AsyncAPI or not.

7. Podcasts/Videos: Do you have a YouTube channel or your own podcast? Talk about AsyncAPI. Tutorials, interviews, informal chats, discussions, panels, etc. I’ll be happy to help with any material you need, or with finding the right person for your interview.

I’m going to take the liberty of adding an 8th option, because I’m so straightforward when it comes to this game, and I know where Fran needs help.

8. Money: AsyncAPI needs investment to help push it forward, allowing Fran to carve out time, work on tooling, and pay for travel expenses when it comes to attending events and getting the word out about what it does. There is no legal entity set up for AsyncAPI, but I’m sure with the right partner(s) behind it, we can make something happen. Step up.

AsyncAPI is important. We all need to jump in and help. I’ve been investing as many cycles as I can in learning about the specification, and telling stories about why it is important. I’ve been working hard to learn more about it so I can contribute to the roadmap. I’m using it as one of the key definition formats driving my Streamdata.io API Gallery work, which is all driven using APIs.json and OpenAPI, and provides Postman Collections as well as AsyncAPI definitions when a message, topic, event, or streaming API is present. AsyncAPI is where OpenAPI (Swagger) was in 2011/2012, and with more investment, and a couple more years of adoption and maturing, it will be just as important for working with the evolving API landscape as OpenAPI and Postman Collections are.

If you want to get involved with AsyncAPI, feel free to reach out to me. I’m happy to help you get up to speed on why it is so important, how it can be applied, and where it fits in with your API infrastructure. You are also welcome to just dive in, as Fran has done an amazing job of making sure everything is available in the Github organization for the project, where you can submit pull requests and open issues regarding whatever you are working on and contributing. Thanks for your help in evolving AsyncAPI into something that will continue to help us understand, quantify, and communicate about the diverse API landscape.


TVMaze Uses HAL For Their API Media Type

One of the layers of the API universe where I come across an increased number of hypermedia APIs is the movie, television, and entertainment space, where a more flowing API experience makes a lot of sense, and the extra investment in link relations will pay off. One example of this I recently came across was over at TVMaze, which has a pretty robust hypermedia API, where they opted for HAL as their media type.

Like any good hypermedia API should, TVMaze begins with its root URL: http://api.tvmaze.com, and provides a robust set of endpoints from there:

  • Search
  • Schedule
  • Shows
  • People
  • Updates

The TVMaze API isn’t an overly complex hypermedia API. I think it is simple, elegant, and shows how you can use link relations to establish a more meaningful experience for API consumers, allowing you to navigate the large, ever-changing catalog of television shows, and letting the API client do the heavy lifting of navigating the shows, schedules, and people involved with each production.
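To show what this looks like on the wire, here is an abridged, illustrative sketch of a HAL-style response for a single show; the values are placeholders, but the _links structure mirrors the kind of link relations the API returns (rendered as YAML here, though the wire format is JSON):

```yaml
id: 1
name: Example Show
status: Running
_links:
  self:
    href: http://api.tvmaze.com/shows/1
  previousepisode:
    href: http://api.tvmaze.com/episodes/12345
```

The client doesn’t need to construct URLs for related resources; it just follows the link relations it finds in each response.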

There hasn’t been enough showcasing of the hypermedia APIs available out there. Usually once a year I remember to give the subject some attention, or when I come across interesting ones like TVMaze. Hypermedia isn’t just an academic idea anymore; it is something that has gotten traction in a number of sectors, and I keep seeing signs of growth and adoption. I don’t think it will be the universal API solution most hypermedia believers envisioned, but I do think it is a viable tool in our API toolbox, and for the right projects it makes a lot of sense.


If A Search For Swagger or OpenAPI Does Not Yield Results I Try For A Postman Collection Next

While profiling any company, a couple of the Google searches I will execute right away are for “[Company Name] Swagger” and “[Company Name] OpenAPI”, hoping that a provider is progressive enough to have published an OpenAPI definition–saving me hours of work understanding what their API does. I’ve added a third search to my toolbox: if these other two searches do not yield results, I search for “[Company Name] Postman”, revealing whether or not a company has published a Postman Collection for their API–another sign of a progressive, outward thinking API provider in my book.

A machine readable definition for an API tells me more about what a company, organization, institution, or government agency does than anything else I can dig up on their website or social media profiles. An OpenAPI definition or Postman Collection is a much more honest view of what an organization does than the marketing blah blah that is often available on a website. This makes machine readable definitions something I look for almost immediately, and I prioritize profiling, reviewing, and understanding the entities I come across that have a machine readable definition over those that do not. I only have so much time in a day, and I will prioritize an entity with an OpenAPI or Postman Collection over those without.

The presence of an OpenAPI and/or Postman Collection isn’t just about believing in the tooling benefits these definitions provide. It is about API providers thinking externally about their API consumers. I’ve met a lot of API providers who are dismissive of these machine readable definitions as trends, which demonstrates they aren’t paying attention to the wider API space, and aren’t thinking about how they can make their API consumers’ lives easier–they are focused on doing what they do. In my experience these API programs tend to not grow as fast, don’t focus on the needs of their integrators and consumers, and often get shut down after they don’t get the results they expected. APIs are all about having that outward focus, and the presence of an OpenAPI and Postman Collection is a sign that a provider is looking outward.

While I’m heavily invested in OpenAPI (I am a member), I’m also invested in Postman. More importantly, I’m invested in supporting well defined APIs that provide solutions to developers. When an API has an OpenAPI for delivering mocks, documentation, testing, monitoring, and other solutions, and provides a Postman Collection that allows you to get up and running making API calls in seconds or minutes, instead of hours or days–it is an API I want to know more about. These searches have become the deciding factor between whether I will continue profiling and reviewing an API, or just flag it for future consideration and move on to the next API in the queue. I can’t keep up with the number of APIs I have in my queue, and it is signals like this that help me prioritize my world, and get my work done on a regular basis.


People Do Not Use Tags In Their OpenAPI Definitions

I import and work with a number of OpenAPI definitions that I come across in the wild. When I come across a version 1.2, 2.0, or 3.0 OpenAPI definition, I import it into my API monitoring system for publishing as part of my research. After the initial import of any OpenAPI definition, the first thing I look for is consistency in the naming of paths, and the availability of summaries, descriptions, and tags. The naming conventions used in paths are all over the place, some cleaner than others. Most have a summary, fewer have descriptions, but I’d say about 80% of them do not have any tags available for each API path.

Tags for each API path are essential to labeling the value a resource delivers. I’m surprised that API providers don’t see the need for applying these tags. I’m guessing it is because they don’t have to work with many external APIs, and really haven’t put much thought into other people working with their OpenAPI definition beyond it driving their own documentation. Many people still see OpenAPI as simply a driver of API documentation on their portal, and not as an API discovery, or complete lifecycle solution that is portable beyond their platform. They aren’t considering how tags applied to each API resource will help others index, categorize, and organize APIs based upon the value each one delivers.
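As a sketch of what I mean, here is a hypothetical pair of paths where the tags label the value being delivered, not just the literal resource name:

```yaml
paths:
  /companies:
    get:
      summary: Search companies
      tags:
        - Companies
        - Search
  /companies/{id}/patents:
    get:
      summary: List the patents held by a company
      tags:
        - Companies
        - Patents
        - Intellectual Property
```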

I have a couple of algorithms that help me parse the path, summary, and description to generate tags for each path, but it is something I’d love for API providers to think more deeply about. It goes beyond just the resources available via each path; the tags should reflect the overall value an API delivers. If it is a product, event, messaging, or other resource, I can extract a tag from the path, but the path doesn’t always provide a full picture, and I regularly find myself adding more tags to each API (if I have the time). This means that many of the APIs I’m profiling, and adding to my API Stack, API Gallery, and other work, aren’t as complete with metadata as they could possibly be. This is something API providers should be more aware of, and helping define as part of their hand crafting, or auto-generation, of OpenAPI definitions.

It is important for API providers to see their OpenAPI definitions as more than just a localized, static feature of their platforms, and as a portable definition that will be used by 3rd party API service providers, as well as their API consumers. They should be linking their OpenAPI prominently from their API documentation, not hiding it behind the JavaScript voodoo that generates their docs. They should be making sure OpenAPI definitions are as complete as they possibly can be, with as much metadata as possible, describing the value that the API delivers. They should be loading OpenAPI definitions into a variety of API design, documentation, discovery, testing, and other tooling to see what they look like and how they behave. API providers will find that tags are beginning to be used for much more than just grouping paths in API documentation–they are how gateways are organizing resources, how management solutions are defining monetization and billing, and what API discovery solutions are using to drive their API search solutions, to point out just a couple of the ways in which they are used.

Tag your APIs as part of your OpenAPI definitions! I know that many API providers are still auto-generating them from a system, but once you have published the latest copy, make sure you load it up in one of the leading API design tools, and give it that last little bit of polish. Think of it as the last bit of API editorial workflow that ensures your API definitions speak to the widest possible audience, and are as coherent as they possibly can be. Your API definitions tell a story about the resources you are making available, and the tags provide a much more precise way to programmatically interpret what your APIs actually deliver. Without them, APIs might not properly show up in search engine and Github searches, or render coherently in other API services and tooling. OpenAPI tags are an essential part of defining and organizing your API resources–give them the attention they deserve.


Using OpenAPI And JSON PATCH To Articulate Changes For Your API Road Map

I’m doing a lot of thinking regarding how JSON PATCH can be applied because of my work with Streamdata.io. When you proxy an existing JSON API with Streamdata.io, after the initial response, every update sent over the wire is articulated as a JSON PATCH update, showing only what has changed. It is a useful way to show what has changed with any JSON API response, while being very efficient about what you transmit with each update, reducing polling, and taking advantage of HTTP caching.

As I’m writing an OpenAPI diff solution, helping me understand the differences between OpenAPI definitions I’m importing, and allowing me to understand what has changed over time, I can’t help but think that JSON PATCH would be a great way to articulate changes to the surface area of an API over time–that is, if everyone loyally used OpenAPI as their API contract. Providing an OpenAPI diff using JSON PATCH would be a great way to articulate an API road map, and tooling could be developed around it to help API providers publish their road map to their portal, and push out communications with API consumers. It would help everyone understand exactly what is changing in a way that could be integrated into existing services, tooling, and systems–making change management a more real time, “pipelinable” (making this word up) affair.
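As a hypothetical sketch of the approach, the difference between two versions of a contract could be published as a JSON PATCH (RFC 6902) document; the paths and values here are made up, and the ~1 sequences are how JSON Pointer escapes the / character inside path names (rendered as YAML for readability, since the wire format is JSON):

```yaml
# Road map: proposed changes between v1.0 and v1.1 of the contract
- op: add
  path: /paths/~1companies~1{id}~1patents
  value:
    get:
      summary: List the patents held by a company
- op: replace
  path: /info/version
  value: '1.1'
- op: remove
  path: /paths/~1locations~1legacy
```

A consumer, or a change management tool, could apply these operations to the published v1.0 contract and see exactly what the v1.1 surface area will look like.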

I feel like this could help API providers better understand and articulate what might be breaking changes. There could be tooling and services that help quantify the scope of changes during the road map planning process, and teams could submit OpenAPI definitions before they ever get to work writing code, helping them better see how changes to the API contract will impact the road map. Then the same tooling and services could be used to articulate the road map to consumers, as the road map becomes approved, developed, and ultimately rolled out. With each OpenAPI JSON PATCH moving from road map to change log, keeping all stakeholders up to speed on what is happening across all API resources they depend on–documenting everything along the way.

I am going to think more about this as I evolve my open API lifecycle. How I can iterate a version of my OpenAPI definitions, evaluate the difference, and articulate each update using JSON PATCH. Since more of my API lifecycle is machine readable, I’m guessing I’m going to be able to use this approach beyond just the surface area of my API. I’m going to be able to use it to articulate the changes in my API pricing and plans, as well as licensing, terms of service, and other evolving elements of my operations. It is a concept that will take some serious simmering on the back burners of my platform, but a concept I haven’t been able to shake. So I might as well craft some stories about the approach, and see what I can move forward as I continue to define, design, and iterate on the APIs that drive my platform and API research forward.


Not Liking OpenAPI (fka Swagger) When You Have No Idea What It Does

People love to hate in the API space. Ok, I guess it’s not exclusive to the API space, but it is a significant aspect of the community. I receive a regular amount of people hating on my work, for no reason at all. I also see people doing it to others in the API space on a regular basis. It always makes me sad to see, and I have always worked to be as nice as I can to counteract the male negativity and competitive tone that often exists. While I feel bad for the people on the receiving end of all of this, I oftentimes feel bad for the people on the giving end of things as well, as they are often not the most informed and up to speed folks, and seem to enjoy opening their mouths before they understand what is happening.

One thing I notice regularly is that one of the things these same people like to bash on is OpenAPI (fka Swagger). I regularly see people (still) say how bad of an idea it is, and how it has done nothing for the API space. One common thread I see with these folks, which prevents me from saying anything to them, is that it is clear they really don’t have an informed view of what OpenAPI is. Most people spend a few minutes looking at it, maybe read a few blog posts, and then establish their opinions about what it is, or what it isn’t. I regularly find people who are using it as part of their work who don’t actually understand the scope of the specification and tooling, so when someone is being vocal about it and doesn’t actually use it, it is usually pretty clear pretty quickly how uninformed they are about the specification, tooling, and scope of the community.

I’ve been tracking on it since 2011, and I still have trouble finding OpenAPI specifications, and grasping all of the ways it is being used. When you are a sideline pundit, you are most likely seeing about 1-2% of what OpenAPI does–I am a full time pundit in the game and I see about 60%. The first sign that someone isn’t up to speed is they still call it Swagger. The second sign is they often refer to it as documentation. Thirdly, they often refer to code generation with Swagger as a failure. All three of these views date someone’s understanding to about a 2013 level. If someone is forming assumptions, opinions, and making business decisions about OpenAPI, and being public about it, I’d hate to see what the rest of their technology views look like. In the end, I just don’t even feel like picking on them, challenging them on their assumptions, because their regular world is probably already kicking their ass on a regular basis–no assistance is needed.

I do not feel OpenAPI is the magical solution that will fix all the challenges in the API space, but it does help reduce friction at almost every stop along the API lifecycle. In my experience, 98% of the people who are hating on it do not have a clue what OpenAPI is, or what it does. I used to challenge folks, and try to educate them. Over the years I’ve converted a lot of folks from skeptics to believers, but in 2018, I think I’m done. If someone is openly criticizing it, I’m guessing it is more about their relationship to tech, and their lack of awareness of delivering APIs at scale, and they probably exist in a pretty entrenched position because of their existing view of the landscape–they don’t need me piling on. However, if people aren’t aware of the landscape, and ask questions about how OpenAPI works, I’m always more than happy to help open their eyes to how the API definition is serving almost every stop along the API lifecycle from design to deprecation, and everything in between.


The Importance Of OpenAPI Tooling

In my world, OpenAPI is always a primary actor, and the tooling and services that put it to work are always secondary. However, I’d say that 80% of the people I talk with are the opposite, putting OpenAPI tooling in a primary role, and the OpenAPI specification in a secondary role. This is the primary reason that many still see Swagger tooling as the value, and haven’t made the switch to the concept of OpenAPI, or don’t understand the separation between the specification and the tooling.

Another way in which you can see the importance of OpenAPI tooling is the slow migration of users from OpenAPI 2.0 to 3.0. Many folks I’ve talked to about OpenAPI 3.0 tell me that they haven’t made the jump because of the lack of tooling available for the specification. This isn’t always about the external services and tooling that support OpenAPI 3.0; it is also about the internal tooling that supports it. It demonstrates the importance of tooling when it comes to the evolution and adoption of OpenAPI, and the need for the OAI community to keep investing in the development and evangelism of tooling for the latest version.

I am going to work to invest more time into rounding up OpenAPI tooling, and getting to know the developers behind it, as I prepare for APIStrat in Nashville, TN. I’m also going to invest in my own migration to OpenAPI 3.0. The reason I haven’t evolved isn’t a lack of tooling; it is a lack of time, and the cognitive load involved with thinking in new ways. I fully grasp the differences between 2.0 and 3.0, but I just don’t have intuitive knowledge of 3.0 the way I do for 2.0. I’ve spent hundreds of hours developing around 2.0, and I just don’t have the time in my schedule to make a similar investment in 3.0–soon!
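For anyone else making the jump, the core structural changes are pretty mechanical; here is a minimal sketch, with hypothetical values, of the main moves between the two versions:

```yaml
# OpenAPI 2.0 (Swagger): host, basePath, and definitions live at the root
swagger: '2.0'
info:
  title: Example API
  version: '1.0'
host: api.example.com
basePath: /v1
definitions:
  Company:
    type: object
---
# OpenAPI 3.0: servers replaces host/basePath, schemas move under components
openapi: 3.0.0
info:
  title: Example API
  version: '1.0'
servers:
  - url: https://api.example.com/v1
components:
  schemas:
    Company:
      type: object
```

Body parameters also move into a requestBody object, and responses pick up explicit content maps keyed by media type, which is where much of the relearning lives for me.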

If you need to get up to speed on the latest when it comes to OpenAPI 3.0 tooling, I recommend checking out OpenAPI.Tools from Matt Trask (@matthewtrask) and Crashy McCiderface (aka Phil Sturgeon) (@philsturgeon). It is the best source of OpenAPI tooling out there right now. If you are still struggling with the migration from 2.0 to 3.0, or would like to see a specific solution developed on top of OpenAPI 3.0, I’d love to hear from you. I’m working to help shape the evolution of the OpenAPI tooling conversation, as well as tell stories about what tools are available, or should be available, and how they can be put to work on the ground at companies, organizations, institutions, and government agencies.


Opportunity For OpenAPI-Driven Open Source Testing, Performance, Security, And Other Modules

I’ve been on five separate government-related projects lately where finding modular, OpenAPI-driven, open source tooling has been a top priority. All of these projects are microservice-focused and OpenAPI-driven, and are investing significant amounts of time looking for open source tools that will help with design governance, monitoring, testing, and security, and that interact with the Jenkins pipeline. All of this helps government agencies find success as their API journey picks up speed, and the number of APIs grows exponentially.

Selling to the federal government can be a long journey in itself. They can’t always use the SaaS solutions many of us fire up to get the job done in our startup or enterprise lives. Increasingly, government agencies are depending on open source solutions to help them move projects forward. Every agency I’m working with is using OpenAPI (Swagger) to drive their API lifecycle. While not all have gone design (define) first, they are using OpenAPI definitions as the contract for mocking, documentation, testing, monitoring, and security. The teams I’m working with are investing a lot of energy looking for, vetting, and testing out different open source modules on Github–with varying degrees of success.

Ideally, there would be an OpenAPI-driven marketplace, or a federated set of marketplaces like OpenAPI.Tools. I’ve had one for a while, but haven’t kept it up to date–I will invest some time and resources into it soon. My definition of an OpenAPI tool marketplace is that it is OpenAPI-driven, and open source. I’m fine with there being other marketplaces of OpenAPI-driven services, but I want a way to get at just the actively maintained open source tools. When it comes to serving government, this is an important, and meaningful distinction. I’d also like to encourage many of the project owners to ensure there is CI/CD integration, as well as make sure their projects are actively supported, and that they are willing to entertain commercial implementations.

While there won’t always be direct commercial opportunities for open source tooling owners to engage with federal agencies, there will be through contractors and subcontractors. Working for federal agencies is a maze of forms and hoop jumping, but working with contractors can be pretty straightforward if you find the right ones. I don’t think you will get rich developing OpenAPI-driven tooling that serves the API lifecycle, but I think with the right solutions, support, and team behind them, you can make a decent living developing them. Especially as the lifecycle expands, and the number of services being delivered grows, the need for specialized, OpenAPI-driven tools to apply across the API lifecycle is only going to increase. That makes it something I’ll be writing more about as I hear more stories from the API trenches.

I’m going to try and spend time working with Phil Sturgeon (@philsturgeon) and Matt Trask (@matthewtrask) on OpenAPI.Tools, as well as give my own toolbox some love. If you have an open source, OpenAPI-driven tool you’d like to get some attention, feel free to ping me, and make sure it’s part of OpenAPI.Tools. Also, if you have a directory, catalog, or marketplace of tools you’d like to showcase, ping me as well–I’m all about supporting diversity of choice in the space. I have multiple federal agencies’ ears right now when it comes to delivering along the API lifecycle, and I’m happy to point agencies and their contractors to specific tools, if it makes sense. Like I said, there won’t always be direct revenue opportunities, but these are implementations that will undoubtedly lead to commercial opportunities in the form of consulting, advising, and development work with the contractors and subcontractors who are delivering on federal agency contracts.


Looking At 20 Microservices In Concert

I checked out the Github repositories for twenty microservices belonging to one of my clients recently, looking to understand what is being accomplished across all these services as they work independently toward a single collective objective. I’m being contracted to come in blindly and provide feedback on the design of the APIs being exposed across the services, and to help provide guidance on their API lifecycle, as well as eventually API governance when things have matured to that level. Right now we are addressing pretty fundamental definition and design issues, but eventually we’ll hopefully graduate to the next level.

A complete and up to date README for each microservice is essential to understanding what is going on with a service, and a robust OpenAPI definition is critical to breaking down the details of what each API delivers. When you aren’t part of each service’s development team it can be difficult to understand what each service does, but with an up to date README and OpenAPI, you can get up to speed pretty quickly. If a service is well documented via its README, the API is well designed, and the surface area is reflected in its OpenAPI, you can go from not knowing what a service does to understanding its value within minutes, not hours.

When each service possesses an OpenAPI, it becomes possible to evaluate what they deliver at scale. You can take all the APIs, their paths, headers, parameters, and schema, and lay them out in different ways, so that you can begin to paint a picture of what they deliver in aggregate. This brings all the disparate services back together to perform in a sort of monolith concert, while still acknowledging they all do their own thing independently. It allows us to look at how many different services can be used in concert to deliver a single application, or potentially a variety of application instances. Thinking critically about each independent service, but more importantly, how they all work together.

I feel like many groups are still struggling with decomposing their monolithic systems into separate services, and while some are doing so in a domain-driven way, few are beginning to invest in understanding how they move forward with services in concert to deliver on application needs. Many of the groups I’m working with are so focused on decomposing and tearing down that they aren’t thinking too critically about how they will make all of this begin to work together again. I see monolithic systems working like a massive church organ, which takes a lot of maintenance, and requires a single (or handful of) knowledgeable operators to play, where microservices are much more like an orchestra, where every individual player has a role, but they play in concert, directed by a conductor. I feel like most groups I’m talking with are just beginning the process of hiring a conductor, and have a bunch of musicians roaming around–not quite ready to play any significant productions quite yet.


API Discovery is for Internal or External Services

The topic of API discovery has been picking up momentum in 2018. It is something I’ve worked on for years, but with the number of microservices emerging out there, it is something I’m seeing become a concern amongst providers. I’m also seeing more potential vendor chatter, with vendors looking to provide more services and tooling to help alleviate API discovery pain. Even with all this movement, there is still a lot of education and discussion needed on the subject to help bring people up to speed on what API discovery is.

The most common view of API discovery is when you need to find an API for developing an application. You have a need for a resource in your application, and you need to look across your internal and partner resources to find what you are looking for. Beyond that, you will need to search for publicly available API resources, using Google, Github, ProgrammableWeb, and other common ways to find popular APIs. This is definitely the most prominent perspective when it comes to API discovery, but it isn’t the only dimension of this problem. There are several dimensions to this stop along the API lifecycle that I’d like to flesh out further, so that I can better articulate them across the conversations I am having.

Another area that gets lumped in with API discovery is the concept of service discovery, or how your APIs find the backend services that they use to make the magic happen. Service discovery focuses on the initial discovery, connectivity, routing, and circuit breaker patterns involved with making sure an API is able to communicate with any service it depends on. With the growth of microservices, a number of solutions like Consul have emerged, and cloud providers like AWS are evolving their own service discovery mechanisms. This provides one dimension to the API discovery conversation, but one that is different from, and often confused with, front-end API discovery and how developers and applications find services.

One of the least discussed areas of API discovery, but one that is picking up momentum, is finding APIs while you are developing APIs, to make sure you aren’t building something that has already been developed. I come across many organizations that have duplicate and overlapping APIs doing similar things, due to a lack of communication and a central directory of APIs. I’m getting asked by more groups how they can conduct API discovery by default across their organizations, sniffing out APIs from log files, on Github, and in other channels used by existing development teams. Many groups just haven’t been good at documenting and communicating what has been developed, and they begin new projects without seeing what already exists–something that will only become a greater problem as the number of microservices grows.

The other dimension of API discovery I’m seeing emerge is discovery in the service of governance: understanding what APIs exist across teams so that definitions, schema, and other elements can be aggregated, measured, secured, and governed. EVERY organization I work with is unaware of all the data sources, web services, and APIs that exist across their teams. Few want to admit it, but it is a reality. The reality is that you can’t govern or secure what you don’t know you have. Things get developed so rapidly, and baked into web, mobile, desktop, network, and device applications so regularly, that you just can’t see everything. Before companies, organizations, institutions, and government agencies are going to be able to govern anything, they are going to have to begin addressing the API discovery problem that exists across their teams.

API discovery is a discipline that is well over a decade old. It is one I’ve been actively working on for over 5 years. It is something that is only now getting the discussion it needs, because it is a growing concern. It will become a major concern with each passing day of the microservice evolution. People are jumping on the microservices bandwagon without any coherent way to organize schema, vocabulary, or API definitions, let alone any strategy for indexing, cataloging, sharing, communicating, and registering services. I’m continuing my work on APIs.json and the API Stack, as well as pushing forward my usage of OpenAPI, Postman, and AsyncAPI, which all contribute to API discovery. I’m going to continue thinking about how we can publish open source directories, catalogs, and search engines, and even automate scanning of logs and other channels to conduct discovery in the background. Eventually, we will begin to find more solutions that work–it will just take time.
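A minimal sketch of what that kind of index can look like, using the APIs.json format I mention above (field names follow the specification as I understand it, the values are hypothetical, and it is rendered as YAML for readability):

```yaml
name: Example Organization
description: An index of the APIs we operate across teams.
url: https://example.com/apis.json
apis:
  - name: Companies API
    description: Search and retrieve company profiles.
    humanURL: https://example.com/docs/companies
    baseURL: https://api.example.com/companies
    properties:
      - type: OpenAPI
        url: https://example.com/openapi/companies.yaml
```

Each entry points at the human documentation, the base URL, and the machine readable definition, which is what makes the index useful for governance and discovery alike.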


Making the OpenAPI Contract Friendlier For Developers and Business Stakeholders

I was in a conference session about an API design tool today, and someone asked if you could get at the OpenAPI definition behind the solution. They said yes, but quickly added that the definition is boring, and that you don’t want to be in there, you want to be in the interface. I get that service providers want you to focus on their interface, but we shouldn’t be burying or abstracting away the API contract; we should always be educating people about it, and bringing it front and center in any service, tooling, or conversation.

Technology folks burying or devaluing the OpenAPI definition with business users is common, but I also see technology folks doing it to each other, reducing OpenAPI to just another machine readable artifact alongside the other components of delivering API infrastructure today. I think this begins with people not understanding what OpenAPI is, but I think it is sustained by people’s view of what is technological magic that should remain in the hands of the wizards, and what should be accessible to a wider audience. If you limit who has access and knowledge, you can usually maintain a higher level of control, so they use your interface in the case of a vendor, or they come to you to develop and build an API in the case of a developer.

There is nothing in a YAML OpenAPI definition that business users won’t be able to understand. OpenAPIs aren’t any more boring than a Word document or spreadsheet. If you are a stakeholder in the service, you should be able to read, understand, and engage with the OpenAPI contract. If we teach people to be afraid of OpenAPI definitions, we are repeating the past, and maintaining the canyon that can exist between business and IT/developer groups. If you are in the business of burying the OpenAPI definition, I’m guessing you don’t understand the portable API lifecycle potential of this API contract, and simply see it as a config, documentation, or other technical artifact. Or you are just in the business of maintaining control and power by being the gatekeeper for the API contract, similar to how we see database people defend their domain.

Please do not devalue or hide away the OpenAPI contract. It isn’t your secret sauce. It isn’t boring. It isn’t too technical. It is the contract for how a service will work, that will speak to business and technical groups. It is the contract that all the services and tools you will use along the API lifecycle will understand. It is fine to have the OpenAPI right behind the scenes, but always provide a button, link, or other way to quickly see the latest version, and definitely do not scare people away or devalue it when you are talking. If you are doing APIs, you should be encouraging, and investing in everyone being able to have a conversation around the API contract behind any service you are putting forward.


Constraints Introduced By Supporting Standardized API Definitions and Schema

I’ve had a few API groups contact me lately regarding the challenges they face when it comes to supporting organizational, or industry-wide, API definitions and schema. They were eager to support common definitions and schema that have been standardized, but were getting frustrated by not being able to do everything they wanted, and having to live within the constraints introduced by the standardized definitions. This is something that doesn’t get much discussion by those of us who are advocating for the standardization of APIs and schema.

Web APIs Come With Their Own Constraints
We all want more developers to use our APIs. However, with more usage comes more responsibility. Also, to get more usage, our APIs need to speak to a wider audience, something that common API definitions and schema will help with. This is why web APIs are working: they speak to a wider audience. However, with this architectural decision we are making some tradeoffs, and accepting some constraints in how we do our APIs. This is why REST is just one tool in our toolbox; it lets us use the right tool, and establish the right set of constraints, to allow our APIs to be successful. The wider our API toolbox, the more options we will have available when it comes to how we design our APIs, and what schema we can employ.

Allowing For Content Negotiation By Consumers
One way I’ve encouraged folks to help alleviate some of the pain around the adoption of common API definitions and schema is to provide content negotiation to consumers, allowing them to obtain the response they are looking for. If people want the standardized approaches, they can choose those; if they want something more precise or custom, they can choose that. This also allows API providers to work around the API standards that have become bottlenecks, while still supporting them where they matter. You get the best of both worlds, supporting the common approach, but still able to do what you want when you feel it is important, allowing for experimentation as well as standardization using the same APIs.
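Here is a minimal, hypothetical sketch of what this can look like in an OpenAPI 3.0 contract, serving an industry-standard media type and a custom vendor media type from the same path; the schema names are made up, though application/fhir+json is a real standardized media type from the healthcare space:

```yaml
paths:
  /patients/{id}:
    get:
      summary: Retrieve a patient record
      responses:
        '200':
          description: The patient record, negotiated via the Accept header.
          content:
            application/fhir+json:
              schema:
                $ref: '#/components/schemas/StandardPatient'
            application/vnd.example.patient+json:
              schema:
                $ref: '#/components/schemas/CustomPatient'
```

Consumers who want the standard send Accept: application/fhir+json, while consumers who want the more precise custom representation ask for the vendor media type.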

Participate In Standards Body and Process
Another way to help move things forward is to participate in the standards body that is moving an API definition or schema forward. Make sure you have a seat at the table so that you can present your case for where the problems are, and how to improve on the design, definition, and schema being evolved. Take a lead in creating the world you want to see when it comes to API and schema standards, instead of just sitting back being frustrated because it doesn’t do what you want it to do. Having a role in the standards body, and actively participating in the process, isn’t easy, and it can be time consuming, but it can be worth it down the road, helping you better achieve your goals when it comes to your APIs operating as you expect, as well as serving the wider community and industry.

Delivering APIs at scale won’t be easy. To reach a wide audience with your API, it helps to be speaking a common vocabulary. This doesn’t always allow you to move as fast as you’d like, and do everything exactly as you envision. You will have to compromise, and operate within constraints. However, it can be worth it. Not just for your organization, but for the overall health of your community, and the industry you operate in. You never know: with a little patience, collaboration, and communication, you might even learn new approaches to defining your APIs and schema that you wouldn’t have thought about in isolation. Also, experimentation with new patterns will still be important, even while working to standardize things. In the end, a balance between standardized and custom will make the most sense, and hopefully alleviate your frustrations in moving things forward.


Turning The Stack Exchange Questions API Into 25 Separate Tech Topic Streaming APIs

I’m turning different APIs into topical streams using Streamdata.io. I have been profiling hundreds of different APIs as part of my work to build out the Streamdata.io API Gallery, and as I create OpenAPI definitions for each API, I’m realizing the potential for event-driven, topical streams across many existing web APIs. One thing I do after profiling each API is benchmark it to see how often the data changes, applying what we are calling StreamRank to each API path. Then I try to make sure all the parameters, and even the enum values for each parameter, are represented in each API definition, helping me see the ENTIRE surface area of an API. This is something that really illuminates the possibilities surrounding each API.

After profiling the Stack Exchange Questions API, I began to see how much functionality and data is buried within a single API endpoint, which was something I wanted to expose and make much easier to access. It starts with a single OpenAPI definition for the Stack Exchange Questions API:
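Here is an abridged sketch of the relevant path; the full definition carries every parameter, and the enum values that make the next step possible:

```yaml
openapi: 3.0.0
info:
  title: Stack Exchange Questions API (abridged)
  version: '2.2'
servers:
  - url: https://api.stackexchange.com/2.2
paths:
  /questions:
    get:
      summary: Get questions on a Stack Exchange site
      tags:
        - Questions
      parameters:
        - name: site
          in: query
          required: true
          description: The site to pull questions from, such as stackoverflow.
          schema:
            type: string
        - name: tagged
          in: query
          description: Semicolon-delimited list of tags to constrain questions to.
          schema:
            type: string
```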

Then I explode it into 25 separate tech topic streaming APIs, taking the top 25 enum values for the tags parameter on the Stack Overflow site, and exploding them into 25 separate streaming API resources. To do this, I take each OpenAPI definition, and generate an AsyncAPI definition to represent each possible stream:
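Here is an abridged sketch of what one of those generated definitions looks like for the javascript tag; as I note below, I’m still learning the topics syntax, so treat the structure as a work in progress:

```yaml
asyncapi: '1.0.0'
info:
  title: Stack Overflow Questions Stream - JavaScript
  version: '1.0.0'
servers:
  - url: streamdata.motwin.net
    scheme: https
topics:
  stackoverflow.questions.javascript:
    subscribe:
      $ref: '#/components/messages/question'
components:
  messages:
    question:
      summary: A new or updated Stack Overflow question tagged javascript.
      payload:
        type: object
        properties:
          question_id:
            type: integer
          title:
            type: string
          tags:
            type: array
            items:
              type: string
```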

I’m not 100% sure I’m properly generating the AsyncAPI currently, as I’m still learning how to use the topics and streams collections properly. However, the OpenAPI definition above is meant to represent the web API resource, and the AsyncAPI definition is meant to represent the streaming edition of the same resource. This is something that can be replicated for any tag, or any site that is available via the Stack Exchange API, turning the existing Stack Exchange API into streaming topic APIs, so that people can subscribe to just the topics they are interested in receiving updates about.

At this point I’m just experimenting with what is possible with the OpenAPI and AsyncAPI specifications, and understanding what I can do with some of the existing APIs I am already using each day. I’m going to try and turn this into a prototype, delivering streaming APIs for all 25 of the top Stack Overflow tags, to demonstrate what is possible on Stack Exchange, but also to establish a proof of concept that I can apply to other APIs like Reddit, Github, and others. Then eventually I’ll automate the creation of streaming topic APIs using the OpenAPI definitions for common APIs, and the Streamdata.io service.


Proprietary Views Of Your Taxonomy

I’ve been investing a lot more energy into open data and APIs involved with city government, something I’ve dabbled in for as long as I’ve been doing API Evangelist, but something I’ve ratcheted up pretty significantly over the last couple of years. As part of this work, I’ve increasingly come across some pretty proprietary stances when it comes to data that is part of city operations–this stuff has been seen as gold long before Silicon Valley came along, with long lines of folks looking to lock it up and control it.

Helping civic data stakeholders separate the licensing layers around their open data and APIs is something I do as the API Evangelist. Another layer I will be adding to this discussion is taxonomy: how city data folks are categorizing, organizing, and classifying the valuable data needed to make our cities work. I’ve been coming across more vendors in the city open data world who feel their taxonomy is their secret sauce and falls under intellectual property protection. I don’t have a polished argument yet for why this is a bad idea, but I will keep writing about it as part of my wider API licensing work to help flesh out my ideas, and create a more coherent and precise argument.

I understand that some companies put a lot of work into their taxonomies, and the descriptions behind how they organize things, but like API definitions and schema, these are aspects of your open data and API operations that you want to be widely understood, shared, and reusable knowledge within your systems, as well as within the 3rd party integrations of your partners and customers. Making your taxonomy proprietary isn’t going to help your brand, or get you ahead in the game. I recommend focusing on other aspects of the value you bring to the table, and keeping your taxonomy as openly licensed as you possibly can, encouraging adoption by others. I’ll work on a more robust argument as I work through a variety of projects that will potentially be hurt by the proprietary views on taxonomy I’m seeing.


KDL: A Graphical Notation for Kubernetes API Objects

I am learning about the Kubernetes Deployment Language (KDL) today, trying to understand their approach to defining their notion of Kubernetes API objects. It feels like an interesting evolution in how we define our infrastructure, and begin standardizing the API layer for it so that we can orchestrate as we need.

They are standardizing the Kubernetes API objects into the following buckets:

  • Cluster - The orchestration level of things.
  • Compute - The individual compute level.
  • Networking - The networking layer of it all.
  • Storage - Storage behind our APIs.

This has elements of my API lifecycle research, as well as a containerized, clustered BaaS 2.0, in my view. It is standardizing how we define and describe the essential layers of our API and application infrastructure. I could also see it standardizing the testing, monitoring, performance, security, and other critical aspects of doing this at scale.

I’m also fascinated by how fast YAML has become the default orchestration template language for folks in the cloud containerization space. I’ll add KDL to my API definition and container research, keep an eye on what they are up to, and watch for other approaches to standardizing the API layer for deploying, managing, and scaling our increasingly containerized API infrastructure.
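As a point of reference for that observation, here is a minimal Kubernetes Deployment manifest, the kind of Compute-level API object that KDL sets out to notate graphically; the names and image are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-api
  template:
    metadata:
      labels:
        app: example-api
    spec:
      containers:
        - name: api
          image: example/api:1.0.0
          ports:
            - containerPort: 8080
```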


Examples Of The OpenAPI Specification Used For Government APIs

I was answering some questions for my partners over at DreamFactory when it comes to APIs in government, and one of the questions they asked was for some examples of the OpenAPI specification being used in government. To help out, I started going through my list of government APIs looking for any examples in the wild–here is what I found:

I am sure there are more OpenAPI definitions in use across government, but this is what I could find in a five-minute search of my API database. It provides us with seven quality examples of OpenAPI being used for documenting government APIs. I don’t see OpenAPI used for much beyond documentation in these cases, but it is still a good start.

If you know of any government APIs that use OpenAPI, feel free to let me know. I’d love to keep adding examples to my research so I can pull them up quickly when I am asked questions like this in the future, and be able to highlight best practices for API operations at the city, county, state, and federal levels of government.


Patent US 8997069: API Descriptions

There are so many API patents out there, I’m going to have to start posting one a day just to keep up. Lucky for you, once I begin to get really depressed by all the API patents, I lose interest in reading them and begin to work harder looking for positive examples of APIs in the world, but until then here is today’s depressing as fuck API patent.

Title: API descriptions
Number: US 8997069
Owner: Microsoft Technology Licensing, LLC
Abstract: API description techniques are described for consumption by dynamically-typed languages. In one or more implementations, machine-readable data is parsed to locate descriptions of one or more application programming interfaces (APIs). The descriptions of the one or more application programming interfaces are projected into an alternate form that is different than a form of the machine-readable data.

I don’t mean to be a complete dick here, but why would you think this is a good idea? I get that companies want their employees to develop a patent portfolio, but this one patents an essential ingredient that makes APIs work. If you enforce this patent, it will be worthless because this whole API thing will stop working, and if you don’t enforce it, it will be worthless because it does nothing–money well spent on the filing fee.

I just need to file my patent on patenting APIs and end all of this nonsense. I know y’all think I’m crazy for my belief that APIs shouldn’t be patented, but every time I dive into my patent research I can’t help but think y’all are the crazy ones, and I’m actually the sane one. I just do not understand how this patent is going to help anyone, or how it represents any of the value that APIs, or even patents, can bring to the table.


APIs Are How Our Digital Selves are Learning To Speak With Each Other

I know this will sound funny to many folks, but when I see APIs, I see language and communication, and humans learning to speak with each other in this new digital world we are creating for ourselves. My friend Erik Wilde (@dret) tweeted a reminder for me that APIs are indeed a language.

Every second on our laptops and mobile phones we are communicating with many different companies and individuals. With each wall post, Tweet, photo push, or video stream we are communicating with our friends, family, and the public. Each of these interactions is defined and facilitated using an API–an API call just for saying something in text, in an image, or in video. APIs are the digital language we use to communicate online and via our mobile devices.

Uber geeks like me spend their days trying to map out and understand these direct interactions, as well as the growing number of indirect interactions. For every direct communication, there are usually numerous other indirect communications with advertisers, platform providers, or maybe even law enforcement, researchers, or anyone else with access to the communication channels. We aren’t just learning to directly communicate, we are also being conditioned to participate indirectly in conversations we can’t see–unless we are tuned into the bigger picture of the API economy.

When we post that photo, companies are whispering about what is in the photo, where it was taken, and what meaning it has. When we share that news link on Facebook, companies have a discussion about the truthfulness and impact of the link, and maybe even the psychology behind it and where we fit into their profile database. In some scenarios they are talking directly about us personally, like we are sitting in the room; other times they are talking about us like we are just a number in a larger demographic pool.

In alignment with the real world, the majority of these conversations are held between men, behind closed doors. Publicly, the conversations are usually directed by people with a bullhorn, talking over others, as well as whispering behind and around people who are completely unaware that these conversations about them are even occurring. The average person can’t hear the whispering, or simply does not speak the language that is being used around them, about them, each moment of each day.

Those of us in the know are scrambling to understand, control, and direct the conversations that are occurring. There is a lot of money to be made when you are part of these conversations–or at least have a whole bunch of people on your platform to have conversations about, or around. People don’t realize that for every direct conversation they have online, there are probably 20 conversations going on about that conversation. What will they buy next? Who do they know? What is in that photo they just shared? Is this related post interesting to them? API-driven echoes of conversations upon conversations into infinity.

Sometimes I feel like Professor Xavier from the X-Men in that vault room, connected to the machine, when I am on the Internet studying APIs. I’m seeing millions of conversations going on–it is deafening. I don’t just see or hear the direct conversations, I hear the roar of advertisers, hackers, researchers, police, government, and everyone else having a conversation around us. Many folks feel like the average person shouldn’t be included in the conversation–they do not have the interest or awareness to even engage. To me, it just feels like a new secretive world augmenting our physical world, where our digital selves are learning to speak with each other. What troubles me, though, is that not everyone is actually engaged in the conversations they are included in, and many are asleep or sedated while their personal digital self is being manipulated, exploited, and p0wn3d.


The Github Repo Stripe Uses To Manage Their OpenAPI

I’m beating a drum every time I find a company managing their OpenAPI on Github, just like we manage the other code elements of our API operations. Today’s drumbeat comes from my friend Nicolas Grenié (@picsoung), who posted Stripe’s Github repository for their OpenAPI in our Slack channel for the super cool API Evangelists in the sector. ;-)

Along with the New York Times, Box, and other API providers, Stripe has a dedicated Github repo for managing their OpenAPI definition. This opens up the Stripe API for easily loading in client tools like Restlet Client and Postman, as well as generating code samples and SDKs using services like APIMATIC. Most importantly, it allows developers to easily understand the surface area of the Stripe API in a way that is machine-readable and portable.

It makes me happy to see leading API providers manage their own OpenAPI using Github like this. The API sector will be able to reach new heights when every single API provider manages their API definitions this way. I know, I know, hypermedia folks–everyone should just do hypermedia. Yes, they should. However, we need some initial steps to occur before that is possible, and API providers being able to effectively communicate their API surface area to consumers, in a way that scales and can be used across the API lifecycle, is an important part of this evolution. With each OpenAPI I find like this, I get more optimistic that we are getting closer to the future that RESTafarians and hypermedia folks envision–providers are doing the hard work of thinking about the definitions used in their APIs in the context of the entire API lifecycle, and the API consumers who exist along the way.


Managing Your Postman Collection Using Github

I have been encouraging API providers to publish and manage their API definitions using Github, similar to how you’d manage any code. Companies like Box and the New York Times are publishing their OpenAPI definitions to a single repository, allowing partners and API consumers to pull the latest version of the API definition and use it throughout the API lifecycle.

I stumbled across another example of managing your API definitions using Github, but this time it is the management of Postman Collections in a Github repo from API management provider Apigee (now part of Google). The Postman Collection provides a complete description of the Apigee API management surface area, allowing API providers to easily automate or orchestrate their API operations using Apigee.

The Github repository provides a complete Postman Collection, along with instructions on how to load it, as well as a Run in Postman button allowing any consumer to instantly load the entire surface area of the Apigee API management solution into their Postman client. I am a big fan of managing your Postman Collections, as well as your OpenAPI definitions, in this way–handling the definitions for your API similar to how you manage your code, while also making them available for forking, checking out, and integrating anywhere across the API lifecycle.
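It is worth remembering that a Postman Collection is just a JSON document, which is exactly why it versions so cleanly in a Github repo. Here is a minimal, hypothetical collection skeleton–the name, request, and URL are placeholders of my own, not Apigee’s actual collection:

    {
      "info": {
        "name": "Example API Collection",
        "schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json"
      },
      "item": [
        {
          "name": "List resources",
          "request": {
            "method": "GET",
            "url": "https://api.example.com/resources"
          }
        }
      ]
    }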


Adding Three APIMATIC OpenAPI Extensions To The OpenAPI Toolbox

I’ve added three OpenAPI extensions from APIMATIC to my OpenAPI Toolbox, adding to the number of extensions I’m tracking that service providers and tooling developers are using as part of their API solutions. APIMATIC provides SDK code generation services, so their OpenAPI extensions are all about customizing how you deploy code as part of the integration process.

These are the three OpenAPI extensions I am adding from them, with a sketch of how they might sit in a definition after the list:

  • x-codegen-settings - These settings are globally applicable to all operations and schema definitions.
  • x-operation-settings - These settings can be specified inside an "operation" object.
  • x-additional-headers - These headers are in addition to any headers required for authentication or defined as parameters.
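Here is a rough sketch of where these extensions might sit in an OpenAPI definition–the extension names come from APIMATIC, but their placement and the settings inside each one are illustrative guesses on my part, so check the APIMATIC documentation for the actual properties:

    # Hypothetical placement of the three APIMATIC extensions in an OpenAPI
    # definition. The settings inside each extension are illustrative, not
    # APIMATIC's actual schema.
    openapi: 3.0.0
    info:
      title: Example API
      version: 1.0.0
    x-codegen-settings:            # global settings for all operations/schemas
      projectName: example-sdk     # hypothetical setting
    paths:
      /items:
        get:
          summary: List items
          x-operation-settings:    # settings scoped to this operation
            skipTest: true         # hypothetical setting
          x-additional-headers:    # extra headers beyond auth and parameters
            X-Client-Version: "1.0"
          responses:
            '200':
              description: OK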

If you have ever used APIMATIC you know that you can do a lot more than just “SDK generation”, which often has a bad reputation. APIMATIC provides some interesting ways you can use OpenAPI to dial in your SDK, script, and code generation as part of any continuous integration lifecycle.

This provides another example of how you don’t have to live within the constraints of the current OpenAPI spec. Anyone can augment and extend the current specification to meet their unique needs. Then who knows–maybe it will become useful enough that it eventually gets added to the core specification. That is part of the reason I’m aggregating these extensions and including them in the OpenAPI Toolbox.


If you think there is a link I should have listed here, feel free to tweet it at me, or submit it as a Github issue. Even though I do this full time, I'm still a one-person show, I miss quite a bit, and I depend on my network to help me know what is going on.