
API Definitions News

These are the news items I've curated in my monitoring of the API space that have some relevance to the API definition conversation and that I wanted to include in my research. I'm using all of these links to better understand how the space is defining not just its APIs, but also its schema, and the other moving parts of API operations.

The Importance Of OpenAPI Tooling

In my world, OpenAPI is always a primary actor, and the tooling and services that put it to work are always secondary. However, I'd say that 80% of the people I talk with are the opposite, putting OpenAPI tooling in a primary role, and the OpenAPI specification in a secondary role. This is the primary reason that many still see Swagger tooling as the value, and haven't made the switch to the concept of OpenAPI, or come to understand the separation between the specification and the tooling.

Another way you can see the importance of OpenAPI tooling is in the slow migration of users from OpenAPI 2.0 to 3.0. Many folks I've talked to about OpenAPI 3.0 tell me that they haven't made the jump because of the lack of tooling available for the specification. This isn't always about the external services and tooling that support OpenAPI 3.0, it is also about the internal tooling that supports it. It demonstrates the importance of tooling when it comes to the evolution and adoption of OpenAPI, and the need for the OAI community to keep investing in the development and evangelism of tooling for the latest version.

I am going to work to invest more time into rounding up OpenAPI tooling, and getting to know the developers behind it, as I prepare for APIStrat in Nashville, TN. I'm also going to invest in my own migration to OpenAPI 3.0. The reason I haven't evolved isn't a lack of tooling, it is a lack of time, and the cognitive load involved with thinking in new ways. I fully grasp the differences between 2.0 and 3.0, but I just don't have intuitive knowledge of 3.0 in the way I do for 2.0. I've spent hundreds of hours developing around 2.0, and I just don't have the time in my schedule to make a similar investment in 3.0–soon!

If you need to get up to speed on the latest when it comes to OpenAPI 3.0 tooling I recommend checking out OpenAPI.Tools from Matt Trask (@matthewtrask) and Crashy McCiderface (aka Phil Sturgeon) (@philsturgeon). It is the best source of OpenAPI tooling out there right now. If you are still struggling with the migration from 2.0 to 3.0, or would like to see a specific solution developed on top of OpenAPI 3.0, I'd love to hear from you. I'm working to help shape the evolution of the OpenAPI tooling conversation, as well as tell stories about what tools are available, or should be available, and how they can be put to work on the ground at companies, organizations, institutions, and government agencies.


Opportunity For OpenAPI-Driven Open Source Testing, Performance, Security, And Other Modules

I've been on five separate government-related projects lately where finding modular OpenAPI-driven open source tooling has been a top priority. All of these projects are microservice-focused and OpenAPI-driven, and the teams are investing significant amounts of time looking for open source tools that will help with design governance, monitoring, testing, and security, and that interact with the Jenkins pipeline–all in the name of helping government agencies find success as their API journey picks up speed, and the number of APIs grows exponentially.

Selling to the federal government can be a long journey in itself. They can't always use the SaaS solutions many of us fire up to get the job done in our startup or enterprise lives. Increasingly, government agencies are depending on open source solutions to help them move projects forward. Every agency I'm working with is using OpenAPI (Swagger) to drive their API lifecycle. While not all have gone design (define) first, they are all using OpenAPI as the contract for mocking, documentation, testing, monitoring, and security. The teams I'm working with are investing a lot of energy looking for, vetting, and testing out different open source modules on Github–with varying degrees of success.

Ideally, there would be an OpenAPI-driven marketplace, or a federated set of marketplaces like OpenAPI.Tools. I've had one for a while, but haven't kept it up to date–I will invest some time and resources into it soon. My definition of an OpenAPI tool marketplace is that it is OpenAPI-driven, and open source. I'm fine with there being other marketplaces of OpenAPI-driven services, but I want a way to get at just the actively maintained open source tools. When it comes to serving government this is an important, and meaningful distinction. I'd also like to encourage many of the project owners to ensure there is CI/CD integration, as well as make sure their projects are actively supported, and that they are willing to entertain commercial implementations.

While there wouldn't always be direct commercial opportunities for open source tooling owners to engage with federal agencies, there would be through contractors and subcontractors. Working for federal agencies is a maze of forms and hoop jumping, but working with contractors can be pretty straightforward if you find the right ones. I don't think you will get rich developing OpenAPI-driven tooling that serves the API lifecycle, but I think with the right solutions, support, and team behind them, you can make a decent living developing them. Especially as the lifecycle expands, and the number of services being delivered grows, the need for specialized, OpenAPI-driven tools that apply across the API lifecycle is only going to increase. Making it something I'll be writing about more as I hear stories from the API trenches.

I'm going to try and spend time working with Phil Sturgeon (@philsturgeon) and Matt Trask (@matthewtrask) on OpenAPI.Tools, as well as give my own toolbox some love. If you have an open source OpenAPI-driven tool you'd like to get some attention, feel free to ping me, and make sure it's part of OpenAPI.Tools. Also, if you have a directory, catalog, or marketplace of tools you'd like to showcase, ping me as well–I'm all about supporting diversity of choice in the space. I have multiple federal agencies' ears right now when it comes to delivering along the API lifecycle, and I'm happy to point agencies and their contractors to specific tools, if it makes sense. Like I said, there won't always be direct revenue opportunities, but these are implementations that will undoubtedly lead to commercial opportunities in the form of consulting, advising, and development work with the contractors and subcontractors who are delivering on federal agency contracts.


Looking At 20 Microservices In Concert

I checked out the Github repositories for twenty microservices of one of my clients recently, looking to understand what is being accomplished across all these services as they work independently toward a single collective objective. I'm being contracted to come in blindly and provide feedback on the design of the APIs being exposed across services, provide guidance on their API lifecycle, and eventually help with API governance when things have matured to that level. Right now we are addressing pretty fundamental definition and design issues, but eventually we'll hopefully graduate to the next level.

A complete and up to date README for each microservice is essential to understanding what is going on with a service, and a robust OpenAPI definition is critical to breaking down the details of what each API delivers. When you aren't part of each service's development team it can be difficult to understand what each service does, but with an up to date README and OpenAPI, you can get up to speed pretty quickly. If a service is well documented via its README, and the API is well designed, with the surface area reflected in its OpenAPI, you can go from not knowing what a service does to understanding its value within minutes, not hours.

When each service possesses an OpenAPI it becomes possible to evaluate what they deliver at scale. You can take all the APIs, their paths, headers, parameters, and schema, and lay them out in different ways so that you can begin to paint a picture of what they deliver in aggregate. Bringing all the disparate services back together to perform in a sort of monolith concert, while still acknowledging they all do their own thing independently. Allowing us to look at how many different services can be used in concert to deliver a single application, or potentially a variety of application instances. Thinking critically about each independent service, but more importantly about how they all work together.
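As a rough sketch of the kind of aggregation I'm talking about, here is how I might roll up the paths and schema across a handful of services, assuming each service repository has been checked out locally and keeps an OpenAPI 2.0 definition named openapi.yaml at its root–the file name and layout are my assumptions for illustration, not my client's actual setup:

    import glob
    import yaml  # pip install pyyaml

    paths = {}
    schemas = {}

    # Walk each service checkout, looking for its OpenAPI definition.
    for definition in glob.glob("services/*/openapi.yaml"):
        with open(definition) as handle:
            api = yaml.safe_load(handle)
        title = api.get("info", {}).get("title", definition)
        # Collect every path and verb the service exposes.
        for path, operations in api.get("paths", {}).items():
            for verb in operations:
                paths.setdefault(path, []).append((title, verb.upper()))
        # Collect schema definitions so overlap across services can be spotted.
        for name in api.get("definitions", {}):
            schemas.setdefault(name, []).append(title)

    # Anything defined by more than one service is worth a closer look.
    for name, services in schemas.items():
        if len(services) > 1:
            print(name, "is defined by:", ", ".join(services))

Even a simple roll-up like this begins to paint the aggregate picture, showing where services overlap, and where naming and schema start to drift.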

I feel like many groups are still struggling with decomposing their monolithic systems into separate services, and while some are doing so in a domain-driven way, few are beginning to invest in understanding how they move forward with services in concert to deliver on application needs. Many of the groups I'm working with are so focused on decomposing and tearing down, they aren't thinking too critically about how they will make all of this begin to work together again. I see monolithic systems working like a massive church organ, which takes a lot of maintenance, and requires a single knowledgeable operator (or a handful of them) to play. Microservices are much more like an orchestra, where every individual player has a role, but they play in concert, directed by a conductor. I feel like most groups I'm talking with are just beginning the process of hiring a conductor, and have a bunch of musicians roaming around–not quite ready to play any significant productions yet.


API Discovery is for Internal or External Services

The topic of API discovery has been picking up momentum in 2018. It is something I've worked on for years, but with the number of microservices emerging out there, it is something I'm seeing become a concern amongst providers. It is also something I'm seeing more potential vendor chatter around, with providers looking to offer more services and tooling to help alleviate API discovery pain. Even with all this movement, there is still a lot of education and discussion that needs to occur on the subject to help bring people up to speed on what API discovery is.

The most common view of API discovery is the search for an API when you are developing an application. You have a need for a resource in your application, and you need to look across your internal and partner resources to find what you are looking for. Beyond that, you will need to search for publicly available API resources, using Google, Github, ProgrammableWeb, and other common ways to find popular APIs. This is definitely the most prominent perspective when it comes to API discovery, but it isn't the only dimension of this problem. There are several dimensions to this stop along the API lifecycle that I'd like to flesh out further, so that I can better articulate them across the conversations I am having.

Another area that gets lumped in with API discovery is the concept of service discovery, or how your APIs will find the backend services that they use to make the magic happen. Service discovery focuses on the initial discovery, connectivity, routing, and circuit breaker patterns involved with making sure an API is able to communicate with any service it depends on. With the growth of microservices there are a number of solutions like Consul that have emerged, and cloud providers like AWS are evolving their own service discovery mechanisms. Providing one dimension of the API discovery conversation, but one that is different from, and often confused with, front-end API discovery–how developers and applications find services.
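To make the backend side of this concrete, here is what a minimal Consul service definition looks like–the service name, port, and health check URL are placeholders I made up for illustration:

    {
      "service": {
        "name": "questions-api",
        "tags": ["microservice", "openapi"],
        "port": 8080,
        "check": {
          "http": "http://localhost:8080/health",
          "interval": "10s"
        }
      }
    }

Dropping a file like this into a Consul agent's configuration directory registers the service so the other services that depend on it can resolve it–a very different problem than a developer searching a catalog for an API to use in an application, which is why I try to keep the two conversations separate.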

One of the least discussed areas of API discovery, but one that is picking up momentum, is finding APIs while you are developing APIs, to make sure you aren't building something that has already been built. I come across many organizations who have duplicate and overlapping APIs that do similar things due to a lack of communication and a central directory of APIs. I'm getting asked by more groups how they can conduct API discovery by default across their organizations, sniffing out APIs from log files, on Github, and other channels in use by existing development teams. Many groups just haven't been good at documenting and communicating around what has been developed, or at checking what already exists before beginning new projects–something that will only become a greater problem as the number of microservices grows.

The other dimension of API discovery I'm seeing emerge is discovery in the service of governance. Understanding what APIs exist across teams so that definitions, schema, and other elements can be aggregated, measured, secured, and governed. EVERY organization I work with is unaware of all the data sources, web services, and APIs that exist across their teams. Few want to admit it, but it is a reality. The reality is that you can't govern or secure what you don't know you have. Things get developed so rapidly, and baked into web, mobile, desktop, network, and device applications so regularly, that you just can't see everything. Before companies, organizations, institutions, and government agencies are going to be able to govern anything, they are going to have to begin addressing the API discovery problem that exists across their teams.

API discovery is a discipline that is well over a decade old. It is one I've been actively working on for over 5 years. It is something that is only now getting the discussion it needs, because it is a growing concern. It will become a major concern with each passing day of the microservice evolution. People are jumping on the microservices bandwagon without any coherent way to organize schema, vocabulary, or API definitions. Let alone any strategy for indexing, cataloging, sharing, communicating, and registering services. I'm continuing my work on APIs.json, and the API Stack, as well as pushing forward my usage of OpenAPI, Postman, and AsyncAPI, which all contribute to API discovery. I'm going to continue thinking about how we can publish open source directories, catalogs, and search engines, and even some automated scanning of logs and other ways to conduct discovery in the background. Eventually, we will begin to find more solutions that work–it will just take time.
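To show what I mean by indexing and registering services, here is a rough sketch of an APIs.json index–the names and URLs are placeholders, and I'm leaving out most of what the specification allows:

    {
      "name": "Example Organization",
      "description": "An index of the APIs we operate across our teams.",
      "url": "http://example.com/apis.json",
      "specificationVersion": "0.14",
      "apis": [
        {
          "name": "Human Services API",
          "description": "Organizations, locations, and services for the community.",
          "humanURL": "http://example.com/docs",
          "baseURL": "http://api.example.com",
          "properties": [
            {
              "type": "x-openapi",
              "url": "http://example.com/openapi.yaml"
            }
          ]
        }
      ]
    }

An index like this, published for each team or domain, gives the discovery, governance, and security conversations a machine-readable place to start.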


Making the OpenAPI Contract Friendlier For Developers and Business Stakeholders

I was in a conference session about an API design tool today, and someone asked if you could get at the OpenAPI definition behind the solution. They said yes, but quickly also said that the definition is boring and that you don't want to be in there, you want to be in the interface. I get that service providers want you to focus on their interface, but we shouldn't be burying or abstracting away the API contract; we should always be educating people about it, and bringing it front and center in any service, tooling, or conversation.

Technology folks burying or devaluing the OpenAPI definition with business users is common, but I also see technology folks doing it to each other. Reducing OpenAPI to just another machine-readable artifact alongside other components of delivering API infrastructure today. I think this begins with people not understanding what OpenAPI is, but I think it is sustained by people's view of what is technological magic and should remain in the hands of the wizards, and what should be accessible to a wider audience. If you limit who has access and knowledge, you can usually maintain a higher level of control, so they use your interface in the case of a vendor, or they come to you to develop and build an API in the case of a developer.

There is nothing in a YAML OpenAPI definition that business users won't be able to understand. OpenAPIs aren't any more boring than a Word document or spreadsheet. If you are a stakeholder in the service, you should be able to read, understand, and engage with the OpenAPI contract. If we teach people to be afraid of OpenAPI definitions we are repeating the past, and maintaining the canyon that can exist between business and IT/developer groups. If you are in the business of burying the OpenAPI definition, I'm guessing you don't understand the portable API lifecycle potential of this API contract, and simply see it as a config, documentation, or other technical artifact. Or you are just in the business of maintaining control and power by being the gatekeeper for the API contract, similar to how we see database people defend their domain.
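To make the point, here is roughly what a single path in an OpenAPI definition looks like–the path and descriptions are made up, but there is nothing here a business stakeholder can't read:

    paths:
      /organizations:
        get:
          summary: Get a list of organizations
          description: Returns the organizations that provide services in the system.
          responses:
            '200':
              description: A successful listing of organizations.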

Please do not devalue or hide away the OpenAPI contract. It isn’t your secret sauce. It isn’t boring. It isn’t too technical. It is the contract for how a service will work, that will speak to business and technical groups. It is the contract that all the services and tools you will use along the API lifecycle will understand. It is fine to have the OpenAPI right behind the scenes, but always provide a button, link, or other way to quickly see the latest version, and definitely do not scare people away or devalue it when you are talking. If you are doing APIs, you should be encouraging, and investing in everyone being able to have a conversation around the API contract behind any service you are putting forward.


Constraints Introduced By Supporting Standardized API Definitions and Schema

I've had a few API groups contact me lately regarding the challenges they are facing when it comes to supporting organizational, or industry-wide API definitions and schema. They were eager to support common definitions and schema that have been standardized, but were getting frustrated by not being able to do everything they wanted, and having to live within the constraints introduced by the standardized definitions. Which is something that doesn't get much discussion by those of us who are advocating for the standardization of APIs and schema.

Web APIs Come With Their Own Constraints
We all want more developers to use our APIs. However, with more usage comes more responsibility. Also, to get more usage our APIs need to speak to a wider audience, something that common API definitions and schema will help with. This is why web APIs are working: they speak to a wider audience. However, with this architectural decision we are making some tradeoffs, and accepting some constraints in how we do our APIs. This is why REST is just one tool in our toolbox, so we can use the right tool, and establish the right set of constraints, to allow our APIs to be successful. The wider our API toolbox, the more options we will have available when it comes to how we design our APIs, and what schema we can employ.

Allowing For Content Negotiation By Consumers
One way I’ve encouraged folks to help alleviate some of the pain around the adoption of common API definitions and schema is to provide content negotiation to consumers, allowing them to obtain the response they are looking for. If people want the standardized approaches they can choose those, and if they want something more precise, or custom they can choose that. This also allows API providers to work around the API standards that have become bottlenecks, while still supporting them where they matter. Having the best of both worlds, where you are supporting the common approach, but still able to do what you want when you feel it is important. Allowing for experimentation as well as standardization using the same APIs.
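As a hypothetical example of what this looks like on the wire, the same resource can be requested using the standardized schema, or a provider's custom one, just by changing the Accept header–the media type names here are made up to illustrate the idea:

    GET /organizations/24 HTTP/1.1
    Host: api.example.com
    Accept: application/vnd.hsds+json

    GET /organizations/24 HTTP/1.1
    Host: api.example.com
    Accept: application/vnd.example.custom+json

The first request asks for the standardized representation, and the second asks for the provider's custom one, letting both live side by side behind a single API path.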

Participate In Standards Body and Process
Another way to help move things forward is to participate in the standards body that is moving an API definition or schema forward. Make sure you have a seat at the table so that you can present your case for where the problems are, and how to improve on the design, definition, and schema being evolved. Taking a lead in creating the world you want to see when it comes to API and schema standards, and not just sitting back being frustrated because it doesn't do what you want it to do. Having a role in the standards body, and actively participating in the process, isn't easy, and it can be time consuming, but it can be worth it down the road, helping you better achieve your goals when it comes to your APIs operating as you expect, as well as serving the wider community and industry.

Delivering APIs at scale won't be easy. To reach a wide audience with your API it helps to be speaking a common vocabulary. This doesn't always allow you to move as fast as you'd like, and do everything exactly as you envision. You will have to compromise. Operate within constraints. However, it can be worth it. Not just for your organization, but for the overall health of your community, and the industry you operate in. You never know–with a little patience, collaboration, and communication, you might even learn new approaches to defining your APIs and schema that you wouldn't have thought about in isolation. Also, experimentation with new patterns will still be important, even while working to standardize things. In the end, a balance between standardized and custom will make the most sense, and hopefully alleviate your frustrations in moving things forward.


Turning The Stack Exchange Questions API Into 25 Separate Tech Topic Streaming APIs

I'm turning different APIs into topical streams using Streamdata.io. I have been profiling hundreds of different APIs as part of my work to build out the Streamdata.io API Gallery, and as I'm creating OpenAPI definitions for each API, I'm realizing the potential for event-driven and topical streams across many existing web APIs. One thing I do after profiling each API is benchmark it to see how often the data changes, applying what we are calling StreamRank to each API path. Then I try to make sure all the parameters, and even the enum values for each parameter, are represented in each API definition, helping me see the ENTIRE surface area of an API. Which is something that really illuminates the possibilities surrounding each API.

After profiling the Stack Exchange Questions API, I began to see how much functionality and data is buried within a single API endpoint, which was something I wanted to expose and make much easier to access. Taking a single OpenAPI definition for the Stack Exchange Questions API:

Then exploding it into 25 separate tech topic streaming APIs. Taking the top 25 enum values for the tags parameter for the Stack Overflow site, and exploding them into 25 separate streaming API resources. To do this, I'm taking each OpenAPI definition, and generating an AsyncAPI definition to represent each possible stream:

I'm not 100% sure I'm properly generating the AsyncAPI currently, as I'm still learning how to use the topics and streams collections properly. However, the OpenAPI definition above is meant to represent the web API resource, and the AsyncAPI definition is meant to represent the streaming edition of the same resource. Something that can be replicated for any tag, or any site that is available via the Stack Exchange API. Turning the existing Stack Exchange API into streaming topic APIs, where people can subscribe to only the topics they are interested in receiving updates about.
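To give a rough sense of the pairing, here is an abbreviated sketch of what the two definitions might look like for a single tag–keep in mind I'm still learning AsyncAPI, so treat this as an illustration rather than a proper definition:

    # OpenAPI – the web API resource
    paths:
      /2.2/questions:
        get:
          parameters:
            - name: tagged
              in: query
              type: string
              enum: [javascript]
            - name: site
              in: query
              type: string
              enum: [stackoverflow]

    # AsyncAPI – the streaming edition of the same resource
    asyncapi: '1.0.0'
    info:
      title: Stack Overflow JavaScript Questions Stream
      version: '1.0.0'
    topics:
      questions.javascript:
        subscribe:
          summary: New and updated questions tagged javascript on Stack Overflow.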

At this point I’m just experimenting with what is possible with OpenAPI and AsyncAPI specifications, and understanding what I can do with some of the existing APIs I am already using each day. I’m going to try and turn this into a prototype, delivering streaming APIs for all 25 of the top Stack Overflow tags. To demonstrate what is possible on Stack Exchange, but also to establish a proof of concept that I can apply to other APIs like Reddit, Github, and others. Then eventually automating the creation of streaming topic APIs using the OpenAPI definitions for common APIs, and the Streamdata.io service.


Proprietary Views Of Your Taxonomy

I've been investing a lot more energy into open data and APIs involved with city government, something I've dabbled in as long as I've been doing API Evangelist, but something I've ratcheted up pretty significantly over the last couple of years. As part of this work, I've increasingly come across some pretty proprietary stances when it comes to data that is part of city operations–this stuff has been seen as gold, long before Silicon Valley came along, with long lines of folks looking to lock it up and control it.

Helping civic data stakeholders separate the licensing layers around their open data and APIs is something I do as the API Evangelist. Another layer I will be adding to this discussion is taxonomy–how city data folks are categorizing, organizing, and classifying the valuable data needed to make our cities work. I've been coming across more vendors in the city open data world who feel their taxonomy is their secret sauce and falls under intellectual property protection. I don't have a fully formed argument for why this is a bad idea yet, but I will keep writing about it as part of my wider API licensing work to help flesh out my ideas, and create a more coherent and precise argument.

I understand that some companies put a lot of work into their taxonomies, and the descriptions behind how they organize things, but like API definitions and schema, these are aspects of your open data and API operations that you want to be widely understood, shared, and reusable knowledge within your systems, as well as within the 3rd party integrations of your partners and customers. Making your taxonomy proprietary isn't going to help your brand, or get you ahead in the game. I recommend focusing on other aspects of the value you bring to the table and keeping your taxonomy as openly licensed as you possibly can, encouraging adoption by others. I'll work on a more robust argument as I work through a variety of projects that will potentially be hurt by the proprietary views on taxonomy I'm seeing.


KDL: A Graphical Notation for Kubernetes API Objects

I am learning about the Kubernetes Deployment Language (KDL) today, trying to understand their approach to defining their notion of Kubernetes API objects. It feels like an interesting evolution in how we define our infrastructure, and begin standardizing the API layer for it so that we can orchestrate as we need.

They are standardizing the Kubernetes API objects into the following buckets:

  • Cluster - The orchestration level of things.
  • Compute - The individual compute level.
  • Networking - The networking layer of it all.
  • Storage - Storage behind our APIs.

This has elements of my API lifecycle research, as well as a containerized, clustered, BaaS 2.0 in my view. Standardizing how we define and describe the essential layers of our API and application infrastructure. I could also see standardizing the testing, monitoring, performance, security, and other critical aspects of doing this at scale.

I'm also fascinated by how fast YAML has become the default orchestration template language for folks in the cloud containerization space. I'll add KDL to my API definition and container research, keep an eye on what they are up to, and watch for other approaches to standardizing the API layer for deploying, managing, and scaling our increasingly containerized API infrastructure.
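For anyone who hasn't stared at this YAML yet, here is a stripped down example of a Kubernetes Deployment object touching the compute and networking layers–the names and container image are placeholders:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: questions-api
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: questions-api
      template:
        metadata:
          labels:
            app: questions-api
        spec:
          containers:
            - name: questions-api
              image: example/questions-api:1.0
              ports:
                - containerPort: 8080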


Examples Of The OpenAPI Specification Used For Government APIs

I was answering some questions for my partners over at DreamFactory when it comes to APIs in government, and one of the questions they asked was for some examples of the OpenAPI specification being used in government. To help out, I started going through my list of government APIs looking for any examples in the wild–here is what I found:

I am sure there are more OpenAPIs in use across government, but this is what I could find in a five-minute search of my API database. It provides us with seven quality examples of OpenAPI being used to document government APIs. I don't see OpenAPI used for much beyond documentation, but it is still a good start.

If you know of any government APIs that use OpenAPI feel free to let me know. I'd love to keep adding examples to my research so I can pull them up quickly when I am asked questions like this in the future, and be able to highlight best practices for API operations at the city, county, state, and federal levels of government.


Patent US 8997069: API Descriptions

There are so many API patents out there, I'm going to have to start posting one a day just to keep up. Lucky for you, as I begin to get really depressed by all the API patents, I lose interest in reading them and begin to work harder looking for positive examples of APIs in the world–but until then, here is today's depressing as fuck API patent.

Title: API descriptions
Number: US 8997069
Owner: Microsoft Technology Licensing, LLC
Abstract: API description techniques are described for consumption by dynamically-typed languages. In one or more implementations, machine-readable data is parsed to locate descriptions of one or more application programming interfaces (APIs). The descriptions of the one or more application programming interfaces are projected into an alternate form that is different than a form of the machine-readable data.

I don’t mean to be a complete dick here, but why would you think this is a good idea? I get that companies want their employees to develop a patent portfolio, but this one is patenting an essential ingredient that makes APIs work. If you enforce this patent it will be worthless because this whole API thing won’t work, and if you don’t enforce it, it will be worthless because it does nothing–money well spent on the filing fee.

I just need to file my patent on patenting APIs and end all of this nonsense. I know y'all think I'm crazy for my belief that APIs shouldn't be patented, but every time I dive into my patent research I can't help but think y'all are the crazy ones, and I'm actually the sane one. I just do not understand how this patent is going to help anyone, or how it represents any of the value that APIs, or even a patent, can bring to the table.


APIs Are How Our Digital Selves are Learning To Speak With Each Other

I know this will sound funny to many folks, but when I see APIs, I see language and communication, and humans learning to speak with each other in this new digital world we are creating for ourselves. My friend Erik Wilde (@dret) tweeted a reminder for me that APIs are indeed a language.

Every second on our laptops and mobile phones we are communicating with many different companies and individuals. With each wall post, Tweet, photo push, or video stream we are communicating with our friends, family, and the public. Each of these interactions is being defined and facilitated using an API. An API call just for saying something in text, an image, or a video. APIs are the digital language we use to communicate online and via our mobile devices.

Uber geeks like me spend their days trying to map out and understand these direct interactions, as well as the growing number of indirect interactions. For every direct communication, there are usually numerous other indirect communications with advertisers, platform providers, or maybe even law enforcement, researchers, or anyone else with access to the communication channels. We aren’t just learning to directly communicate, we are also being conditioned to participate indirectly in conversations we can’t see–unless you are tuned into the bigger picture of the API economy.

When we post that photo, companies are whispering about what is in the photo, where it was taken, and what meaning it has. When we share that news link on Facebook, companies have a discussion about the truthfulness and impact of the link, maybe the psychological profile behind the link and where we fit into their psychological profile database. In some scenarios, they are talking directly about us personally like we are sitting in the room, other times they are talking about us like we are just a number in a larger demographic pool.

In alignment with the real world, the majority of these conversations are being held between men, behind closed doors. Publicly, the conversations are usually directed by people with a bullhorn, talking over others, as well as whispering behind and around people who are completely unaware that these conversations about them are even occurring. The average person can't hear the whispering, or just does not speak the language that is being used around them, about them, each moment of each day.

Those of us in the know are scrambling to understand, control, and direct the conversations that are occurring. There is a lot of money to be made when you are part of these conversations. Or at least when you have a whole bunch of people on your platform to have a conversation about, or around. People don't realize that for every direct conversation you have online, there are probably 20 conversations going on about that conversation. What will they buy next? Who do they know? What is in that photo they just shared? Is this related post interesting to them? API-driven echoes of conversations upon conversations into infinity.

Sometimes I feel like Dr. Xavier from the X-men in that vault room connected to the machine when I am on the Internet studying APIs. I’m seeing millions of conversations going on–it is deafening. I don’t just see or hear the direct conversations, I hear the deafening sounds of advertisers, hackers, researchers, police, government, and everyone having a conversation around us. Many folks feel like the average person shouldn’t be included in the conversation–they do not have the interest or awareness to even engage. To me, it just feels like a new secretive world augmenting our physical worlds, where our digital selves are learning to speak with each other. What troubles me though, is that not everyone is actually engaged in the conversations they are included in, and are often asleep or sedated while their personal digital self is being manipulated, exploited, and p0wn3d.


The Github Repo Stripe Uses To Manage Their OpenAPI

I’m beating a drum every time I find a company managing their OpenAPI on Github, like we would the other code elements of our API operations. Today’s drumbeat comes from my friend Nicolas Grenié (@picsoung), who posted Stripe’s Github repository for their OpenAPI in our Slack channel for the super cool API Evangelists in the sector. ;-)

Along with the New York Times, Box, and other API providers, Stripe has a dedicated Github repo for managing their OpenAPI definition. This opens up the Stripe API for easy loading in client tools like Restlet Client and Postman, as well as generating code samples and SDKs using services like APIMATIC. Most importantly, it allows developers to easily understand the surface area of the Stripe API, in a way that is machine-readable, and portable.

It makes me happy to see leading API providers manage their own OpenAPI using Github like this. The API sector will be able to reach new heights when every single API provider manages their API definitions like this. I know, I know hypermedia folks–everyone should just do hypermedia. Yes, they should. However, we need some initial steps to occur before that is possible, and API providers being able to effectively communicate their API surface area to API consumers in a way that scales and can be used across the API lifecycle is an important part of this evolution. With each OpenAPI I find like this, I get more optimistic that we are getting closer to the future that RESTafarians and hypermedia folks envision–providers are doing the hard work of thinking about the definitions used in their APIs in the context of the entire API lifecycle, and the API consumers who exist along the way.


Managing Your Postman Collection Using Github

I have been encouraging API providers to publish and manage their API definitions using Github, similar to how you'd manage any code. Companies like Box and the NY Times are publishing their OpenAPI definitions to a single repository, allowing partners and API consumers to pull the latest version of the API definition and use it throughout the API lifecycle.

I stumbled across another example of managing your API definitions using Github, but this time it is the management of your Postman Collections in a Github repo from API management provider Apigee (now Google). The Postman Collection provides a complete description of the Apigee API management surface area, allowing API providers to easily automate or orchestrate their API operations using Apigee.

The Github repository provides a complete Postman Collection, along with instructions on how to load it, as well as a Run in Postman button allowing any consumer to instantly load the entire surface area of the Apigee API management solution into their Postman client. I am a big fan of managing your Postman Collections, as well as OpenAPI definitions, in this way–managing the definitions for your API similar to how you manage your code, but also making these machine-readable definitions available for forking, checking out, and integration anywhere across the API lifecycle.


Adding Three APIMATIC OpenAPI Extensions To The OpenAPI Toolbox

I’ve added three OpenAPI extensions from APIMATIC to my OpenAPI Toolbox, adding to the number of extensions I’m tracking on that service providers and tooling developers are using as part of their API solutions. APIMATIC provides SDK code generation services, so their OpenAPI extensions are all about customizing how you deploy code as part of the integration process.

These are the three OpenAPI extensions I am adding from them:

  • x-codegen-settings - These settings are globally applicable to all operations and schema definitions.
  • x-operation-settings - These settings can be specified inside an "operation" object.
  • x-additional-headers - These headers are in addition to any headers required for authentication or defined as parameters.
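
As a rough, hypothetical sketch of how extensions like these get embedded in an OpenAPI definition–the properties inside each extension are placeholders I made up, so check the APIMATIC documentation for what is actually supported:

    swagger: '2.0'
    info:
      title: Example API
      version: '1.0'
    x-codegen-settings:
      projectName: example-sdk          # hypothetical global code generation setting
    paths:
      /items:
        get:
          x-operation-settings:
            isDeprecated: false         # hypothetical per-operation setting
          x-additional-headers:
            X-Client-Version: '1.0'     # an extra header beyond auth and parameters
          responses:
            '200':
              description: OK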

If you have ever used APIMATIC you know that you can do a lot more than just “SDK generation”, which often has a bad reputation. APIMATIC provides some interesting ways you can use OpenAPI to dial in your SDK, script, and code generation as part of any continuous integration lifecycle.

Providing another example of how you don't have to live within the constraints of the current OpenAPI spec. Anyone can augment and extend the current specification to meet their unique needs. Then who knows, maybe your extension will become useful enough that it eventually gets added to the core specification. Which is part of the reason I'm aggregating these extensions, and including them in the OpenAPI Toolbox.


Every API Should Begin With A Github Repository

I'm working on my API definition and design strategy for my human services API work, and as I was doing this, Box went all in on OpenAPI, adding to the number of API providers I track on who not only have an OpenAPI, but also use Github as the core management for their API definition.

Part of my API definition and design advice for human service API providers, and the vendors who sell software to them is that they have an OpenAPI and JSON schema defined for their API, and share this either publicly or privately using a Github repository. When I evaluate a new vendor or service provider as part of the Human Services Data API (HSDA) specification I’m beginning to require that they share their API definition and schema using Github–if you don’t have one, I’ll create it for you. Having a machine-readable definition of the surface area of an API, and the underlying schema in a Github repo I can checkout, commit to, and access via an API is essential.

Every API should begin with a Github repository in my opinion, where you can share the API definition, documentation, and schema, and have a conversation around these machine-readable docs using Github issues. Approaching your API in this way doesn't just make it easier to find when it comes to API discovery, it also makes your API contract available at all steps of the API lifecycle, from design to deprecation.


Craft Your API Design Guide So You Can Move To Other Areas of The Lifecycle

I am working on an API definition and design guide for my human services API work, helping establish a framework for approaching API design as part of the human services data and API specification, but also for implementers to follow in their own individual deployments. Every time I work on the subject of API design, I’m reminded of how far behind the API sector is when it comes to standardizing what it is we do.

Every month or so I see a new company publicly share their API design guide. When they do, my friend Arnaud always adds it to his API Stylebook, adding to the wealth of information available in his work. I'm happy to see each API design guide release, but in reality, ALL API providers should have an API design guide, and they should also be open to publishing it publicly, showing their consumers they have their act together, and sharing with the wider API community the best practices in play.

The lack of companies sharing their API design practices and their API definitions is why we have such a deficiency when it comes to common API patterns in use. It is why we have so many variations of web APIs, as well as the underlying schema. We have an API industry because early practitioners like SalesForce, Amazon, eBay, Flickr, Delicious, Twitter, Youtube, and others were open with their API operations. People emulate what they see and know. Each wave of the API sector depends on the previous wave sharing what they do publicly–it is how this all works.

To demonstrate even further how deficient we are, I do not find companies sharing their guides for API deployment, management, testing, monitoring, clients, and other stops along the API lifecycle. I'm glad we are seeing an uptick in the number of API design guides, but we need this practice to spread to every other stop. We need successful providers to share how they deploy their APIs, and when any company hires a new developer, that developer should ALWAYS be given a standard guide for deploying, managing, and testing, as well as designing APIs.

It’s not rocket science, and honestly, it’s not even technical. It just means pausing for a moment, thinking about how we approach each stop in the API lifecycle, writing up an overview, publishing, and sharing it with API stakeholders, and even the wider API community. Every company doing APIs in 2017 should be crafting an API design guide so you can get to work on guides for the other areas of your lifecycle, thinking through and standardizing your approach, and making it known to every person involved–ideally, you are also being very public about all of this, and sharing your work with me and Arnaud, so we can get the word out about the good stuff you are up to! ;-)


Considering Using HTTP Prefer Header Instead Of Field Filtering For This API

I am working my way through a variety of API design considerations for the Human Services Data API (HSDA) that I'm working on with Open Referral. I was working through my thoughts on how I wanted to approach the filtering of the underlying data schema of the API, and Shelby Switzer (@switzerly) suggested I follow Irakli Nadareishvili's advice and consider using RFC 7240, the Prefer Header for HTTP, instead of some of the commonly seen approaches to filtering which fields are returned in an API response.

I find this approach to be of interest for this Human Services Data API implementation because I want to lean on API design, over providing parameters for consumers to dial in the query they are looking for. While I’m not opposed to going down the route of providing a more parameter based approach to defining API responses, in the beginning I want to carefully craft endpoints for specific use cases, and I think the usage of the HTTP Prefer Header helps extend this to the schema, allowing me to craft simple, full, or specialized representations of the schema for a variety of potential use cases. (ie. mobile, voice, bot)

It adds a new dimension to API design for me. Since I've been using OpenAPI I've gotten better at considering the schema alongside the surface area of the APIs I design, showing how it is used in the request and response structure of my APIs. I like the idea of providing tailored schema in responses over allowing consumers to dynamically filter the schema that is returned using request parameters. At some point, I can see embracing a GraphQL approach to this, but I don't think that human service data stewards will always know what they want, and we need to invest in a thoughtful set of design patterns that reflect exactly the schema they will need.

Early on in this effort, I like allowing API consumers to request minimal, standard or full schema for human service organizations, locations, and services, using the Prefer header, over adding another set of parameters that filter the fields–it reduces the cognitive load for them in the beginning. Before I introduce any more parameters to the surface area, I want to better understand some of the other aspects of taxonomy and metadata proposed as part of HSDS. At this point, I’m just learning about the Prefer header, and suggesting it as a possible solution for allowing human services API consumers to have more control over the schema that is returned to them, without too much overhead.
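A hypothetical exchange using the Prefer header might look like the following–the return=minimal preference comes from RFC 7240, while the path and response body are placeholders, and whether HSDA uses the standard preferences or defines its own is still up in the air:

    GET /organizations/500 HTTP/1.1
    Host: api.example.com
    Accept: application/json
    Prefer: return=minimal

    HTTP/1.1 200 OK
    Content-Type: application/json
    Preference-Applied: return=minimal

    {
      "id": "500",
      "name": "Example Community Services"
    }

The Preference-Applied response header, also from RFC 7240, lets the consumer know which representation they actually received.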


Some Thoughts On OpenAPI Not Being The Solution

I get regular waves of folks who chime in anytime I push on one of the hot-button topics on my site like hypermedia and OpenAPI. I have a couple of messages in my inbox regarding some stories I've done about OpenAPI recently, how it isn't sustainable, and how we should be putting hypermedia practices to work instead. I'm still working on my responses, but I wanted to think through some of my thoughts here on the blog before I respond–I like to simmer on these things, releasing the emotional exhaust before I respond.

When it comes to the arguments from the hypermedia folks, the short answer is that I agree. I think many of the APIs I'm seeing designed using OpenAPI would benefit from some hypermedia patterns. However, there is such a big gap between where people are, and where we need to be for hypermedia to become the default, and getting people there is easier said than done. I view OpenAPI as a scaffolding or bridge to transport API designers, developers, and architects from where we are, to where we need to be–at scale.

I wish I could snap my fingers and everyone understood how the web works and understood the pros and cons of each of the leading hypermedia types. Many developers do not even understand how the web works. Hell, I'm still learning new things every day, and I've been doing this full time for seven years. Most developers still do not even properly include and use HTTP status codes in their simple API designs, let alone understand the intricate relationship possibilities between their resources, and the clients that will be consuming them. Think about it: as a developer, I don't even have the time, budget, or care to articulate the details of why a response failed, and you expect that I have the time, budget, and care for link relations, and the evolution of the clients built on top of my APIs? Really?

I get it. You want everyone to see the world like you do. You are seeing us headed down a bad road, and want to do something about it. So do I. I’m doing something about it full time, telling stories about hypermedia APIs that exist out there, and even crafting my own hypermedia implementations, and telling the story around them. What are you doing to help equip folks, and educate them about best practices? I think once you hit the road doing this type of storytelling in a sustained way, you will see things differently, and begin to see how much investment we need in basic web literacy–something the OpenAPI spec is helping break down for folks, and providing them with a bridge to incrementally get from where they are at, to where they need to be.

Folks need to learn about consistent design patterns for their requests and responses. Think about getting their schema house in order. Think through content negotiation in a consistent way. Be able to see plainly that they only use three HTTP status codes: 200, 404, and 500. OpenAPI provides a scaffolding for API developers to begin thinking about design, learn how the web works, and begin to share and communicate around what they've learned using a common language. It also allows them to learn from other API designers who are further along in their journey, or who have just had success with a single API pattern.

For you (the hypermedia guru), OpenAPI is redundant, meaningless, and unnecessary. For many others, it provides them with a healthy blueprint to follow, and allows them to be spoon-fed web literacy and good API design practices in the increments their daily job and current budget allow. Few API practitioners have the luxury of immersing themselves in deep thought about the architectural styles and the design of network-based software architectures, or wading through W3C standards. They just need to get their job done, and the API deployed, with very little time for much else.

It is up to us leaders in the space to help provide them with the knowledge they’ll need. Not just in book format. Not everyone learns this way. We need to also point out the existing healthy patterns in use already. We need to invest in more blueprints, prototypes, and working examples of good API design. I think my default response for folks who tell me that OpenAPI isn’t a solution will be to ask them to send me a link to all the working examples and stories of the future they envision. I think until you’ve paid your dues in this area, you won’t see how hard it is to educate the masses and actually get folks headed in the right direction at scale.

I often get frustrated with the speed at which things move when I’m purely looking at things from what I want and what I know and understand. However, when I step back and think about what it will truly take to get the masses of developers and business users on-boarded with the API economy–the OpenAPI specification makes a lot more sense as a scaffolding for helping us get where we need. Oh, and also I watch Darrel Miller (@darrel_miller) push on the specification on a daily basis, and I know things are going to be ok.


Keen IO Pushing Forward The Data Schema Conversation

I wrote earlier this year that I would like us all to focus more on the schema and definitions of the data we use across API operations. Since then I've been keeping an eye out for other interesting signs in this area, like Postman with their data editor, and now I've come across the Streams Manager for inspecting the data schema of your event collections in Keen IO.

With Streams Manager you can:

  • Inspect and review the data schema for each of your event collections
  • Review the last 10 events for each of your event collections
  • Delete event collections that are no longer needed
  • Inspect the trends across your combined data streams over the last 30-day period

Keen IO provides us with an interesting approach to getting in tune with the schema across your event collections. I’d like to see more of this across the API lifecycle. I understand that companies like Runscope, Stoplight, Postman, and others already let us peek inside of each API call, which includes a look at the schema in play. This is good, but I’d like to see more schema management solutions at this layer helping API providers from design to deprecation.

In 2017 we all have an insane amount of bits and bytes flowing around us in our daily business operations. APIs are only enabling this to grow, opening up access to our bits to our partners and on the open web. We need more solutions like Keen’s Stream Manager, but for every layer of the API stack, allowing us to get our schema house in order, and make sense of the growing data bits we are producing, managing, and sharing.


A Working Example Of OpenAPI Version 3.0 To Learn From

I learn almost everything I know by reverse engineering what I come across doing my monitoring of the API space. I feel it is important for leaders in the API space to share a significant portion of our work publicly, and tell the story behind our work so that others can learn from us. Whether folks realize it or not, this is why much of the wider API community works as it does, and has accumulated much of the forward motion we all enjoy today.

I have been getting a lot of questions lately regarding tooling for version 3.0 of the OpenAPI specification. I'm working to evolve my OpenAPI toolbox, and populate it with any services or tools I come across that support version 3.0. I have also been getting requests from folks asking if I know of any working examples of OpenAPI definitions in the version 3.0 format. So I reached out to my friend Mike Ralphson (@PermittedSoc), who is doing interesting work when it comes to OpenAPI and GraphQL, and he responded with a working example for us of moving from version 2.0 to 3.0 of the OpenAPI spec.

He took the version 2.0 from the Authentiq Connect API:

Then he converted it to version 3.0:

Mike accomplished all of this with his OpenAPI 2.0 to 3.0 converter and validator that he has published to Github. His Github organization is always full of interesting work; I recommend staying in tune with whatever he is committing.
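If you want to try the conversion yourself, the basic usage of swagger2openapi looks something like this–the flags may vary between versions, so check the README for specifics:

    npm install -g swagger2openapi
    swagger2openapi --yaml --outfile openapi.yaml swagger.yaml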

Thanks for sharing Mike! Hopefully, it provides a reference for folks seeking a working example of OpenAPI 3.0 out in the wild. I’ll be keeping an eye out for additional version 3.0 OpenAPI examples, as well as any other tooling like swagger2openapi that I can add to the toolbox.


Box Goes All In On OpenAPI

Box has gone all in on OpenAPI. They have published an OpenAPI for their document and storage API on Github, where it can be used in a variety of tools and services, as well as be maintained as part of the Box platform operations. Adding to the number of high-profile API providers managing their OpenAPI definitions on Github, like the NY Times.

As part of their OpenAPI release, Box published a blog post that touches on all the major benefits of having an OpenAPI, like forking on Github for integration into your workflow, generating documentation and visualizations, code, mock APIs, and even monitoring and testing using Runscope. It's good to see a major API provider drinking the OpenAPI Kool-Aid, and working to reduce friction for their developers.

I would love to not be in the business of crafting complete OpenAPIs for API providers. I would like every single API provider to be maintaining their own OpenAPI on Github like Box and the NY Times (http://apievangelist.com/2017/03/01/new-york-times-manages-their-openapi-using-github/) do. Then I (we) could spend time indexing, curating, and developing interesting tooling and visualizations to help us tell stories around APIs, helping developers and business owners understand what is possible with the growing number of APIs available today.


Key Factors Determining Who Succeeds In The API and ML Marketplace Game

I was having a discussion with an investor today about the potential of algorithmic-centered API marketplaces. I’m not talking about API marketplaces like Mashape, I’m more talking about ML API marketplaces like Algorithmia. This conversation spans multiple areas of my API lifecycle research, so I wanted to explore my thoughts on the subject some more.

I really do not get excited about API marketplaces when you think just about API discovery–how do I find an API? We need solutions in this area, but I feel good implementations will immediately move from useful to commodity, with companies like Amazon already pushing this towards a reality.

There are a handful of key factors for determining who ultimately wins the API Machine Learning (ML) marketplace game:

  • Always Modular - Everything has to be decoupled and deliver micro value. Vendors will be tempted to build in dependency and emphasize relationships and partnerships, but the smaller and more modular will always win out.
  • Easy Multi-Cloud - Whatever is available in a marketplace has to be available on all major platforms. Even if the marketplace is AWS, each unit of compute has to be transferrable to Google or Azure cloud without ANY friction.
  • Enterprise Ready - The biggest failure of API marketplaces has always been being public. On-premise and private cloud API ML marketplaces will always be more successful that their public counterparts. The marketplace that caters to the enterprise will do well.
  • Financial Engine - The key to any marketplace is its financial engine. This is one area where AWS is way ahead of the game, with their approach to monetizing digital bits, and their sophisticated pricing calculators for estimating and predicting costs giving them a significant advantage. Whichever marketplace allows for innovation at the financial engine level will win.
  • Definition Driven - Marketplaces of the future will have to be definition driven. Everything has to have a YAML or JSON definition, from the API interface, and the schema defining inputs and outputs, to the pricing, licensing, TOS, and SLA. The technology, business, and politics of the marketplace need to be defined in a machine-readable way that can be measured, exchanged, and syndicated as needed (see the sketch after this list).
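
To make the definition-driven point more concrete, here is a rough sketch of what a machine-readable marketplace listing might look like--all of the field names below are hypothetical, my own invention rather than any existing marketplace specification:

```yaml
# Hypothetical marketplace listing -- field names and values are illustrative only
name: image-classification
interface:
  openapi: ./openapi.yaml              # the API contract
  input_schema: ./schema/input.json    # machine-readable inputs
  output_schema: ./schema/output.json  # machine-readable outputs
pricing:
  model: per-call
  unit_price_usd: 0.0001
licensing:
  code: MIT
  data: CC-BY-4.0
terms_of_service: https://example.com/tos
sla:
  uptime: "99.9%"
  support_response_hours: 24
```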

Google has inroads into this realm with their GSuite Marketplace and Play marketplace, but it feels more fragmented than the Azure and AWS approaches. None of them are as far along as Algorithmia when it comes to specifically ML-focused APIs. In coming months I will invest more time into mapping out what is available via marketplaces, trying to better understand their contents–whether application, SaaS, data, content, or algorithmic APIs.

I feel like many marketplace conversations often get lost in the discovery layer. In my opinion, there are many other contributing factors beyond just finding things. I talked about the retail and wholesale economics of Algorithmia’s approach back in January, and I continue to think the economic engine will be one of the biggest factors in any API ML marketplace success–how it allows marketplace vendors to understand, experiment, and scale the revenue part of things without giving up too big a slice of the pie.

Beyond revenue, modularity and portability will be just as important as the financial engine, providing vital relief valves for some of the classic silo and walled garden effects we’ve seen impact the success of previous marketplace efforts. I’ll keep studying the approach of smaller providers like Algorithmia, as well as those of the cloud giants, and see where all of this goes. It is natural to default to AWS as the leader when it comes to the cloud, but I’m continually impressed with what I’m seeing out of Azure, and I feel that Google has a significant advantage when it comes to TensorFlow, as well as their overall public API experience–we will see.


Simple APIs With Jekyll And Github With Data Managed Via Google Spreadsheets

I'm always looking for simpler and cheaper ways of doing APIs that can help anyone easily manage data while making it available in both a human and machine readable way--preferably something developers and non-developers both will find useful. I've pushed forward my use of Github when it comes to managing simple datasets, and have a new approach I want to share, and potentially use across other projects.

You can find a working example of this in action with my OpenAPI Toolbox, where I'm looking to manage and share a listing of tooling that is built on top of the OpenAPI specification. Like the rest of my API research, I am looking to manage the data in a simple and cheap way that lets me offload the storage, compute, and bandwidth to other providers, preferably ones that don't cost me a dime. While not a solution that would work in every API scenario, I am pretty happy with the formula I've come up with for my OpenAPI Toolbox.

Data Storage and Management In Google Sheets
The data used in the OpenAPI Toolbox comes from a public Google Sheet. I manage all the tools in the toolbox via this spreadsheet, tracking title, description, URL, image, organization, types, tags, and license. I have two separate worksheets, one of which tracks details on the organizations, and the other keeps track of each individual tool in the toolbox. This allows the data to be managed by anyone I give access to the sheet, offloading storage and data management to Google. Sure, it has its limitations, but for simple datasets, it works better than a database in my opinion.
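
Once synced into the repository, each spreadsheet row becomes an entry in the Jekyll data store. A hypothetical entry, using the fields tracked in the spreadsheet, might look something like this (the file name _data/tools.yml and the values are illustrative, not my actual data):

```yaml
# Hypothetical entry in _data/tools.yml -- values are placeholders
- title: Example OpenAPI Editor
  description: A browser-based editor for designing OpenAPI definitions
  url: https://example.com/openapi-editor
  image: /images/openapi-editor.png
  organization: Example Org
  types:
    - editor
  tags:
    - javascript
  license: MIT
```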

Website and Basic API Hosting Using Github
First and foremost, the data in the OpenAPI Toolbox is meant to be accessible by any human on the web. Github, using their Github Pages solution, combined with the static website tool Jekyll, provides a rich environment for managing this data-driven toolbox. Jekyll provides the ability to store YAML data in its _data folder, which I can then use across static HTML pages that display the data using Liquid syntax. This approach to managing data allows for easy publishing of static representations in HTML, JSON, and YAML, making the data easily consumable by humans and machines, in an environment that is version controlled, forkable, and free for publicly available projects.
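
As a rough sketch of how this looks in practice (assuming the hypothetical _data/tools.yml from above), a Jekyll page can loop over the data with Liquid like this:

```liquid
<!-- Hypothetical Jekyll page using Liquid to list tools from _data/tools.yml -->
<ul>
  {% for tool in site.data.tools %}
    <li>
      <a href="{{ tool.url }}">{{ tool.title }}</a> - {{ tool.description }}
    </li>
  {% endfor %}
</ul>
```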

JavaScript Spreadsheet Sync To YAML Data Store
To keep the data in the Google Spreadsheet in sync with the YAML data store in the Github hosted repository I use a simple JavaScript driven page on the website. To make it work you have to provide a valid Github OAuth token, passed along as a query string parameter like this: http://openapi.toolbox.apievangelist.com/pull-spreadsheet/?token=[github token]. The token can be acquired by doing the usual OAuth dance with Github, or by issuing a personal token from any Github account. If the user is a valid contributor on the repository, the JavaScript will pull a recent copy of the data in the Google Spreadsheet, and publish it as YAML in the _data folder for the toolbox repository--otherwise, it just throws an error.

HTML Toolbox For Humans To Browse The Toolbox
Jekyll provides a static website that acts as the public face for the OpenAPI Toolbox. The home page provides icon links to each of the types of tools I have indexed, as well as to specific tags I've selected, such as the programming language of each tool. Each page of the website is an HTML page that uses Liquid to display data stored in the central YAML store. Liquid handles the filtering of data by type, tags, or any other criteria I choose. Soon I will be adding search, and other ways to browse the data in the toolbox as the data store grows and I obtain more data points to slice and dice things on.

JSON API To Put To Use Across Other Applications
To provide the API-esque functionality I also use Liquid to display data from the YAML data store, but instead of publishing it as HTML, I publish it as JSON, essentially providing a static API facade. The primary objective of this type of implementation is to allow for GET requests on a variety of machine-readable paths for the toolbox. I publish a full JSON listing of the entire toolbox, as well as more precise paths for getting at types of tools, and specific programming language tools. Similar to the human side of the toolbox, I will be adding more paths as more data points become available in the OpenAPI toolbox data store.
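
Here is a minimal sketch of what one of these static JSON paths can look like in Jekyll--again assuming the hypothetical _data/tools.yml fields, with the permalink being illustrative rather than one of my actual paths:

```liquid
---
layout: null
permalink: /api/tools.json
---
[
{% for tool in site.data.tools %}
  {
    "title": {{ tool.title | jsonify }},
    "url": {{ tool.url | jsonify }},
    "tags": {{ tool.tags | jsonify }}
  }{% unless forloop.last %},{% endunless %}
{% endfor %}
]
```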

Documentation Using Liquid and OpenAPI Definition
Rather than just making the data available via JSON files, I wanted to also provide simple API documentation demonstrating what is possible with the data stored in the toolbox. I crafted an OpenAPI for the OpenAPI Toolbox API, providing a machine readable definition of all the paths available. Again, using Liquid I generate simple API documentation and schema that actually allow you to make calls against the API, using a simple interactive JavaScript interface. While the OpenAPI Toolbox is technically static, using Liquid and OpenAPI I was able to mimic much of the functionality developers are used to when it comes to API integration.
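
For a sense of what the definition behind that documentation might contain, here is a minimal sketch of an OpenAPI describing one read-only path--the path and schema are hypothetical stand-ins, not my actual toolbox definition:

```yaml
# Hypothetical OpenAPI 3.0 fragment for a static, read-only toolbox API
openapi: 3.0.0
info:
  title: OpenAPI Toolbox API (illustrative)
  version: "1.0.0"
paths:
  /api/tools.json:
    get:
      summary: Retrieve the full listing of tools
      responses:
        "200":
          description: A list of tools in the toolbox
          content:
            application/json:
              schema:
                type: array
                items:
                  type: object
                  properties:
                    title:
                      type: string
                    url:
                      type: string
                    tags:
                      type: array
                      items:
                        type: string
```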

Project Support and Road Map Using Github Issues
As with all of my projects I am using the underlying Github issue management system to help me manage support and the roadmap for the project. Anyone can submit an issue regarding a tool they'd like to see in the toolbox, API integration, or possibly new APIs they would like to see published. I can use the Github issue management to handle general support requests, and communication around the project, as well as incrementally manage the data, schema, website, and API for the toolbox.

Indexed In Machine Readable Way With APIs.json
The entire project is indexed using APIs.json, providing metadata for the project as well as indexes for the API, support, and other aspects of operating the project. APIs.json is meant to provide a machine readable index for not just the API, which is already defined using OpenAPI, but for the rest of the project, including documentation and support, and eventually a road map, blog, and other supporting elements. Using the APIs.json index, other systems can easily discover the API, and programmatically access the data via the API, or even access the repository via the Github API, or the Google Sheet via its API--all the information is available in the APIs.json for use.
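
For readers unfamiliar with the format, a stripped-down apis.json for a project like this might look roughly like the following--the values are placeholders, and the property names reflect the APIs.json specification as I remember it, so check apisjson.org for the authoritative schema:

```json
{
  "name": "OpenAPI Toolbox (illustrative)",
  "description": "A placeholder index for a simple data-driven API project",
  "url": "https://example.com/apis.json",
  "specificationVersion": "0.14",
  "apis": [
    {
      "name": "OpenAPI Toolbox API",
      "description": "Read-only listing of tools built on the OpenAPI specification",
      "humanURL": "https://example.com/",
      "baseURL": "https://example.com/api",
      "properties": [
        { "type": "Swagger", "url": "https://example.com/openapi.yaml" },
        { "type": "x-github-repository", "url": "https://github.com/example/toolbox" }
      ]
    }
  ]
}
```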

A Free Forkable Way To Manage Simple Data And APIs
This approach to doing APIs won't be something you will want to do for every API implementation, but for simple data-driven projects like the OpenAPI Toolbox, it works great. Google Sheets and Github are both free, so I am offloading the storage, hosting, and bandwidth to other providers, while managing things in a way that any tech-savvy user could engage with. Anyone I give access to could manage entries in the toolbox using the Google Sheet, and both humans and other applications can engage with the published project toolbox.

I am going to continue evolving this approach to fit some of my other data-driven projects. I really like having all my projects self-contained as individual repositories, and the public side of things running using Jekyll--the entire API Evangelist network runs this way. I also like having the data managed in individual Google Sheets like this. It gives me simple data stores that I can easily manage with the help of other data stewards. Each of the resulting projects exists as a static representation of each data set--in this case an OpenAPI toolbox. I have many other toolboxes, toolkits, curriculum, and API research areas that are data driven, and will benefit from this approach.

What really makes me smile about this is that each project has an API representation at its core. Sure, I don't have POST, PUT, and DELETE capabilities for these APIs, or even advanced search capabilities, but for projects that are heavily read only--this works just fine. If you think about it though, I can easily access the data for reading and writing using the Google Sheets or Github APIs, depending on which layer I want access at. Really I am just looking to allow for easy management of data using Google Sheets, and simple publishing as YAML, JSON, and HTML, so that humans can browse it, and other applications can put it to use.


Adding An Extensions Category To The OpenAPI Toolbox

I added another type of tool to my OpenAPI Toolbox, this time it is extensions. They used to be called Swagger vendor extensions, and now they are simply called OpenAPI extensions, which allow any implementor to extend the schema beyond the current version of the API specification. All you do to add an OpenAPI extension is prepend x- to any property you wish to include in your OpenAPI, and validators will treat it as outside the core specification rather than flagging it as an error.
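
As a quick illustration, here is what an extension looks like inside an OpenAPI definition--the x-origin property shown is the provenance-style extension used by the APIs.guru directory as I understand it, included here as an assumed example rather than a formal standard:

```yaml
# Minimal OpenAPI 3.0 fragment carrying an x- extension
openapi: 3.0.0
info:
  title: Example API
  version: "1.0.0"
  x-origin:                 # specification extension -- any property prefixed with x-
    - format: openapi
      version: "3.0"
      url: https://example.com/source/openapi.yaml
paths: {}
```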

I have a whole list of vendor extensions I'd like to add, but I've started with a handful from Microsoft Flow, and my friends over at APIMATIC. These are two great examples of how OpenAPI extensions can be used in the API lifecycle: one is for integration platform as a service (iPaaS), and the other is for SDK generation and continuous integration. Both vendors needed to extend the specification to meet a specific need, so they just extended it as required--you can find the extensions in the new section of the toolbox.

My goal in adding the section to the OpenAPI toolbox is to highlight how people are evolving the specification outside the core working group. While some of the extensions are pretty unique, some of them have a potential common purpose. I will be adding some discovery-focused extensions next week from the OpenAPI directory APIs.guru, which I will be adopting and using in my own definitions to help me articulate the provenance of any OpenAPI definition in my catalog(s). Plus, I find it to be a learning experience to see how different vendors are putting them to work.

If you know of any OpenAPI extensions that are not in the toolbox currently feel free to submit an issue on the Github repository for the project. I'd like to evolve the collection to be a comprehensive look at how OpenAPI extensions are being used across the sector, from a diverse number of providers. I'm going to be teaching my own OpenAPI crawler to identify extensions within any OpenAPI it comes across, and automatically submit an issue on the toolbox, rapidly expanding the discovery of how they are used across a variety of implementations in coming months.


If you think there is a link I should have listed here feel free to tweet it at me, or submit it as a Github issue. Even though I do this full time, I'm still a one-person show, and I miss quite a bit, and depend on my network to help me know what is going on.