API Governance News

These are the news items I’ve curated in my monitoring of the API space that have some relevance to the API governance conversation. A conversation that is still growing, but has been putting down deeper roots lately as mainstream companies push to adopt APIs.

Github Is Your Asynchronous Microservice Showroom

When you spend time looking at a lot of microservices, across many different organizations, you really begin to get a feel for the ones whose owners / stewards are thinking about the bigger picture. When people are just focused on what the service does, and not how the service will actually be used, the Github repos tend to be cryptic, out of sync, and don’t really tell a story about what is happening. Github is often just seen as a vehicle for the code to participate in a pipeline, not as a way of speaking to the rest of the humans and systems involved in the overall microservices concert that is occurring.

Github Is Your Showroom Each microservice is self-contained within a Github repository, making it the showcase for the service. Remember, the service isn’t just the code and other artifacts buried away in folders, understandable only to someone who already knows how to operate the service or continuously deploy the code. It is a service. The service is part of a larger suite of services, and is meant to be understood and reused by other human beings in the future, potentially long after you are gone, and aren’t present to give a 15 minute presentation in a meeting. Github is your asynchronous microservices showroom, where ANYONE should be able to land, and understand what is happening.

README Is Your Menu The README is the centerpiece of your showroom. ANYONE should be able to land on the README for your service, and immediately get up to speed on what the service does, and where it is in its lifecycle. The README should not be written just for other developers; it should be written for other humans. It should have a title, and a short, concise, plain language description of what the service does, as well as any other relevant details about what the service delivers. The README for each service should be a snapshot of the service at any point in time, demonstrating what it delivers currently, what the road map contains, and the entire history of the service throughout its lifecycle. Every artifact, piece of documentation, and relevant element of a service should be available, and linked to, via the README for the service.

An OpenAPI Contract Your OpenAPI (fka Swagger) file is the central contract for your service, and the latest version(s) should be linked to prominently from the README for your service. This JSON or YAML definition isn’t just some output, exhaust from your code, it is the contract that defines the inputs and outputs of your service. This contract will be used to generate code, mocks, sample data, tests, documentation, and drive API governance and orchestration across not just your individual service, but potentially hundreds or thousands of services. An up to date, static representation of your service’s API contract should always be prominently featured off the README for a service, and ideally located in a consistent folder across services, as API architects, designers, coaches, and governance people will potentially be looking at many different OpenAPI definitions at any given moment.
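Since governance tooling will be scanning many repositories at once, keeping the contract in a predictable place pays off quickly. Here is a minimal sketch in Python of what that tooling might do, assuming a conventional `openapi/openapi.json` location (the path and field choices here are illustrative, not a standard):

```python
import json
import tempfile
from pathlib import Path

def summarize_contract(repo_root):
    """Read a service's OpenAPI contract from a conventional location,
    returning the details a governance dashboard would surface."""
    contract_path = Path(repo_root) / "openapi" / "openapi.json"
    spec = json.loads(contract_path.read_text())
    info = spec.get("info", {})
    return {
        "title": info.get("title", "(untitled)"),
        "version": info.get("version", "(unversioned)"),
        "paths": len(spec.get("paths", {})),
    }

# Demo against a throwaway repo containing a bare-bones contract.
repo = Path(tempfile.mkdtemp())
(repo / "openapi").mkdir()
(repo / "openapi" / "openapi.json").write_text(json.dumps({
    "openapi": "3.0.0",
    "info": {"title": "Widget Service", "version": "1.0.0"},
    "paths": {"/widgets": {}},
}))
print(summarize_contract(repo))  # → {'title': 'Widget Service', 'version': '1.0.0', 'paths': 1}
```

The point isn’t the code, it is that a consistent folder makes this kind of cross-repository sweep trivial for anyone reviewing hundreds of services.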

Issues Driven Conversation Github Issues aren’t just for issues. They are a robust, taggable, conversational thread around each individual service. Github Issues should be the self-contained conversation that occurs throughout the lifecycle of a service, providing details of what has happened, what the current issues are, and what the future road map will hold. Github Issues are designed to help organize a robust amount of conversational threads, and allow for exploration and organization using tags, allowing any human to get up to speed quickly on what is going on. Github Issues should be an active journal for the work being done on a service, through a monologue by the service owner / steward, and act as the feedback loop with ALL OTHER stakeholders around a service. Github Issues for each service will be how the anthropologists decipher the work you did long after you are gone, and should articulate the entire life of each individual service.

Services Working In Concert Each microservice is a self-contained unit of value in a larger orchestra. Each microservice should do one thing and do it well. The state of each service’s Github repository, README, OpenAPI contract, and the feedback loop around it will impact the overall production. While a service may be delivered to meet a specific application need in its early moments, the README, OpenAPI contract, and feedback loop should attract and speak to any potential future application. A service should be able to be reused, and remixed by any application developer building internal, partner, or some day public applications. Not everyone landing on your README will have been in the meetings where you presented your service. Github is your service’s showroom, and where you will be making your first, and ongoing impression on other developers, as well as executives who are poking around.

Leading Through API Governance Your Github repo, README, and OpenAPI contract are being used by overall microservices governance operations to understand how you are designing your services, crafting your schema, and delivering your service. Without an OpenAPI and README, your service does nothing in the context of API governance, contributes nothing to the bigger picture, and doesn’t help define overall governance. Governance isn’t scripture coming off the mountain and telling you how to operate, it is gathered, extracted, and organized from existing best practices, and leadership across teams. Sure, we bring in outside leadership to help round off the governance guidance, but without a README, OpenAPI, and active feedback loop around each service, your service isn’t participating in the governance lifecycle. It is just an island, doing one thing, and nobody will ever know if it is doing it well.

Making Your Github Presence Count Hopefully this post helps you see your own microservice Github repository through an external lens. Hopefully it will help you shift from Github being just about code, for coders, to something that is part of a larger conversation. If you care about doing microservices well, and you care about the role your service will play in the larger application production, you should be investing in your Github repository being the showroom for your service. Remember, this is a service. Not in a technical sense, but in a business sense. Think about what good service is to you. Think about the services you use as a human being each day. How you like being treated, and how informed you like to be about the service details, pricing, and benefits. Now, visit the Github repositories for your services, and think about the people who will be consuming them in their applications. Does it meet your expectations for a quality level of service? Will someone brand new land on your repo and understand what your service does? Does your microservice do one thing, and does it do it well?

An OpenAPI Vendor Extension For Defining Your API Audience

The fashion e-commerce company Zalando has an interesting approach to classifying their APIs based upon who is consuming them. It isn’t just about APIs being published publicly or privately; they have actually standardized their definition of audience, and established an OpenAPI vendor extension, so that the definition is machine readable and available via their OpenAPI.

According to the Zalando API design guide, “each API must be classified with respect to the intended target audience supposed to consume the API, to facilitate differentiated standards on APIs for discoverability, changeability, quality of design and documentation, as well as permission granting. We differentiate the following API audience groups with clear organisational and legal boundaries.”

  • component-internal - The API consumers with this audience are restricted to applications of the same functional component (internal link). All services of a functional component are owned by specific dedicated owner and engineering team. Typical examples are APIs being used by internal helper and worker services or that support service operation.
  • business-unit-internal - The API consumers with this audience are restricted to applications of a specific product portfolio owned by the same business unit.
  • company-internal - The API consumers with this audience are restricted to applications owned by the business units of the same company (e.g. Zalando company with Zalando SE, Zalando Payments SE & Co. KG. etc.)
  • external-partner - The API consumers with this audience are restricted to applications of business partners of the company owning the API and the company itself.
  • external-public - APIs with this audience can be accessed by anyone with Internet access.

Note: a smaller audience group is intentionally included in the wider group and thus does not need to be declared additionally. The API audience is provided as API meta information in the info-block of the Open API specification and must conform to the following specification:

```yaml
#/info/x-audience:
  type: string
  x-extensible-enum:
    - component-internal
    - business-unit-internal
    - company-internal
    - external-partner
    - external-public
  description: |
    Intended target audience of the API. Relevant for standards around
    quality of design and documentation, reviews, discoverability,
    changeability, and permission granting.
```

Note: Exactly one audience per API specification is allowed. For this reason a smaller audience group is intentionally included in the wider group and thus does not need to be declared additionally. If parts of your API have a different target audience, we recommend to split API specifications along the target audience — even if this creates redundancies (rationale).

Here is an example of the OpenAPI vendor extension in action, as part of the info block:

```yaml
swagger: '2.0'
info:
  x-audience: company-internal
  title: Parcel Helper Service API
  description: API for <…>
  version: 1.2.4
```

Providing a pretty interesting way of establishing the scope and reach of each API, in a way that makes each API owner think deeply about who they are / should be targeting with the service. Done in a way that makes the audience focus machine readable, and available as part of its OpenAPI definition, which can then be used across discovery, documentation, API governance, and security.
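Because the audience is machine readable, it can be enforced in a build pipeline with very little code. A quick sketch in Python of what such a governance check might look like (my own illustration, not Zalando tooling; the audience values come from the guideline above):

```python
# Audience values from Zalando's x-extensible-enum, as listed above.
ALLOWED_AUDIENCES = {
    "component-internal",
    "business-unit-internal",
    "company-internal",
    "external-partner",
    "external-public",
}

def check_audience(spec):
    """Return a list of governance findings for the x-audience extension
    in the info block of a parsed OpenAPI definition."""
    findings = []
    audience = spec.get("info", {}).get("x-audience")
    if audience is None:
        findings.append("info.x-audience is missing")
    elif audience not in ALLOWED_AUDIENCES:
        findings.append("unrecognized audience: %s" % audience)
    return findings

print(check_audience({"info": {"x-audience": "company-internal"}}))     # → []
print(check_audience({"info": {"title": "Parcel Helper Service API"}})) # → ['info.x-audience is missing']
```

A check like this could gate a pull request, keeping every API specification classified before it ever ships.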

I like the multiple views of who the audience could be, going beyond just public and private APIs. I like that it is an OpenAPI vendor extension. I like that they even have a schema crafted for the vendor extension–another interesting concept I’d like to see more of. Overall, making for a pretty compelling approach to define the reach of our APIs, and quantifying the audience we are looking to reach with each API we publish.

API As A Product Principles From Zalando

As I’m working through the API design guides from API leaders, looking for useful practices that I can include in my own API guidance, I’m finding electronic commerce company Zalando’s API design guide full of some pretty interesting advice. I wanted to showcase the section about their API as a product principles, which I think reflects what I hear many companies striving for when they do APIs.

From the Zalando API design guide principles:

Zalando is transforming from an online shop into an expansive fashion platform comprising a rich set of products following a Software as a Platform (SaaP) model for our business partners. As a company we want to deliver products to our (internal and external) customers which can be consumed like a service.

Platform products provide their functionality via (public) APIs; hence, the design of our APIs should be based on the API as a Product principle:

  • Treat your API as product and act like a product owner
  • Put yourself into the place of your customers; be an advocate for their needs
  • Emphasize simplicity, comprehensibility, and usability of APIs to make them irresistible for client engineers
  • Actively improve and maintain API consistency over the long term
  • Make use of customer feedback and provide service level support

RESTful API as a Product makes the difference between enterprise integration business and agile, innovative product service business built on a platform of APIs.

Based on your concrete customer use cases, you should carefully check the trade-offs of API design variants and avoid short-term server side implementation optimizations at the expense of unnecessary client side obligations and have a high attention on API quality and client developer experience.

API as a Product is closely related to our API First principle which is more focused on how we engineer high quality APIs.

Zalando provides a pretty coherent vision for how we all should be doing APIs. I like this guidance because it helps quantify something we hear a lot–APIs as a product. However, it also focuses in on what is expected of the product owners. It also gets at why companies should be doing APIs in the first place, talking about the benefits they bring to the table.

I’m enjoying the principles section of Zalando’s API design guide. It goes well beyond just API design, and reflects what I consider to be principles for wider API governance. Many companies still frame these documents as API design guidance, but I find that companies who are publishing these documents publicly are often maturing and moving beyond just thinking deeply about design–providing a wealth of other wisdom when it comes to doing APIs right.

An OpenAPI-Driven, API Governance Rules Engine

Phil Sturgeon (@philsturgeon) alerted me to a pretty cool project he is cooking up, called Speccy, which provides a rules engine for validating your OpenAPI definitions. “Taking off from where Mike Ralphson started with linting in swagger2openapi, Speccy aims to become the rubocop or eslint of OpenAPI”, and to “sniff your files for potentially bad things. ‘Bad’ is objective, but you’ll see validation errors, along with special rules for making your APIs better.” Helping make sure your API definitions are as consistent as they possibly can be, and deliver on your API governance strategy (you have one, right?).

With Speccy, there are a default set of rules, things like ensuring you have a summary or a description for each API path:

```json
{
  "name": "operation-summary-or-description",
  "object": "operation",
  "enabled": true,
  "description": "operation should have summary or description",
  "or": ["summary", "description"]
}
```

Or making sure you add descriptions to your parameters:


```json
{
  "name": "parameter-description",
  "object": "parameter",
  "enabled": true,
  "description": "parameter objects should have a description",
  "truthy": "description"
}
```

Or making sure you include tags for each API path:

```json
{
  "name": "operation-tags",
  "object": "operation",
  "enabled": true,
  "description": "operation should have non-empty tags array",
  "truthy": "tags",
  "skip": "isCallback"
}
```

Then you can get more strict by requiring contact information:

```json
{
  "name": "contact-properties",
  "object": "contact",
  "enabled": true,
  "description": "contact object should have name, url and email",
  "truthy": [ "name", "url", "email" ]
}
```

And make sure you have a license applied to your API:

```json
{
  "name": "license-url",
  "object": "license",
  "enabled": true,
  "description": "license object should include url",
  "truthy": "url"
}
```

Speccy is available as a Node package, which you can easily run at the command line. Speccy is definitely what is needed out there right now, helping us validate the growing number of OpenAPI definitions in our life. As many companies are thinking about how they can apply API governance across their operations, they should be looking at contributing to Speccy. It is something I’ve been talking with API service providers about for some time, but I haven’t seen an open source answer emerge that can help us develop rules for what we expect of our OpenAPI definitions.
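The rule documents above imply surprisingly little machinery. To illustrate (this is my own toy sketch in Python, not Speccy’s actual implementation), applying a rule with the `truthy` and `or` assertions shown above comes down to a few dictionary lookups:

```python
def apply_rule(rule, obj):
    """Apply a Speccy-style rule document to a single OpenAPI object.
    Returns the rule's description on failure, or None when it passes."""
    if not rule.get("enabled", True):
        return None  # disabled rules are skipped entirely
    truthy = rule.get("truthy")
    if truthy:
        keys = truthy if isinstance(truthy, list) else [truthy]
        # "truthy": every listed property must be present and non-empty
        if not all(obj.get(k) for k in keys):
            return rule["description"]
    alternatives = rule.get("or")
    # "or": at least one of the listed properties must be present
    if alternatives and not any(obj.get(k) for k in alternatives):
        return rule["description"]
    return None

rule = {
    "name": "operation-summary-or-description",
    "object": "operation",
    "enabled": True,
    "description": "operation should have summary or description",
    "or": ["summary", "description"],
}
print(apply_rule(rule, {"operationId": "listWidgets"}))  # fails, prints the description
print(apply_rule(rule, {"summary": "List widgets"}))     # passes, prints None
```

The value of a project like Speccy is not this loop, it is the shared, curated library of rules that plugs into it.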

My only feedback right now is that we need lots of people using it, and helping contribute rules. Oh, and wrap it in an API, and make it available as an easy to use, easy to deploy containerized microservice. Then let’s get to work on the Github Gist driven marketplace of rules, where I can publish the rules I develop across the projects I’m working on, and for the clients I consult with. Let’s get to work making sure there are a wealth of rules, broken down into different categories for API providers to choose from. Then let’s get API tooling and service providers to begin baking a Speccy rules engine into their solutions, and allow for the import and management of open source rules.

Speccy only works with OpenAPI 3.0, which makes sense if we are going to be moving forward with this conversation. Speccy is how we will validate that banking APIs are PSD2 compliant. It is how we will ensure healthcare APIs support the FHIR specification. I have other suggestions for the CLI and API usage of Speccy, but I’d rather see investment in the available rules before I make too many functional suggestions. I think the rules are where we will begin to define what we are looking for in an OpenAPI rules engine, and that should drive the Speccy features which end up on the road map.

Three Areas I Would Like To Cover When We Sit Down For An API Consulting Session

I’m putting together some presentations for a handful of upcoming engagements, where I’m wanting to help my audience understand what an initial engagement will look like. While I am looking to have just a handful of bullets that can live on a single slide, or a handful of slides, I also want a richer narrative to go along with it. To achieve this I rely on my blog, which helps me work my way through the details of what I do, and distill things down into something that I can deliver on the ground within the companies, organizations, institutions, and government agencies I am conducting business with.

When I am sitting down with a new audience, and working to help them understand how I can help them begin, jumpstart, revive, and move forward with their API journey, I’m usually breaking things into three main areas:

  • Landscape Mapping - Establish a map of what currently is within an organization.
    • Internal Resources - What existing web services, APIs, teams, and resources exist?
    • External Objectives - What are the external objectives of doing APIs?
  • Strategy Development - Craft a coherent strategy for moving forward with APIs.
    • API Lifecycle - Lay out a step by step list of stops along a modern API life cycle.
    • API Support - Identify how the strategy and operations will be supported within an organization.
    • API Evangelism - Consider how the message around API operations will spread internally, and externally.
  • Execution - Identify a clear set of next steps regarding how APIs will evolve.
    • Infrastructure - What services, tooling, and other API infrastructure is needed?
    • Resources - What resources have been identified for moving the API conversation forward?
    • Governance - What is the governance strategy for measuring, reporting upon, and enforcing the delivery of APIs across the API lifecycle presented?

When I present to a new group of people within an organization, this is the outline I am looking to flesh out. I have to understand what is already occurring (or not) on the ground, which is why I need the landscape map. Then, borrowing from my existing API research, I can help develop a detailed strategy, which includes the critical elements of how we will be supporting and evangelizing the effort–without which, API efforts will always struggle. After that, I want to quickly get to work on how we will be executing on this vision, even if it just involves more investment in the landscape map, and overall strategy.

I am working on more detailed materials to hand out prior to, and at the time I sit down with new clients, but I wanted to articulate in a single page, using a simple set of bullets, what I am looking to accomplish with any new consulting relationship. With a map in hand, and a strategy in mind, I’m confident that I can help the folks I talk with move forward with their API journey in a more meaningful way. Something not everyone I talk with is confident in doing on their own, but with a little assistance, I’m pretty sure they will be able to get to work defining what the API journey will look like for their organization.

A Summary Of Kong As An API Management Solution

I was breaking down what the API management solution Kong delivers for a customer of mine, and I figured I’d take what I shared via the team portal, and publish it here on the blog. It is an easy way for me to create content, and make my consulting work more transparent. I am using Kong as part of several healthcare and financial projects currently, and I am actively employing it to ensure customers are properly managing their APIs. I wasn’t the decision maker on any of these projects when it came to choosing the API management layer, I am just the person who is helping standardize how they are using API services and tooling across the API life cycle for these projects.

First, Kong is an open source API management solution with an easy to install community edition, and enterprise level support when needed. They provide an admin interface, and developer portal for the API management proxy, but there is also a growing number of community tools like KongDash, and Konga, emerging to make it a much richer ecosystem. And of course, Kong has an API for managing the API management layer, as every API service and tooling provider should have.

Now, let’s talk about what Kong does for helping in the deploying of your APIs:

  • API Routing - The API object describes an API that’s being exposed by Kong. Kong needs to know how to retrieve the API when a consumer is calling it from the Proxy port. Each API object must specify some combination of hosts, uris, and methods.
  • Consumers - The Consumer object represents a consumer - or a user - of an API. You can either rely on Kong as the primary datastore, or you can map the consumer list with your database to keep consistency between Kong and your existing primary datastore.
  • Certificates - A certificate object represents a public certificate/private key pair for an SSL certificate.
  • Server Name Indication (SNI) - An SNI object represents a many-to-one mapping of hostnames to a certificate.
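All of these objects are managed through Kong’s Admin API. Here is a hedged sketch of registering an API object programmatically, assuming the Kong 0.x-era Admin API listening on its default port 8001 (the endpoint and field names follow the documentation of that era; verify them against your Kong version before relying on this):

```python
import json
import urllib.request

KONG_ADMIN = "http://localhost:8001"  # default Admin API port; adjust as needed

def build_api_registration(name, hosts, upstream_url):
    """Build (but do not send) the Admin API request that registers an
    API object, so Kong knows how to route consumer calls to the backend."""
    payload = {
        "name": name,
        "hosts": ",".join(hosts),          # hostnames Kong should match on
        "upstream_url": upstream_url,      # where Kong proxies matching requests
    }
    return urllib.request.Request(
        KONG_ADMIN + "/apis/",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_api_registration(
    "widget-api", ["widgets.example.com"], "http://widget-service:8080"
)
print(req.full_url)      # → http://localhost:8001/apis/
print(req.get_method())  # → POST
```

Sending `req` with `urllib.request.urlopen` (against a running Kong node) would complete the registration; the names `widget-api` and `widget-service` are hypothetical.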

Then it focuses on the core aspects of what is needed to help manage your APIs:

  • Authentication - Protect your services with an authentication layer.
  • Traffic Control - Manage, throttle, and restrict inbound and outbound API traffic.
  • Analytics - Visualize, inspect, and monitor APIs and microservice traffic.
  • Transformations - Transform requests and responses on the fly.
  • Logging - Stream request and response data to logging solutions.
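Most of these management capabilities are switched on per API as plugins. As an example, a sketch of enabling Kong’s rate-limiting plugin for traffic control (the plugin name is real; the `config.minute` field style follows the 0.x Admin API docs, so check your version, and the API name is hypothetical):

```python
def enable_rate_limiting(api_name, per_minute):
    """Return the Admin API path and payload that would enable Kong's
    rate-limiting plugin on an API, throttling requests per minute."""
    return (
        "/apis/%s/plugins" % api_name,
        {"name": "rate-limiting", "config.minute": per_minute},
    )

path, payload = enable_rate_limiting("widget-api", 60)
print(path)     # → /apis/widget-api/plugins
print(payload)  # → {'name': 'rate-limiting', 'config.minute': 60}
```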

After that, it has a bunch of added features to help make it a scalable, evolvable solution:

  • DNS-based loadbalancing - When using DNS based load balancing the registration of the backend services is done outside of Kong, and Kong only receives updates from the DNS server.
  • Ring-balancer - When using the ring-balancer, the adding and removing of backend services will be handled by Kong, and no DNS updates will be necessary.
  • Clustering - A Kong cluster allows you to scale the system horizontally by adding more machines to handle more incoming requests. They will all share the same configuration since they point to the same database. Kong nodes pointing to the same datastore will be part of the same Kong cluster.
  • Plugins - lua-nginx-module enables Lua scripting capabilities in Nginx. Instead of compiling Nginx with this module, Kong is distributed along with OpenResty, which already includes lua-nginx-module. OpenResty is not a fork of Nginx, but a bundle of modules extending its capabilities.
  • API - Administrative API access for programmatic control.
  • CLI Reference - The provided CLI (Command Line Interface) allows you to start, stop, and manage your Kong instances. The CLI manages your local node (as in, on the current machine).
  • Serverless - Invoke serverless functions via APIs.

There are a number of API management solutions available out there today. I will profile each one I am actively using as part of my work on the ground. I’m agnostic towards which provider my clients should use, but I like having the details about what features they bring to the table readily available via a single URL, so that I can share when these conversations come up. I have many API management solutions profiled as part of my API management research, but in 2018 there are just a handful of clear leaders in the game. I’ll be focusing on the ones who are still actively investing in the API community, and the ones I have an existing relationship with in a partnership capacity. One of my partners is a reseller of Kong in France, making it something I’m actively working with in the financial space, and I am also using it within the federal government, bringing it front and center for me in the United States.

If you have more questions about Kong, or any other API management solution, feel free to reach out, and I’ll do my best to answer any questions. We are also working to provide more API life cycle, strategy, and governance services along with my government API partners at Skylight, and through my mainstream API partners. If you need help understanding the landscape, and where API management solutions like Kong fit in, my partners and I are happy to help out–just let us know.

Streaming And Event-Driven Architecture Represents Maturity In The API Journey

Working with streaming providers has forced a shift in how I see the API landscape. When I started working with a streaming proxy I simply saw it as being about doing APIs in real time. I was hesitant because not every API has real time needs, so I viewed what they do as just a single tool in my API toolbox. While Server-Sent Events, and proxying JSON APIs, is just one tool in my toolbox, like the rest of the tools in my toolbox it forces me to think through what an API does, and understand where it exists in the landscape, and where the API provider exists in their API journey. Something I’m hoping the API providers are also doing, but I enjoy doing from the outside-in as well.

Taking any data, content, media, or algorithm and exposing it as an API is a journey. It is about understanding what that resource is, what it does, and what it means to the provider and the consumer. What this looks like day one will be different from what it looks like day 365 (hopefully). If done right, you are engaging with consumers, and evolving your definition of the resource, and what is possible when you apply it programmatically through the interfaces you provide. API providers who do this right are leveraging the feedback loops in place with consumers, iterating on their APIs, as well as the resources they provide access to, and improving upon them.

Just doing simple web APIs puts you on this journey. As you evolve along this road you will begin to also apply other tools. You might have the need for webhooks to start responding to meaningful events that are beginning to emerge across the API landscape, and start doing the work of defining your event-driven architecture, developing lists of the most meaningful topics and events that are occurring across your evolving API platform. Webhooks provide direct value by pushing data and content to your API consumers, but they have indirect value in helping you define the event structure across your very request and response driven resource landscape. Look at Github webhook events, or Slack webhook events to understand what I mean.
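The mechanics behind a webhook layer are modest; the hard part is the event taxonomy. A toy sketch of the subscription side (all the names here are hypothetical, and a real platform would add signing, retries, and delivery logs):

```python
import json

class WebhookRegistry:
    """Minimal sketch of webhook subscriptions: consumers register a URL
    per topic, and the platform pushes events to those URLs as they occur."""

    def __init__(self):
        self.subscriptions = {}  # topic -> list of callback URLs

    def subscribe(self, topic, url):
        self.subscriptions.setdefault(topic, []).append(url)

    def publish(self, topic, payload):
        """Return the deliveries that would be made, as (url, body) pairs."""
        body = json.dumps({"topic": topic, "payload": payload})
        return [(url, body) for url in self.subscriptions.get(topic, [])]

hooks = WebhookRegistry()
hooks.subscribe("order.shipped", "https://consumer.example.com/hooks")
print(hooks.publish("order.shipped", {"order_id": 1234}))
```

Notice that the topic names (`order.shipped`) are the real design work here, which is exactly the indirect value described above.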

API platforms that have had webhooks in operation for some time have matured significantly toward an event-driven architecture. Streaming APIs aren’t simply a boolean thing, where you have data that needs to be streamed, or you don’t. That is the easy, lazy way of thinking about things. Server-Sent Events (SSE) isn’t just something you need, or you don’t. It is something that you are ready for, or you aren’t. Like webhooks, I’m seeing Server-Sent Events (SSE) as having the direct benefits of delivering data and content as it is updated, to the browser or for other server uses. However, I’m beginning to see the other indirect benefits of SSE, and how it helps define the real time nature of a platform–what is real time? It also helps you think through the size, scope, and efficiency surrounding the use of APIs for making data, content, and algorithms available via the web. Helping us think through how and when we are delivering the bits and bytes we need to get business done.
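Part of what makes SSE approachable is how thin the wire format is: named fields, `data:` lines, and a blank-line terminator. A small sketch of formatting an event for a stream (the payload is made up for illustration):

```python
def sse_event(data, event=None, event_id=None):
    """Format a message in the Server-Sent Events wire format:
    optional 'event:' and 'id:' fields, one 'data:' line per line of
    payload, terminated by a blank line."""
    lines = []
    if event:
        lines.append("event: %s" % event)
    if event_id:
        lines.append("id: %s" % event_id)
    for chunk in data.splitlines() or [""]:
        lines.append("data: %s" % chunk)
    return "\n".join(lines) + "\n\n"

print(sse_event('{"price": 42}', event="update", event_id="7"))
```

A server streams these frames over a long-lived HTTP response with the `text/event-stream` content type, and the browser’s `EventSource` reassembles them on the other side.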

I’m learning a lot by applying Server-Sent Events to simple JSON APIs. It is adding another dimension to the API design, deployment, and management process for me. There has always been an evolutionary aspect of doing APIs for me. This is why you hear me call it the API journey on a regular basis. However, now that I’m studying event-driven architecture, and thinking about how tools like webhooks and SSE assist us in this journey, I’m seeing an entirely new maturity layer for this API journey emerge. It goes beyond just getting to know our resources as part of the API design, and deployment process. It builds upon API management and monitoring, and helps us think through how our APIs are being consumed, and what the most meaningful and valuable events are. Helping us think through how we deliver data and content over the web in a more precise manner. It is something that not every API provider will understand right away, and only those a little further along in their journey will be able to take advantage of. The question is, how do we help others see the benefits, and want to do the hard work to get further along in their own API journey?

You Have to Know Where All Your APIs Are Before You Can Deliver On API Governance

I wrote an earlier article that basic API design guidelines are your first step towards API governance, but I wanted to introduce another first step you should be taking even before basic API design guides–cataloging all of your APIs. I’m regularly surprised by the number of companies I’m talking with who don’t even know where all of their APIs are. Sometimes, but not always, there is some sort of API directory or catalog in place, but often it is out of date, and people just aren’t registering their APIs, or following any common approach to delivering APIs within an organization–hence the need for API governance.

My recommendation is that even before you start thinking about what your governance will look like, or even mention the word to anyone, you take inventory of what is already happening. Develop an org chart, and begin having conversations. Identify EVERYONE who is developing APIs, and start tracking on how they are doing what they do. Sure, you want to get an inventory of all the APIs each individual or team is developing or operating, but you should also be documenting all the tooling, services, and processes they employ as part of their workflow. Ideally, there is some sort of continuous deployment workflow in place, but this isn’t a reality in many of the organizations I work with, so mapping out how things get done is often the first order of business.
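Even a flat file beats no catalog at all. Here is a sketch of the kind of minimal inventory record I have in mind, with a check for the metadata that is most often missing (the field names are my own shorthand, loosely inspired by formats like APIs.json, and the entries are hypothetical):

```python
# The bare minimum a governance effort needs to know about each API.
REQUIRED_FIELDS = ("name", "team", "repository", "openapi", "stage")

def missing_metadata(entry):
    """Return which required catalog fields an API entry is missing."""
    return [f for f in REQUIRED_FIELDS if not entry.get(f)]

catalog = [
    {"name": "widget-api", "team": "commerce",
     "repository": "git@example.com:widgets.git",
     "openapi": "openapi/openapi.json", "stage": "production"},
    {"name": "legacy-orders", "team": "unknown"},
]
for entry in catalog:
    print(entry["name"], "->", missing_metadata(entry))
```

Running a check like this across the catalog turns “we don’t know where our APIs are” into a concrete, reviewable to-do list.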

One of the biggest failures of API governance I see is that the strategy has no plan for how we get from where we are to where we want to be; it simply focuses on where we want to be. This type of approach contributes significantly to pissing people off right out of the gate, making API governance a lot more difficult. Stop focusing on where you want to be for a moment, and focus on where you are. Build a map of where people are, the tools, services, skills, and best and worst practices. Develop a comprehensive map of where the organization is today, and then sit down with all stakeholders to evaluate what can be improved upon, and streamlined. Begin the hard work of building a bridge between your existing teams and what might end up being a future API governance strategy.

API design is definitely the first logical step of your API governance strategy, standardizing how you design your APIs, but this shouldn’t be developed from the outside-in. It should be developed from what already exists within your organization, and then begin mapping to healthy API design practices from across the industry. Make sure you are involving everyone you’ve reached out to as part of your inventory of APIs, tools, services, and people. Make sure they have a voice in crafting that first draft of API design guidelines you bring to the table. Without buy-in from everyone involved, you are going to have a much harder time ever reaching the point where you can call what you are doing governance, let alone seeing the results you desire across your API operations.

Basic API Design Guidelines Are Your First Step Towards API Governance

I am working with a group that has begun defining their API governance strategy. We’ve discussed a full spectrum of API lifecycle capabilities that need to be integrated into their development practices and CI/CD workflow, as well as eventually their API governance documentation. However, they are just getting going with the concept of API governance, and I want to make sure they don’t get ahead of themselves and pile too much into their API governance documentation before they can get buy-in and participation from other groups.

We are approaching the first draft of an API governance document for the organization, and while it has lofty aspirations, the first draft is really nothing more than some basic API design guidelines. It is basically a two-page document that explains why REST is good, provides guidance on naming paths, using HTTP verbs, and a handful of other API design practices. While I have a much longer list of items I want to see added to the document, I feel it is much more important to get the basic first draft up, circulated amongst groups, and establish feedback loops, than to make sure the API governance document is comprehensive. Without buy-in from all groups, any API governance strategy will be ignored, and ultimately suffocated by teams who feel like they don’t have any ownership in the process.
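Even a two-page design guide lends itself to simple automated checks. Here is a toy sketch of a path linter enforcing the kind of rules such a guide implies–the rules and the verb list below are my own illustrative assumptions, not anyone’s official guidelines:

```python
# A toy lint check for basic REST path conventions: lowercase paths,
# and no action verbs in path segments (the HTTP method carries the
# action). Both rules are assumptions for illustration only.
VERBS = {"get", "create", "update", "delete", "fetch"}

def path_violations(path):
    """Return a list of human-readable problems with an API path."""
    problems = []
    if path != path.lower():
        problems.append("paths should be lowercase")
    for segment in path.strip("/").split("/"):
        if segment.startswith("{"):  # skip path parameters like {id}
            continue
        if segment in VERBS:
            problems.append(
                f"'{segment}' is a verb; let the HTTP method carry the action")
    return problems

# Good: the resource is a plural noun, the verb lives in the HTTP method.
assert path_violations("/orders/{id}") == []
# Bad: the action is baked into the path itself.
assert path_violations("/delete/orders") != []
```

A check this small can run in a pipeline from day one, which is exactly the kind of low-friction starting point a v1 governance document needs.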

I am lobbying that the API governance strategy be versioned and evolved much like any other artifact, code, or documentation applied across API operations. This is v1 of the API governance strategy, and before we can iterate towards v2, we need to get feedback, accept issues and comments, and allow for pull requests on the strategy before it moves forward. It is critical that ALL teams feel like they have been part of the conversation from day one, otherwise it will be weakened as a strategy, and any team looking to implement, coach, advise, report on, and enforce it will be hobbled. API governance advocates always wish for things to move forward at a faster speed, but the reality within large organizations requires consensus, or at least involvement, which will come at a variety of speeds depending on the size of the organization.

This process has been a reminder for me, and hopefully for my readers who are looking to get started on their API governance strategy. Always start small. Get your first draft up. Start with the basics of how you define and design your APIs, and then begin to flesh out the finer details of design, deployment, management, testing, and the other stops along your lifecycle. Just get a basic version of your documentation and guidance published. Maybe even consider calling it something other than governance from day one. Come up with a friendlier name that won’t turn your various teams off, and then once it matures you can call it what it is, after everyone is participating and has buy-in regarding the overall API governance strategy for your platform.

From CI/CD To A Continuous Everything (CE) Workflow

I am evaluating an existing continuous integration and deployment workflow to make recommendations regarding how it can evolve to service a growing API lifecycle. This is an area of my research that spans multiple areas of my work, which I tend to house under what I call API orchestration. I try to always step back and look at an evolving area of the tech space as part of the big picture, and attempt to look beyond any individual company, or even the wider industry hype that is moving something forward. I see the clear technical benefits of CI/CD, and I see the business benefits of it as well, but I haven’t always been convinced of it as a standalone thing, and have spent the last couple of years trying to understand how it fits into the bigger picture.

As I’ve been consulting with several enterprise groups working to adopt a CI/CD mindset, and having similar conversations with government agencies, I’m beginning to see the bigger picture of “continuous”, and starting to decouple it from just deployment and even integration. The first thing that is assumed–not always evident for newbies, but always a default–is testing. You always test before you integrate or deploy, right? As I watch groups adopt CI/CD, I’m seeing them struggle to make sure the other things I feel are an obvious part of the API lifecycle–things like security, licensing, documentation, discovery, support, and communications–get plugged in, because they aren’t defaults in a CI/CD mindset. In the end, I think we technologists are good at focusing on the tech innovations, but often move right past many of the other things that are essential for the business world. I see this happening with containers, microservices, Kubernetes, Kafka, and other fast moving trends.

I guess the point I want to make is that there is more to a pipeline than just deployment, integration, and testing. We need to make sure that documentation, discovery, security, and other essentials are baked in by default. Otherwise we techies might be continuously forgetting about these aspects, and the newbies might be continuously frustrated that these things aren’t present. We need to make sure we are continuously documenting, continuously securing, continuously communicating around training, and continuously evolving (and sharing) our road maps. I’m sure what I’m saying isn’t anything new for the CI/CD veterans, but I’m trying to onboard new folks with the concept, and as with most areas of the tech sector I find the naming and on-boarding materials fairly deficient when it comes to all the concepts large organizations need to make the shift.
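As a sketch of what a “continuous everything” gate might look like in practice, here is a hypothetical pipeline check that a service repository carries its non-code essentials alongside the code. The required file names are assumptions, not a standard–use whatever your organization agrees on:

```python
import os
import tempfile

# The non-code essentials a pipeline could require before promoting a
# service. These names are illustrative assumptions only.
REQUIRED_ARTIFACTS = [
    "README.md",      # continuously documenting
    "openapi.yaml",   # continuously discoverable API definition
    "LICENSE",        # continuously licensing
    "SECURITY.md",    # continuously securing
    "CHANGELOG.md",   # continuously communicating history and road map
]

def missing_artifacts(repo_path):
    """Return the list of required artifacts a repository is missing."""
    return [name for name in REQUIRED_ARTIFACTS
            if not os.path.exists(os.path.join(repo_path, name))]

# Quick demonstration against a scratch repository containing only a README.
with tempfile.TemporaryDirectory() as scratch:
    open(os.path.join(scratch, "README.md"), "w").close()
    gaps = missing_artifacts(scratch)
# In a real pipeline, a non-empty result would fail the build, making
# documentation and security as continuous as deployment itself.
```

The point isn’t the specific files, it is that the business-facing essentials become a default gate rather than an afterthought.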

I’m thinking I’m going to be merging my API orchestration (CI/CD) research with my overall API lifecycle research, thinking deeply about how everything from definition to deprecation fits into the pipeline. I feel like CI/CD has been highly focused on the technology of evolving how we deploy and integrate (rightfully so) for some time now, and with adoption expanding we need to zoom out and think about everything else organizations will need to be successful. I see CI/CD as being essential to decoupling the monolith, and changing culture at some of the large organizations I’m working with. I want these folks to be successful, and not fall into the trap of only thinking about the tech, but also consider the business and political implications involved with being able to move from annual or quarterly deployments and integrations, to where they can do things in weeks, or even days.

API Deployment Templates As Part Of A Wider API Governance Strategy

People have been asking me for more stories on API governance. Examples of how it is working, or not working, at the companies, organizations, institutions, and government agencies I’m talking with. Some folks are looking for top-down ways of controlling large teams of developers when it comes to delivering APIs consistently across large disparate organizations, while others are looking for bottom-up ways to educate and incentivize developers to operate APIs in sync, working together as a large, distributed engine.

I’m approaching my research into API governance as I would any other area: not from the bottom up, or the top down. I’m just assembling all the building blocks I come across, then beginning to assemble them into a coherent picture of what is working, and what is not. One example I’ve found of an approach to helping API providers across the federal government better implement consistent API patterns is out of the General Services Administration (GSA), with the Prototype City Pairs API. The Github repository is a working API prototype, with documentation and a developer portal, in alignment with the GSA API design guidelines, providing a working example that other API developers can reverse engineer.

The Prototype City Pairs API is a forkable example of what you want developers to emulate in their work. It is a tool in the GSA’s API governance toolbox. It demonstrates what developers should be working towards, not just in their API design, but also in the supporting portal and documentation. The GSA leads by example, providing a pretty compelling approach to model, and a building block any API provider could add to their toolbox. I would consider a working prototype to be both a bottom-up approach, because it is forkable and usable, and a top-down one, because it can reflect wider organizational API governance objectives.

I could see mature API governance operations having multiple API design and deployment templates like the GSA has done, providing a suite of forkable, reusable API templates that developers can put to use. While not all developers would use them, in my experience many teams are actually made up of reverse engineers, who tend to emulate what they know. If they are exposed to bad API design, they tend to just emulate that, but if they are given robust, well-defined examples, they will emulate healthy patterns instead. I’m adding API deployment templates to my API governance research, and will keep rounding off strategies for successful API governance that can work at a wide variety of organizations and platforms. As it stands, there are not very many examples out there, and I’m hoping to pull together any of the pieces I can find into a coherent set of approaches folks can choose from when crafting their own approach.

Narrowing In On My API Governance Strategy Using API Transit To Map Out PSD2

I’m still kicking around my API Transit strategy in my head, trying to find a path forward with applying it to API governance. I started moving it forward a couple years ago as a way to map out the API lifecycle, but in my experience, managing APIs is rarely a linear lifecycle. I have been captivated by the potential of the subway map to help us map out, understand, and navigate complex infrastructure since I learned about Harry Beck’s approach to the London Tube map, which has become the standard for quantifying transit around the globe.

I am borrowing from Beck’s work, but augmenting it for a digital world to try and map out the API practices I study in my research of the space in a way that allows them to be explored, but also implemented, measured, and reported upon by all stakeholders involved with API operations. While I’m still pushing forward this concept in the safe space of my own API projects, I’m beginning to dabble with applying it at the industry level, starting with PSD2 banking, and seeing if I can’t provide an interactive map that helps folks see, understand, and navigate what is going on when it comes to banking APIs.

An API Transit map for PSD2 would build upon the framework I have derived from my API research, applied specifically to quantifying the PSD2 world. Each of the areas of my research is broken down into a separate subway line, which can be plotted along the map with relative stops along the way:

  • Definition - Which definitions are used? Where are the OpenAPI, schema, and other relevant patterns?
  • Design - What design patterns are in play across the API definitions, and what is the meaning behind the design of all APIs?
  • Deployment - What does deployment look like on-premise, in the cloud, and from region to region?
  • Portals - What is the minimum viable standard for an API portal presence, with any building blocks?
  • Management - Quantify the standard approaches to managing APIs, from on-boarding to analysis and reporting.
  • Plans - How are access tiers and plans defined, providing 3rd party access to APIs, including that of aggregators and application developers?
  • Monitoring - What does monitoring of web APIs look like, and how is data aggregated and shared?
  • Testing - What does testing of web APIs look like, and how is data aggregated and shared?
  • Performance - What does performance evaluation of web APIs look like, and how is data aggregated and shared?
  • Security - What are the security practices in place for the entire API stack?
  • Breaches - When there is a breach, what are the protocols and practices surrounding what should happen, and where is the historical data?
  • Terms of Service - What do terms of service across many APIs look like?
  • Privacy Policy - How is privacy protected across API operations?
  • Support - What are all the expected support channels, and where are they located?
  • Road Map - What is expected, and where do we find the road map and change log for the platform?

These are just a handful of the lines I will be laying out as part of my subway map. I have others I want to add, but this provides a nice version of what I’d like to see as an API Transit map of the PSD2 universe. Each line would have numerous stops that would provide resources and potentially tooling to help educate, quantify, and walk people through each of these areas in detail, but in the context of PSD2, and the banking industry. This is where I’m beginning to push the subway map context further to help make it work in a virtualized world, augmenting it with some concepts I hope will add new dimensions to how we understand and navigate our digital worlds, using the subway map as a skeuomorph.

To help make the PSD2 landscape I’m mapping out more valuable I am playing with adding a “tour” layer, which allows me to craft tours that cover specific lines, hit only the stops that matter, bridge multiple lines, and create a meaningful tour for a specific audience. Here are a handful of the tours I’m planning for PSD2:

  • Introduction - A simple introduction to the concepts at play when it comes to the PSD2 landscape.
  • Provider Training - A detailed training walk-through for anyone looking to provide a PSD2 compliant platform.
  • Provider Certification - A detailed walkthrough that gathers information and detail to map out, quantify, and assess a specific PSD2 API / platform.
  • Executive - A robust walk-through of the concepts at play for an executive from the 100K view, as well as those of their own company’s PSD2 certified API, and possibly those of competitors.
  • Regulator - A comprehensive walk-through of the entire landscape, including what is required, as well as the certification of individual PSD2 API platforms, with a real-time control dashboard.

These are just a few of the areas I’m looking to provide tours through this quantified PSD2 API Transit landscape. I am using Github to deploy, and evolve my maps, which leverages Jekyll as a hypermedia client to deliver the API Transit experience. While each line of the API Transit map has its own hypermedia flow for storing and experiencing each stop along the line, the tours also have their own hypermedia flows, which can augment existing lines and stops, as well as inject their own text, images, audio, video, links and tooling along the way.
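To make the idea concrete, the lines, stops, and tours described above could be modeled as simple data. This is a hypothetical sketch of the structure, not the actual platform, and the stop names are illustrative assumptions:

```python
# Each key is a transit line (an area of research), each value the stops
# (practices) along it. Stop names are placeholders for illustration.
transit_map = {
    "Definition": ["OpenAPI", "JSON Schema"],
    "Design": ["Naming Paths", "Using HTTP Verbs"],
    "Security": ["Authentication", "Encryption in Transit"],
    "Support": ["Issues", "Email"],
}

# A tour is just an ordered list of (line, stop) pairs, hitting only the
# stops that matter for a given audience.
introduction_tour = [
    ("Definition", "OpenAPI"),
    ("Design", "Naming Paths"),
    ("Security", "Authentication"),
]

def valid_tour(tour, tmap):
    """Check that every (line, stop) in a tour actually exists on the map."""
    return all(line in tmap and stop in tmap.get(line, [])
               for line, stop in tour)
```

A structure this simple is enough to render each line as a page, validate tours before publishing them, and let a visitor transfer between lines at shared stops.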

The result will be a single URL which anyone can land on for the PSD2 API Transit platform. You can choose from any of the pre-crafted tours, or just begin exploring each line, getting off at only the stops that interest you. Some stops will be destinations, while others will provide transfers to other lines. I’m going to be investing some cycles into my PSD2 API Transit platform over the holidays. If you have any questions, comments, input, or would like to invest in my work, please let me know. I’m always looking for feedback, as well as interested parties to help fund my work and ensure I can carve out the time to make them happen.

The API Coaches At Capital One

API evangelism, and even advocacy, at many organizations has always been a challenge to introduce, because many groups aren’t really well versed in the discipline, and often it tends to take on a more marketing or even sales like approach, which can hurt its impact. I’ve worked with groups to rebrand, and change how they evangelize APIs internally, with partners, and the public, trying to ensure the efforts are more effective. While I still bundle all of this under my API evangelism research, I am always looking for new approaches that push the boundaries, and evolve what we know as API evangelism, advocacy, outreach, and other variations.

I was introduced to a new variation of the internal API evangelism concept a few weeks back while at Capital One talking with my friend Matthew Reinbold (@libel_vox) about their approach to API governance. His team at the Capital One API Center of Excellence has the concept of the API coach, and I think Matt’s description from his recent API governance blueprint story sums it up well:

_At minimum, the standards must be a journey, not a destination. A key component to “selective standardization” is knowing what to select. It is one thing for us in our ivory tower to throw darts at market forces and team needs. It is entirely another to repeatedly engage with those doing the work.

Our coaching effort identifies those passionate practitioners throughout our lines of business who have raised their hands and said, “getting this right is important to my teams and me”. Coaches not only receive additional training that they then apply to their teams. They also earn access to evolving our standards.

In this way, standards aren’t something that are dictated to teams. Teams drive the standards. These aren’t alien requirements from another planet. They see their own needs and concerns reflected back at them. That is an incredibly powerful motivator toward acceptance and buy-in._

A significant difference here between internal API evangelism and API coaching is you aren’t just pushing the concept of APIs (evangelizing), you are going the extra mile to focus on healthy practices, standards, and API governance. Evangelism is often seen as an API provider to API consumer effort, which doesn’t always translate to API governance internally across organizations who are developing, deploying, and managing APIs. API coaches aren’t just developing API awareness across organizations, they are cultivating a standardized, bottom up, as well as top down awareness around providing and consuming APIs–a much more advanced look at what is needed across larger organizations when it comes to outreach and communication.

Another interesting aspect of Capital One’s approach to API coaching is that this isn’t just about top down governance, it has a bottom up, team-centered, and very organic approach to API governance. It is about standardizing and evolving culture across many organizations, but in a way that allows teams to have a voice, and not just be mandated what the rules are, and required to comply. The absence of this type of mindset is the biggest contributor to the lack of API governance we see across the API community today. This is what I consider the politics of APIs, something that often trumps the technology of all of this.

API coaching augments my API evangelism research in a new and interesting way. It also dovetails with my API design research, as well as begins rounding off a new area I’ve wanted to add for some time, but just have not seen enough activity to warrant doing so–API governance. I’m not a big fan of the top down governance that was given to us by our SOA grandfathers, and the API space has largely been doing alright without the presence of API governance, but I feel like it is approaching the phase where a lack of governance will begin to do more harm than good. It’s a drum I will start beating, with the help of Matt and his team’s work at Capital One. I’m going to reach out to some of the other folks I’ve talked with about API governance in the past, and see if I can produce enough research to get the ball rolling.

Learning About API Governance From Capital One DevExchange

I am still working through my notes from a recent visit to Capital One, where I spent time talking with Matthew Reinbold (@libel_vox) about their API governance strategy. I was given a walk-through of their approach to defining API standards across groups, as well as how they incentivize, encourage, and even measure what is happening. I’m still processing my notes from our talk, and waiting to see Matt publish more on his work, before I publish too many details, but I think it is worth looking at from a high level view, setting the bar for other API governance conversations I am engaging in.

First, what is API governance? I personally know that many of my readers have a lot of misconceptions about what it is, and what it isn’t. I’m not interested in crafting a single definition of API governance. I am hoping to help define it so that you can find a version of it that you can apply across your API operations. API governance is, at its simplest form, about ensuring consistency in how you do APIs across your development groups, and a more robust definition might be about having an individual or team dedicated to establishing organization-wide API standards, helping train, educate, enforce, and in the case of Capital One, measure their success.

Before you can begin thinking about API governance, you need to start establishing what your API standards are. In my experience this usually begins with API design, but should quickly also be about consistent API deployment, management, monitoring, testing, SDKs, clients, and every other stop along the API lifecycle. Without well-defined, and properly socialized API standards, you won’t be able to establish any sort of API governance that has any sort of impact. I know this sounds simple, but I know more API providers who do not have any kind of API design guide, or other guide for their operations, than I know API providers who have consistent guides to design, and other stops along their API lifecycle.

Many API providers are still learning about what consistent API design, deployment, and management looks like. In the API industry we need to figure out how to help folks begin establishing organization-wide API design guides, and get them on the road towards being able to establish an API governance program–it is something we suck at currently. Once API design, then deployment and management practices get defined, we can begin to realize some standard approaches to monitoring, testing, and measuring how effective API operations are. This is where organizations will begin to see the benefits of doing API governance, rather than it just being a pipe dream. It is something you can’t ever realize if you don’t start with the basics, like establishing an API design guide for your group. Do you have an API design guide for your group?

While talking with Matt about their approach at Capital One, he asked if it was comparable to what else I’ve seen out there. I had to be honest. I’ve never come across an organization that had established API design, deployment, and management practices, was actively educating and training their staff, and was actually measuring the impact and performance of APIs, and the teams behind them. I know there are companies who are doing this, but since I tend to talk to more companies who are just getting started on their API journey, I’m not seeing many organizations that are this advanced. Most companies I know do not even have an API design guide, let alone measure the success of their API governance program. It is something I know a handful of companies would like to strive towards, but at the moment API governance is more talk than reality.

If you are talking API governance at your organization, I’d love to learn more about what you are up to, no matter where you are at in your journey. I’m going to be mapping out what I’ve learned from Matt, and comparing it with what I’ve learned from other organizations. I will be publishing it all as stories here on API Evangelist, but will also look to publish a guide and white papers on the subject, as I learn more. I’ve worked with some universities, government agencies, as well as companies on their API governance strategies. API governance is something that I know many API providers are working on, but Capital One was definitely the furthest along in their journey that I have come across to date. I’m stoked that they are willing to share their story, and don’t see it as their secret sauce, as it is something that doesn’t just need sharing, it is something we need leaders to step up and show everyone else how it can be done.

Just Waiting The GraphQL Assault Out

I was reading a story on GraphQL this weekend–which I won’t be linking to or citing, because that is what they want, and they do not deserve the attention–that was just (yet) another hating-on-REST post. As I’ve mentioned before, GraphQL’s primary strength seems to be the endless waves of bros who love to write blog posts hating on REST and web APIs. This particular post shows its absurdity by stating that HTTP is just a bad idea, wait…uh what? Yeah, you know that thing we use for the entire web, apparently it’s just not a good idea when it comes to exchanging data. Ok, buddy.

When it comes to GraphQL, I’m still watching, learning, and will continue evaluating it as a tool in my API toolbox, but when it comes to the argument of GraphQL vs. web APIs I will just be waiting out the current assault as I did with all the other haters. The linked data haters ran out of steam. The hypermedia haters ran out of steam. The GraphQL haters will also run out of steam. All of these technologies are viable tools in our API toolbox, but NONE of them are THE solution. These assaults on “what came before” are just a very tired tactic in the toolbox of startups–you hire young men, give them some cash (which doesn’t last for long), get them all wound up, and let them loose talking trash on the space, selling your warez.

GraphQL has many uses. It is not a replacement for web APIs. It is just one tool in our toolbox. If you are following the advice of any of these web API haters you will wake up in a couple of years with a significant amount of technical debt, and probably also be very busy chasing the next wave of technology being pushed by vendors. My advice is that all API providers learn about the web, gain several years of experience developing web APIs, and learn about linked data, hypermedia, GraphQL, and even gRPC if you have some high performance, high volume needs. Don’t spend much time listening to the haters, as they really don’t deserve your attention. Eventually they will go away, find another job, and another technological kool-aid to drink.

In my opinion, there is (almost) always a grain of usefulness with each wave of technology that comes along. The trick is cutting through the bullshit, tuning out the haters, and understanding what is real and what is not real when it comes to the vendor noise. You should not be adopting every trend that comes along, but you should be tuning into the conversation and learning. After you do this long enough you will begin to see the patterns and tricks used by folks trying to push their warez. Hating on whatever came before is just one of these tricks. This is why startups hire young, energetic, and usually male voices to lead this charge, as they have no sense of history, and truly believe what they are pushing. Your job as a technologist is to develop the experience necessary to know what is real, and what is not, and keep a cool head as the volume gets turned up on each technological assault.

Revisiting GraphQL As Part Of My API Toolbox

I’ve been reading and curating information on GraphQL as part of my regular research and monitoring of the API space for some time now. As part of this work, I wanted to take a moment and revisit my earlier thoughts about GraphQL, and see where I currently stand. Honestly, not much has changed for me, to move me in one direction or another regarding the popular approach to providing API access to data and content resources.

I still stand by my cautionary advice for GraphQL evangelists regarding not taking such an adversarial stance when it comes to the API approach, and I feel that GraphQL is a good addition for any API architect looking to have a robust and diverse API toolbox. Even with the regular drumbeat from GraphQL evangelists, and significant adoption like the Github GraphQL API, I am not convinced it is the solution for all APIs, or a replacement for simple RESTful web API design.

My current position is that the loudest advocates for GraphQL aren’t looking at the holistic benefits of REST, and are too wrapped up in ideology, which is setting them up for challenges similar to those that linked data, hypermedia, and even early RESTafarian practitioners have faced. I think GraphQL excels when you have a well educated, known, and savvy audience, who are focused on developing web and mobile applications–especially the latest breed of single page applications (SPA). I feel like in this environment GraphQL is going to rock it, and help API providers reduce friction for their consumers.

This is why I’m offering advice to GraphQL evangelists to turn down the anti-REST rhetoric, and the framing of GraphQL as a complete replacement or alternative for REST–it ain’t helping your cause and will backfire for you. You are better off educating folks about the positives, and being honest about the negatives. I will keep studying GraphQL, understanding the impact it is making, and keeping an eye on important implementations. However, when it comes to writing about GraphQL you are going to see me continuing to hold back, just like I did when it came to hypermedia and linked data, because I prefer not to be in the middle of ideological battles in the API space. I prefer showcasing the useful tools and approaches that are making a significant impact across a diverse range of API providers–not just echoing what is coming out of a handful of big providers, amplified by vendors and growth hackers looking for conversions.

If you think there is a link I should have listed here feel free to tweet it at me, or submit it as a Github issue. Even though I do this full time, I'm still a one-person show, and I miss quite a bit, so I depend on my network to help me know what is going on.