API Governance News

These are the news items I've curated in my monitoring of the API space that have some relevance to the API governance conversation. A conversation that is still growing, but has been putting down deeper roots lately as mainstream companies push to adopt APIs.

How Do We Get API Developers To Follow The Minimum Viable API Documentation Guidance?

After providing some guidance the other day on how teams should be documenting their APIs, one of the follow-up comments was: “Now we just have to figure out how to get the developers to follow the guidance!” Something that any API leadership and governance team is going to face as they work to implement new policies across their organization. You can craft the best possible guidance for API design, deployment, management, and documentation, but it doesn’t mean anyone is actually going to follow your guidance.

Moving forward, API governance within any organization represents the cultural frontline of API operations. Getting teams to learn about, understand, and implement sensible API practices is always easier said than done. You may think your vision of the organization’s API future is the right one, but getting other internal groups to buy into that vision will take a significant amount of work. It will take time and resources, and it will always be shifting and evolving.

Lead By Example

The best way to get developers to follow the minimum viable API documentation guidance being set forth is to do the work for them. Provide templates and blueprints of what you want them to do. Develop, provide, and evolve forkable and downloadable API documentation examples, with simple README checklists of what is expected of them. I’ve published a simple example using the VA Facilities API definition published as OpenAPI 3.0 and Swagger UI to GitHub Pages, with the entire thing forkable via the GitHub repository. It is a very bare bones example of providing API documentation guidance in a package that can be reused, providing API developers with a working example of what is expected of them.
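
A forkable template of this kind typically centers on an OpenAPI 3.0 definition rendered with Swagger UI. As a minimal, illustrative skeleton (the title, path, and fields below are placeholders, not the actual VA Facilities definition):

```yaml
openapi: 3.0.0
info:
  title: Example Facilities API        # placeholder, not the real VA definition
  version: 1.0.0
  description: >
    Minimum viable documentation starts with a complete info block,
    which is the first thing consumers see at the top of Swagger UI.
paths:
  /facilities:                         # hypothetical resource path
    get:
      summary: List facilities
      responses:
        '200':
          description: A JSON array of facility records
```

Pointing Swagger UI at a definition like this, and publishing both to GitHub Pages, gives teams a working package they can fork and fill in with their own resources.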

Make It A Group Effort

To help get API developers on board with the minimum viable API documentation guidance being set forth, I recommend making it a group effort. Recruit help from developers to improve upon the API documentation templates provided, and encourage them to extend, evolve, and push forward their individual API documentation implementations. Give API developers a stake in how you define governance for API documentation. Not everyone will be up for the task, but you’d be surprised who will raise their hand to contribute if they are asked.

Provide Incentive Model

This is something that will vary in effectiveness from organization to organization, but consider offering a reward, benefit, perk, or some other incentive to any group who adopts the API documentation guidance. Provide them with praise, and showcase their work. Bring exposure to their work with leadership, and across other groups. Brainstorm creative ways of incentivizing development groups to get more involved. Establish a list of all development groups, track ideas for incentivizing their participation and adoption, and work regularly to close them on playing an active role in moving forward your organization’s API documentation strategy.

Punish And Shame Others

As a last resort, for the more badly behaved groups within our organizations, consider punishing and shaming them for not following API documentation guidance, and not contributing to overall API governance efforts. This is definitely not something you should do lightly, and it should only be used in special cases, but sometimes teams will need smaller or larger punitive responses to their inaction. Ideally, teams are influenced by praise, and by positive examples of why API documentation standards matter, but there will always be situations where teams won’t get on board with the wider organizational API governance efforts, and need their knuckles rapped.

Making Meaningful Change Is Hard

It will not be easy to implement consistent API documentation across any large organization. However, API documentation is often one of the most important stops along the API lifecycle, and should receive significant investment when it comes to API governance efforts. In most situations, doing the work for developers, and providing them with blueprints to be successful, will accomplish the goal of getting API developers all using a common approach to API documentation. Like any other stop along the API lifecycle, delivering consistent API documentation across distributed teams will take a coherent strategy, with regular tactical investment to move everything forward in a meaningful way. However, once you get your API documentation house in order, many other stops along the API lifecycle will also begin to fall in line.


API Governance Models In The Public and Private Sector

This is a report for the Department of Veterans Affairs microconsulting project, “Governance Models in Public and Private Sector”. Providing an overview of API governance to help the VA, “understand, with the intention to adopt, best practices from the private and public sector, specifically for prioritizing APIs to build, standards to which to build APIs, and making the APIs usable by external consumers.” Pulling together several years of research conducted by industry analyst API Evangelist, as well as phone interviews with API practitioners from large enterprise organizations who are implementing API governance on the ground across the public and private sector, conducted by Skylight Digital.

We’ve assembled this report to reflect the interview conversations we had with leaders from the space, helping provide a walk through of the types of roles and software architecture being employed to implement governance at large organizations. Then we walk through governance as it pertains to identifying possible APIs, developing standards around the delivery of APIs, how organizations are moving APIs into production, as well as presenting them to their consumers. We wrap up with an overview of formal API governance details, as well as an acknowledgement that most API governance is rarely a fully formed initiative at this point in time. Providing a narrative for API governance, with a wealth of bulleted elements that can be considered, and assembled in the service of helping govern the API efforts across any large enterprise.

Roles Within An Organization

There are many roles being used by organizations who are leading the conversation around the delivery of high quality, industry changing APIs. Defining the personalities that are needed to make change across large organizations when it comes to delivering APIs consistently at scale. While there may be many names for the specific roles leading the charge, it is clear that these people are bringing a unique blend of skills to an organization, with an emphasis on a couple of key areas:

  • Leadership - Providing leadership for teams when it comes to APIs.
  • Innovation - A focus on innovation using APIs across the organization.
  • Communication - Facilitating communication across all teams, and projects.
  • Advisory - Acting as an advisor to existing leadership and management.
  • Strategy - Helping existing teams develop, evolve, and realize their strategy.
  • Success - Focusing on helping existing teams be successful when it comes to APIs.
  • Architect - Bringing a wide variety of software architectural skills to the table.
  • Coaching - Being a coach to existing teams, and decision makers across the organization.

Bringing together a unique set of skills that range from the technical to deep knowledge of the business domain, into a concentrated, although sometimes distributed effort to bring change across an organization using APIs. Along with these roles, many large organizations are investing in new types of structure to help develop talent, take charge of new ideas, and move forward the enterprise wide API strategy, with a handful of common characteristics:

  • Labs - Treating API efforts as if it is a laboratory creating new experiments.
  • Center - Making it a center for API thinking, ideation, and for access to information.
  • Centralized - Keeping all efforts in a single group or organization within the larger entity.
  • Distributed - Emphasis on keeping API knowledge distributed and not centralized at all.
  • Global - Acknowledging that APIs will need to be a global initiative for larger organizations.
  • Excellence - Focusing on the organization bringing excellence to how APIs are delivered.
  • Embedded - Making sure there is API knowledge and expertise embedded in every group.

Combining a unique set of skills and personalities into a focused organization that takes the reins when it comes to leading API change, and digital transformation at an organization. While many of these efforts emphasize a center, or centralized presence, many are also realizing the importance of an embedded and distributed approach, ensuring that talent and ideas grow within existing teams, and are not seen as just some new, external, isolated group or initiative.

There clearly is not a single role or organization structure that brings success to API efforts at scale across the enterprise, however there are clear patterns being applied in the early stages that can be emulated. Helping ensure that API knowledge and expertise is available and accessible by all groups across an organization, and all its geographic regions, ensuring that the entire enterprise is part of the conversation and moving forward in unison.

Software Architecture Design

Governance is all about shaping and crafting the way we design and architect software, leveraging the web, and specifically web APIs, to help drive the web, mobile, device, and network applications we depend on. There are a number of healthy, and not so healthy, patterns across the landscape to consider as we look to shape and transform our software architecture, being honest about the forces that influence what software is, what it does, and what it will ultimately become.

Domain Awareness

Software architecture is always a product of its environment, being influenced by a number of factors that already exist within any given domain. We are seeing a number of factors influence how large enterprises are investing in and defining their software architecture. Here are a handful of the top areas of consideration when it comes to how the domain an enterprise exists within impacts its architecture:

  • Resources - The types of digital resources an enterprise already possesses will drive software architecture, defining how it works, grows, expands, and shifts.
  • Schema - Existing schema define how data is stored, and often gathered and syndicated–even if this is abstracted away through other systems, it is still influencing architectural decisions at all levels.
  • Process - Existing business processes are already in motion, driving current architecture, and are something that cannot immediately be changed without having echoes of impact on future architectural decisions.
  • Industry - External industry factors are always emerging to shift how software architecture is crafted, providing design, development, and operational factors that need to be considered as architecture is refactored.
  • Regulatory - Beyond more organic industry influences, there are regulatory, legal, and other government considerations that will shift how software architecture will ultimately operate.
  • Definitions - The access and availability of machine readable definitions, schema, process, and other guiding structural elements that can help make software architecture operate more efficiently, or less efficiently in the absence of standardization, and portability.

Domain expertise, awareness, and structure will always shape software architecture, and the decision making process that surrounds it. Making it imperative that there be an investment in internal capacity, as well as leveraging external expertise and vendors, when it comes to shaping the enterprise architectural landscape. Without the proper internal capacity, domain knowledge can be minimized, weakening the overall architecture of the digital infrastructure an enterprise will need to move forward.

Legacy Considerations

We can never escape the past when it comes to the software architectural decisions we make, and it is important that we don’t just see legacy as a negative, but also view legacy as a historical artifact that should move forward. The legacy code itself may not always be what moves forward, but the wisdom, knowledge, and lessons learned around the enterprise legacy should be on display. Here are a handful of the legacy considerations we’ve identified through our discussions.

  • Systems - Existing systems in operation have a significant influence over all current and future architectural decisions, making legacy system consideration a top player when it comes to decision making around software architecture conversations.
  • People - Senior staff who operate and sustain legacy systems, or were around when they were developed, possess a significant amount of power when it comes to influencing any new system architecture, and what gets invested in or not.
  • Partners - External partners who have significant history with the enterprise possess a great deal of voting power when it comes to what software architecture gets adopted or not.
  • Trauma - Legacy trauma from historical outages, breaches, and bad architectural decisions will continue to influence the future, especially when legacy teams still have influence over future projects.

Systems, people, partners, and bad decisions made in the past will continue to drive, and oftentimes haunt, each wave of software architectural shifts. This influence cannot be ignored or abandoned, and needs to be transformed into positive effects on next generation investment in software architecture. Change will be inevitable, and legacy technical and cultural debt needs to be addressed, but not at the cost of repeating the mistakes of the past.

Contemporary Considerations

After legacy concerns, we all live in the reality we have been given, and it is something that will continue to shape how we define our architecture. Throughout our discussions with companies, institutions, and government agencies regarding the challenges they face, and the current forces that shape their software architecture decisions, we found several recurring themes regarding contemporary considerations that were making the largest impact:

  • Talent Available - The talent available for the design, development, and deployment of API infrastructure dictates what is possible at all stages.
  • Offshore Workers - The offshoring of work changes the governance dynamics, and requires strong processes, and a different focus when it comes to execution.
  • Mainstream Awareness - Keeping software architectural practices in alignment with mainstream practices helps shape software architecture decisions, allowing them to move forward at a healthier pace.
  • Internal Capacity - It has been stated several times that doing APIs at scale across the enterprise would not be possible without investing in internal capacity over outsourcing, or depending on vendor solutions.

Modern practices continue shaping how we deliver our software architecture, defining how we govern the evolution of our infrastructure, and find the resources and talent to make it happen. Keeping software architecture practices in alignment with contemporary approaches helps streamline the road map, how teams work with partners, and how they outsource and work with external entities to help get the job done as efficiently as possible.

Technically Defined

The technology we adopt helps define and govern how software architecture is delivered, and evolved. There are many evolutionary trends in software architecture that have moved the conversation forward, allowing teams to be more agile, consistent, and efficient in doing what they do. As we studied the architectural approaches of leading API providers across the landscape, and engaged in conversations with a handful of them, we found several technologically defined views of how software architecture is influencing future generations and iterations.

  • Vendors - Specific vendors have their own guiding principles for how software architecture gets defined, delivered, and governed. They are often given an outsized role in dictating what happens next.
  • Frameworks - Software and programming language frameworks dictate specific patterns, and govern how software is delivered, and lead the conversation on how it evolves. Software frameworks can possess a significant amount of dogma that will have a gravity all its own when it comes to evolving into the future.
  • Cloud Platforms - Amazon, Google, and Microsoft (Azure) have a strong voice in how software architecture is defined in the current climate, providing us with the services and tooling to govern the lifecycle. This control over how we define our infrastructure is only going to increase with their market dominance in the digital realm.
  • Continuous Integration / Deployment - CI/CD services and tooling have established a new way of defining software architecture, and establishing a pipeline approach to moving it forward, building in the hooks needed to govern every step of its evolution. Reducing the cycles from annual down to monthly, weekly, and even daily cycles of change.
  • Source Control - Github, Gitlab, and Bitbucket are defining how software is delivered, providing the vehicle for moving code forward, and the hooks for governing each commit, and every step forward any infrastructure makes as it is versioned, and evolved.
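
The CI/CD and source control hooks described above can be combined into a single governance gate on every commit. A minimal sketch, assuming a GitHub Actions setup and the Spectral OpenAPI linter (the workflow name, file layout, and linting choice are illustrative, not prescribed by any of the providers interviewed):

```yaml
# .github/workflows/api-governance.yml (illustrative file location)
name: api-governance
on: [push, pull_request]
jobs:
  lint-contract:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Hypothetical governance gate: lint the service's OpenAPI contract
      # against organization-wide rules before anything is merged.
      - run: npx @stoplight/spectral-cli lint openapi.yaml
```

Every commit across every team then passes through the same check, which is exactly the kind of lever the pipeline approach makes available.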

These areas are increasingly governing how we design, develop, deploy, and manage our infrastructure. Providing us with the scaffolding we need to hang our technological infrastructure on, and gives us the knobs and levers we can pull to consistently orchestrate, and move forward increasingly complex and large enterprise software infrastructure, across many teams, and geographic regions. The decisions we make around the technology we use will stick with us for years, and continue to influence decisions even after it is gone.

Business Defined

When it comes to delivering software architecture, not everything is governed by the technical components, and much of what gets delivered and moved forward will be defined by the business side of the equation. The amount of investment by a business into its overall IT, as well as its more progressive groups, will determine what gets done, and what doesn’t. With the following elements of the business governing software architecture in several cases:

  • Budgets - How much money is allocated for a team to work when it comes to defining, deploying, managing, and iterating upon software architecture.
  • Investors - Many groups are influenced, driven, and even restricted by outside investors, determining what software architecture is prioritized, and even dictating the decisions around what is put to work.
  • Partners - External partners with strong influence over the business discussions that drive software infrastructure decisions play a big role in the governance, or lack of governance, involved.
  • Public Image - Often times the decisions that go into software architecture, and the governance of how it moves forward, will be driven by public image concerns around the company, and its stakeholders.
  • Culture - The culture of a business will drive decisions being made when it comes to developing, managing, and governing software architecture, which can be more challenging to move forward than the technology in many cases.

The governance of software architecture has to be in alignment with the business objectives of an enterprise. Many groups choose to begin their API journeys based upon trends, or the desire of a small group, and have encountered significant friction when trying to bring them into alignment with the wider enterprise business objectives. Groups that addressed business considerations earlier on in their strategy have done much better when it came to reducing friction, and eliminating obstacles from their road map.

Observability

Almost every discussion we’ve had around governance of software infrastructure has included mentions of the importance of observability across next generation iterations. Software designed, delivered, and supported in the darkness or in isolation either fails, or is destined to become the next generation of technical debt. There were several areas of emphasis when it came to ensuring the API driven infrastructure sees the light of day from day one, and continues to operate in a way that everyone involved can see what is happening.

  • Out in Open - Groups who operate out in the open, sharing their progress actively with other teams, and encouraging a transparent process find higher levels of success, adoption, and consistency across their architectural efforts.
  • Co-Discovery - Ensuring that before work begins, teams are working together to discover new ideas, learn about alternative software solutions, create buy-in, and ultimately make decisions around what gets adopted.
  • Collaborative - While identified as sometimes being slower than traditional, more isolated efforts, teams who encouraged cross-team collaboration saw that their architectural decisions were sounder, more stable, and had more longevity.
  • Open Source - Following open source software development practices, and working with existing open source solutions, helps ensure that enterprise software architecture lasts longer, has more support, and follows common standards over other more proprietary approaches.
  • Publicly - When it makes sense from a privacy and security standpoint, groups often articulate that being public by default helps ensure project teams behave differently, enjoy more accountability, and often attract external talent, domain expertise, and public opinion along the way.

Enterprise organizations that push for observability by default find that teams tend to work better together, and have a more open attitude. Attracting the right personalities, encouraging regular communication, and thinking externally by default, not as something that happens down the road. Bringing much needed sunlight and observability into processes that can often be very complex and abstract, and pushing things to speak to a wider audience beyond developer and IT groups.

Shared Process

Having a shared process that can be communicated across teams, going beyond just technical teams, and is something that business groups, partners, 3rd party, and all other stakeholders can follow and participate in, is a regular part of newer, API-centric, software delivery life cycles. Possessing several core elements that help ensure the process for defining, designing, delivering and evolving software architecture is shared by all.

  • Contract - Crafting, maintaining, and consistently applying a common machine readable contract that is available in YAML format is a common approach to ensuring there is a contract that can be used across all architectural projects, defining each solution as an independent business service.
  • Pipeline - Extending the machine readable service contracts with YAML defined pipeline definitions that ensure all software is delivered in a consistent, reproducible manner across many disparate teams.
  • Versioning - Establishing a common approach to versioning code, definitions, and other artifacts, providing a common semantic approach to governing how architecture is evolved in a shared manner that everyone can follow.
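
The contract, pipeline, and versioning elements above become enforceable when a pipeline step checks each service's contract automatically. A minimal sketch in Python, assuming contracts are serialized as JSON and versioned with MAJOR.MINOR.PATCH semantics (the field names and specific rules are illustrative, not a formal standard):

```python
import json
import re

# Pattern for MAJOR.MINOR.PATCH semantic versioning.
SEMVER = re.compile(r"^\d+\.\d+\.\d+$")

def validate_contract(raw):
    """Return a list of governance violations for a serialized contract."""
    contract = json.loads(raw)
    problems = []
    info = contract.get("info", {})
    if not info.get("title"):
        problems.append("missing info.title")
    if not SEMVER.match(info.get("version", "")):
        problems.append("info.version is not semantic (MAJOR.MINOR.PATCH)")
    if not contract.get("paths"):
        problems.append("no paths defined")
    return problems

# A contract that satisfies the hypothetical governance rules.
example = json.dumps({
    "openapi": "3.0.0",
    "info": {"title": "Facilities", "version": "1.0.0"},
    "paths": {"/facilities": {}},
})
print(validate_contract(example))  # []
```

Because the check runs against the shared YAML/JSON contract rather than the code itself, business stakeholders and external partners can read and contribute to the same rules the pipeline enforces.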

Historically, the software development and operation lifecycle has been owned by IT and development groups. Modern approaches to delivering software at scale make it a shared process, including internal business and technical stakeholders, while also sharing the process with external partners, 3rd party developers, and the public. Bringing software architecture out of the shadows, and conducting it on the open web, making it more inclusive amongst all stakeholders, but done in a way that respects privacy and security along the way.

Identifying Potential APIs

Once the architectural foundations have been laid, there are many ways in which large enterprises begin identifying the potential APIs that should be designed, deployed, and evolved, supporting the many applications that will be depending on the underlying platform architecture. Depending on the organization and its priorities, the reasons for how new APIs are born will vary, resulting in different lifecycles, and different services being delivered across internal groups, partner stakeholders, and 3rd party developers.

Throughout this research, we identified that there is no single approach to identifying which APIs should be delivered, but we did work to understand a variety of approaches in use across the landscape, and by the practitioners interviewed for this project. Establishing some common areas for answering the questions around what should be an API, why are we doing APIs, leading us to the how of doing APIs, and uncovering the pragmatic reasoning behind web, mobile, device, and other applications that APIs are driving across the landscape.

Existing Realities

Our existing realities drive the need for APIs, and reflect where we should be looking to provide new services for internal stakeholders, partners, and potentially as new revenue streams for 3rd party developers. While some APIs may be entirely new solutions, it is most likely that APIs will be born out of the realities we are already dealing with on a daily basis, based upon the digital solutions we depend on each day. We identified the most common realities that enterprise groups face when it comes to their present day digital transformation challenges.

  • Database - The existing databases in operation are the number one place groups are identifying potential resources for deployment of APIs. Exposing historically accumulated digital assets using the web, and making them available for use in new applications, to partners, and driving new types of data products for generating the next generation of revenue streams.
  • Website - Our existing websites reflect the last 20 years of the digital evolution of our enterprises, representing the digital resources we’ve accumulated and identified as being important for sharing with partners and the public. HTML representations of our digital assets are always the 101 of API deployment, helping us understand what should also be available as JSON, and other more sophisticated representations.
  • Integrations - Existing software integrations, system to system integrations, represent a rich area for understanding how digital resources are delivered, accessed, and made available throughout existing applications and systems. Providing an important landscape to map out when understanding what should be turned into APIs, having a more consistent process applied, eliminating custom integrations, and standardizing how systems speak with one another.
  • Applications - Existing web, mobile, desktop, and other applications can have a variety of backend system connectivity solutions, but also might have a more custom, bespoke approach to doing APIs that is off the radar when it comes to governing how infrastructure evolves. Providing another rich area for mapping out the connections behind the common applications in use, understanding the internal, partner, and 3rd party APIs and other connections they use.
  • Services - APIs have been around for a while, and exist in a variety of formats. Legacy web services, RPC, FTP, and other API and messaging formats should be mapped out and included as part of the potential API evolutionary landscape. Taking existing services, and evolving them in a consistent manner with all other API-driven services, leveraging web technologies to consistently deliver and manage digital resources.
  • Spreadsheets - The majority of business in the world still happens within the spreadsheet. These portable data stores are emailed, shared, and spread around the enterprise, and represent a rich source of information when it comes to understanding which resources should be published as APIs.

You can’t govern what isn’t mapped out and known. It becomes increasingly difficult to govern software infrastructure that exists across many open and proprietary formats, and is delivered as custom one-off solutions. Governance begins with a known landscape, and the greatest impediments to functional governance across organizations are the unknowns. Not knowing a solution exists, or its architectural approach being something that isn’t part of the bigger picture, leaving it to be a lone actor in a larger landscape of known services operating in concert.

Public Presence

Another reason for having an open, very public approach to selecting, delivering, and operating software infrastructure, is that it establishes a public presence, across web properties, social networks, and other platforms where enterprise organizations can build community. There are a number of ways to identify potential new API resources by just being public, engaging with the community, and establishing API delivery life cycles that involve having a public presence.

  • User Requests - Actively soliciting web, mobile, and developer feedback, both internally and externally is a great way to learn about potentially new API resource opportunities. Leveraging existing users as a source of insight when it comes to what services would make applications better, and demonstrating the importance of investing in an API literate user base.
  • Partner Requests - Actively working with partners, conducting regular meetings regarding digital assets and transformation, seeking feedback on what types of services would improve upon existing solutions, and strengthen partner relations. Investing in existing partners, and using them as a way to evolve the API road map, and increase their dependency on enterprise resources.
  • Public Feedback - Engaging with the public via websites, social networks, forums, and at events, to understand what opportunities are out there when it comes to delivering new API resources. Tapping public awareness around specific topics, within particular domains, and considering suggestions for new APIs outside the enterprise firewall and the traditional box.
  • Media Coverage - Tuning in to popular tech blogs, mainstream media, and other public outlets to study and understand what opportunities are emerging for the delivery of new API services. Tuning into popular trends when it comes to what is happening with APIs, or business sectors that might not have caught up to some of the modern approaches to delivering APIs.
  • Feedback Loops - Cultivating trusted feedback loops with existing users, social networks, and private messaging platforms. Investing in long term feedback loops that tap the knowledge and domain expertise of trusted professionals, who can bring fresh ideas to the table when it comes to which new APIs can benefit the enterprise.
  • Negative Consequences - One significant thing to note with having a public presence, is that not everything will be positive. There are serious privacy, security, and safety concerns when operating your APIs out in the public, and there should be plenty of consideration for how to do it in a healthy way. Acknowledging that not everyone in the public domain will have the enterprise’s best interest in mind.

It isn’t easy soliciting feedback from the general public when it comes to determining the direction a platform road map should head. However, with some investment, curation, and cultivation, a more reliable source of insight regarding the direction of an API platform can be established. The API community across the public and private sector has grown significantly over the last decade, providing a wealth of knowledge and talent that can be tapped, if you know where to look for it.

Improvements

Moving beyond where to look for API opportunities, and into why to pursue them, as well as how to prioritize which resources should be turned into APIs, we heard a lot about investing in the overall improvement of the enterprise as the motivation behind governance. Looking at many of the common incentives behind doing APIs, but also doing it in a more consistent and scalable way that supports the mission and forward motion of the enterprise.

  • Optimization - Seeking to optimize how applications are delivered, and services provided across teams. Providing consistent services that can be used across many internal groups, across partners, and that will improve the lives of 3rd party developers.
  • Common Patterns - Doing APIs, and pushing for governance to help identify common patterns across how software is designed, delivered, and managed. Working to extract the existing patterns in use, help standardize and establish the common patterns, and reinforce their usage across teams and distributed groups.
  • Reusability - Encouraging reusability is the number one improvement we heard from different groups, and see across the landscape. Governing how software is not just delivered, but reused and maximized, ensuring the enterprise is maximizing software spend, as well as the revenue from the services it delivers.
  • Acceleration - Investing in governance to help accelerate how applications are delivered, measuring, standardizing, and optimizing along the way to improve existing efforts, as well as new projects on the horizon. Increasing the speed of not just new services, but how the services are able to be put to work by developers and integrators.
  • Efficiency - Setting into motion patterns and processes that increase overall efficiency around how services are delivered, and how they enable teams to deliver on new projects, applications, integrations. Allowing IT, developers, and business users to benefit from an API focus.
  • Flexibility - Increasing the flexibility of how applications operate, and teams are able to work together and deliver across the enterprise. Encouraging the design and development of APIs that work together, and flexibly achieve organizational objectives.

Providing a set of criteria that can be used to help prioritize which APIs get identified for delivery, and for evolution and versioning. If API resources help deliver on any of these areas, their benefit to the enterprise is increased, and they should be bumped up the list when looking for API opportunities. Always looking for how the enterprise can be improved upon, while also understanding which specific resources should be targeted for wrapping, and exposing as simple web APIs that can be used for both internal and external use cases.

Challenges

The API journey is always full of challenges, and these areas of friction should be identified, and incorporated into the criteria for identifying new API solutions, as well as determining which APIs should be invested in and evolved. While some challenges can be minimized and overcome, many can also cause unnecessary friction throughout the API roadmap, making challenges something to be considered when putting together any API release strategy.

  • Education - What education is required when it comes to acquiring the resources behind any potential API, developing and standing up a proper API, as well as deploying, managing, and supporting an API. What challenges can be foreseen and identified early on, helping weigh the investment in education, training, and learning required along the API journey for any set of services.
  • Maturity - Understanding early on, and putting together a plan on what maturity will look like for any service, acknowledging that every service will begin in a juvenile state, and take time to harden, mature, and become something that is dependable, reliable, and usable in a production environment.
  • Isolation - Identifying resources that are being developed, maintained, and operated in isolation, and working to move them out into the mainstream. While also ensuring that any new services being developed avoid isolation, and are developed, evolved, and managed out in the open, ensuring that services never operate alone.
  • Management - Including management in discussions around which resources should be developed into APIs, including leadership in all conversations involving the targeting, evolution, and even deprecation of API services. Ensuring that the prioritization of API development is always on the radar of management, and there is a general awareness regarding the delivery of services.
  • Consistency - Realizing that while consistency is the goal, it may be an elusive, non-stop chase to actually realize consistency across teams. It should be a goal, but also realistically understanding that it won’t be easy to achieve, and while we want to strive for perfection, sometimes good enough will have to do for some services.
  • Reusability - Similar to consistency, reusability is an obvious goal, and should be worked towards, but it will also be elusive, and not always reliably achieved over time. There might still be redundancy for some services, and overlapping aspects of delivering services, while in some areas reusability will be achievable.
  • Build It And They Will Come - There has been a significant amount of reflection regarding targeting, developing, and publishing APIs that were not needed based upon an “if you build it, they will come” mentality–where most often, nobody came, and the work was in vain.

Challenges are a fact of life in the delivery of software, and evolving complex systems using APIs. Identifying challenges should be a natural part of targeting resources for delivery as APIs. Challenges can increase friction when delivering services, and should be carefully evaluated before tackling the development of any new services. It is easy to identify the potential of new APIs, but it takes a more seasoned eye to understand the potential challenges.

New ideas for APIs will become numerous once you begin looking, based upon existing resources, applications, and the feedback of internal groups, partners, and the public. Along with all of the possibilities that come along, a standardized, pragmatic approach to understanding the potential, value, as well as challenges with each potential idea should be part of the equation.

Defining Data Models & Standards

To help realize and deliver upon governance at scale, it will take heavy investment in standardizing data models, and incorporating existing patterns and standards throughout the API delivery lifecycle. Many enterprise API development groups are streamlining and standardizing the delivery of APIs through the adoption and development of standards across operations, which is also contributing to adoption and integration, and removing friction for application developers.

The adoption of common data models, interfaces, media types, and web standards helps contribute to the delivery of consistent APIs at scale, but they can also prove to be a challenge for some teams, and even be seen as a threat by others. There are a number of ways in which teams are pushing for standardization across their operations, and helping achieve more consistency, reuse, and the desired results across operations. Reflecting one of the strengths of web APIs, in that they employ web standards to achieve wider adoption, and the delivery of valuable resources at web scale.

Core Definitions

A suite of approaches have emerged in the last decade for designing, developing, evolving, and applying common API patterns across the API lifecycle. These standardized approaches to defining and delivering APIs, using common machine readable specifications, and widely used patterns, have become central to API governance discussions. Providing the fuel for the growth of the API sector to serve mobile applications, as well as the growth of other emerging channels like voice, bot automation, and the connecting of everyday objects to the net. Helping the enterprise get more organized about how services are delivered across the organization at scale.

  • Resource-Defined - RESTful design patterns have provided a simple approach to taking corporate resources and defining them as an intuitive, reusable, potentially web-scale stack of API resources that can be used across a variety of applications. REST provides a philosophy that can be adopted across the enterprise to help organize digital resources as a reusable stack of resources that can be discovered and put to use across many channels.
  • Schema-Driven - JSON Schema is being used to take a variety of schema and standardize them for use in RESTful API resource delivery. Providing a reusable blueprint that can be used across the request and response model for all APIs. Deriving and standardizing existing schema in use, and making them available for use in newly developed and evolving APIs, allows teams to achieve many of the objectives set out as part of modern API strategies.
  • Domain Driven - The business domain is used across the enterprise for guiding the identification, development, evolution, and standardization of a variety of API definitions in use across the enterprise. Lines of business, industry definitions, and a focus on the domain helps establish areas of concern, and the separation of services, allowing for the decoupling of enterprise resources used across systems, but working in unison to deliver a single set of business objectives.
  • Legacy Abstraction - Continued movements to decouple, redefine, and evolve legacy systems are pushing forward the identification of common patterns, and pushing to map, transform, and give them new life as newer web APIs. Taking legacy databases and system interfaces, and distilling the wisdom that exists across them, to help drive the development of common standards.
  • Vocabularies - API development teams are establishing common vocabularies based upon the standardized language already in use, but also essentially taking the slang that is used in bespoke systems and helping tame it, and add it to the common lexicon when it makes sense. Providing a standard language that can be used across the enterprise to talk about services, resources, and digital assets.
  • Discovery - Many groups expressed challenges around standards not seeing the desired adoption because other teams could not find existing schema, definitions, and other existing standards. Emphasizing the importance of a comprehensive, actively maintained, and evangelized catalog of core definitions across the enterprise. Providing a single, or distributed location where everyone can find and publish their common definitions.

The definitions coming out of existing API development efforts are being organized into catalogs and discovery systems that can be used to guide governance efforts. Mapping out the known landscape across the enterprise, and turning it into the common patterns that can be reused across the design, development, and operation of the next generation of APIs. Distilling down the essence of the enterprise so that it can become the building blocks of an API program, while also allowing each stop along the lifecycle to be quantified, measured, and considered as part of a wider governance strategy.
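
To make the schema-driven approach above concrete, here is a minimal sketch of the idea, with hypothetical field names, showing how a single reusable definition can govern both the request and response models of an API. In practice this is the role JSON Schema and an off-the-shelf validator would play; this hand-rolled check is only illustrative:

```python
# A single, reusable schema definition, shared across the request and
# response models of every API that touches the "account" resource.
# Hypothetical example -- the field names are illustrative only.
ACCOUNT_SCHEMA = {
    "required": ["id", "name", "email"],
    "types": {"id": int, "name": str, "email": str},
}

def validate(payload, schema):
    """Minimal schema check: required fields present, types correct."""
    errors = []
    for field in schema["required"]:
        if field not in payload:
            errors.append(f"missing required field: {field}")
    for field, expected in schema["types"].items():
        if field in payload and not isinstance(payload[field], expected):
            errors.append(f"{field} should be {expected.__name__}")
    return errors

# The same definition governs inbound requests and outbound responses.
request_errors = validate({"id": 1, "name": "Jane"}, ACCOUNT_SCHEMA)
response_errors = validate({"id": 1, "name": "Jane", "email": "j@x.co"}, ACCOUNT_SCHEMA)
```

The point is not the validator itself, but that one machine readable definition is derived once, cataloged, and reused everywhere it applies.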

Doing Business On The Web

APIs are built on the web, using and benefiting from over 25 years of the web's evolution. There are a number of elements to consider when working to identify and define common standards for use across the governance of any API program. While the API strategy should be rooted in definitions derived from the core of the enterprise, secondarily it should be embracing the web, and employing the common patterns that make the web work as the foundation for delivering APIs.

  • Web Standards - The web is the foundation for the delivery of APIs. Most APIs will use HTTP as a transport, and be employing URLs, HTTP verbs, headers, parameters, and other common web standards. Web standards should be part of any governance strategy to help establish common patterns and definitions for use across operations.
  • Media Types - Media types are a fundamental part of the web, and help establish message formats that will be widely recognized outside the enterprise, encouraging the reuse and adoption of APIs that employ common media types. Allowing consumers to negotiate the format that makes the most sense to their team, and the types of applications they are looking to develop.
  • Industry Schema - Industry level schema are emerging and maturing for use across API operations. Specifications like FHIR, PSD2, and other schema, along with API design patterns are evolving to help support industry focused API operations, while encouraging reuse and interoperability across disparate groups.
  • Open Source - The usage of open source software, tooling, specifications, and processes is helping deliver on the API vision across the enterprise. The open source ethos plays well with the delivery of web APIs, encouraging reuse and adoption, and bringing the observability necessary to help APIs succeed.

APIs are all about doing business on the web. The web provides the platform in which any API program will operate. When it comes to defining schema, standards, and common patterns for use across API operations, the web is always the beginning of the conversation. While enterprise defined patterns will always be front and center, the standards used to operate the web should always trump localized definitions, and be given priority whenever possible. Don’t reinvent the wheel when it comes to the web, always reuse and implement what is already known.
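
The media types element above hinges on content negotiation. Here is a minimal sketch of the idea in Python, assuming a hypothetical set of supported formats; a full implementation would also honor the quality values (`q=`) defined by the HTTP specification, which this sketch ignores:

```python
# Hypothetical list of media types a service supports. A consumer asks
# for a format via the Accept header; we fall back to JSON otherwise.
SUPPORTED = ["application/json", "text/csv", "application/xml"]

def negotiate(accept_header, supported=SUPPORTED, default="application/json"):
    """Return the first supported media type listed in the Accept header."""
    for part in accept_header.split(","):
        # Strip any parameters (e.g. "; q=0.8") and normalize case.
        media_type = part.split(";")[0].strip().lower()
        if media_type in supported:
            return media_type
        if media_type == "*/*":
            return default
    return default
```

Letting consumers negotiate the format this way is what allows the same API resource to serve many different teams and application types.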

Under The Influence

When learning about new standards, and considering which standards to adopt, it can be easy to find yourself under the influence of specific vendors, competing standards, programming communities, and other factors. Careful evaluation of standards is important, and so is an awareness of the common factors that may shift your opinion one way or another, or even obfuscate what is real and prevent you from achieving your objectives.

  • Caught in Trends - Avoid getting caught up in the trend cycles that can often make it difficult to understand the hype around specific specifications. Do your research, understand best practices and adoption levels, and make the sensible decisions around the impact to your own efforts.
  • External Entities - While engaging with external entities, understand what their priorities are when it comes to standards and specifications. Consider what affinity may exist between enterprise objectives and any external entity being engaged with, and make sure there is the right alignment, with influences pushing efforts in the right direction.
  • Internal Demands - Similar to external entities, understand what internal teams' priorities are, and don’t always assume internal requests will have the overall enterprise objectives in mind. Fully understand what the awareness and motivation are around the implementation of specific standards, and how they fit into the overall strategy.
  • Feedback Loops - Ensure that feedback loops are diverse, and provide a wealth of opinions around what types of standards and specifications should be supported, providing the widest possible view of the landscape when it comes to adoption and investment.
  • Organic Change - Keeping an eye on vendor-induced standards adoption versus a more organic approach to the growth of standards, internally as well as in the outside community. Working to understand when a standard is being artificially inflated or amplified for objectives beyond its core mission.

There are plenty of currents to get caught up in when it comes to identifying, defining, and evolving standards. Not all will bear fruit, or realize the type of adoption they need to be successful. Establishing a balanced view of the landscape across internal, and external actors, while keeping counsel with a diverse set of voices can help ensure you understand which API specifications, standards, and definitions will help move the enterprise forward.

Taking The Lead

While there are a number of ready to use standards available for the web, and organically grown out of the API community, these standards won’t always find their way into the enterprise. Leading organizations demonstrate that it takes a structured effort to define, disseminate, educate, and evolve standards across large organizations, with a number of proven tactics for taking the lead when it comes to standardizing API infrastructure across the enterprise.

  • Workshops - Organizing, conducting, and growing the number of workshops held to introduce individuals across many teams to a variety of common standards and specifications.
  • Discussions - Formalizing discussions around emerging standards, and those that are in use, to help push forward awareness, and adoption of standardized approaches across groups.
  • Collaboration - Push teams to work together when it comes to sharing the standards in use, showcasing the investment they’ve made, and working together to understand the tooling, services, and standards being used.
  • Event Storming - Putting event storming, a rapid, lightweight group modeling technique, to work to help accelerate the identification, evolution, and adoption of standards that meet specific teams' needs.
  • Influencers - Identifying, investing in, and cultivating influencers who exist within current groups, and encourage them to evangelize and help spread the good word about standards across the enterprise.
  • Ask Questions - Always be asking questions about the standards, or lack of standards in use across the enterprise, pushing the conversation forward at all times when it comes to standards.
  • Challenge Assumptions - Making sure teams don’t get complacent, that the status quo is always being challenged, and that the internal domain is always rising to a higher level of standardization whenever possible.

It takes standards bodies to move forward common standards at the web and industry levels, and it takes the same approach to push forward the adoption and usage of standards within the enterprise. Leading enterprise organizations are able to quantify, measure and evolve the infrastructure in a more organized way through the adoption of common schemas, specifications, and standards. Providing a common vocabulary for all teams to use when designing, deploying, and managing services that can be used consistently across the enterprise, and its public interests.

Development to Production

After understanding the roles needed to realize governance, the underlying platform architecture that is needed, how organizations can identify where the API opportunities are, and how to make sure groups are putting standards to work, we scrutinized how groups are moving APIs from development to production in a more structured way. Governing how teams are efficiently moving APIs from idea and design, to actually putting services to work in a production environment at scale across large teams. Documenting the lifecycle of a service, and the common elements of how enterprises are getting the job done on a regular basis.

Well Defined

To be able to deliver APIs at scale in a consistent way, teams are relying on a well honed lifecycle that has been defined, proven, and evolved by senior teams. Forcing structure and rigor throughout the evolution of all services, putting governance in front of teams, and requiring them to execute in a consistent way if they expect to reach a production reality with their services. Focusing on a handful of structured formats for imposing governance at the deployment level.

  • Contract - Requiring ALL services begin with a machine readable OpenAPI contract defining the entire surface area of the API and its schema. Leveraging the contract as the central truth for what the service will deliver, and how governance will be measured throughout the lifecycle of the service.
  • Process - Providing a well defined process for all developers laying out how any service moves from design to production, with as much detail regarding each step along the way. Helping all developers understand what they will face as they work to move their services forward in the enterprise.
  • Scorecard - Having a machine readable checklist and scorecard, with tooling to help each developer fork or establish an instance for their service. Providing a checklist of everything they need to consider, that allows them to check off what has been done, what is left to be done, and providing a definition that can be used to define and report upon governance along the way.
  • Cycles - Provide a variety of cycles that every service will need to go through before it will be production worthy, forcing developers to iterate upon their services, and harden and mature them before they will be considered ready for production.
  • Reviews - Require all services go through a series of lifecycle reviews by other teams, pushing service owners to present their work to each review team, and work with them to satisfy any concerns, and make sure it meets all governance criteria.
  • Clinics - Providing developers with a variety of clinics where they can receive feedback on their work, improve upon their service, and improve the health of their work before submitting it for inclusion in a production environment.

Enterprise organizations that provide structure for API development teams find it much easier to realize their governance aspirations. The scaffolding is already there to think about the consistency of services, and the face to face, and virtual scrutiny of services helps provide the environment for governance practices to be executed, enforced, and evolved before any service reaches a production state. A well defined API deployment lifecycle will help contribute to a well defined API governance practice.
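
The contract and scorecard elements described above can be sketched together. What follows is a hypothetical scorecard, not a standard, run against an OpenAPI definition that has been parsed into a dictionary; real governance rulesets would check far more than these few criteria:

```python
# Hypothetical governance scorecard: a handful of checks run against an
# OpenAPI definition (parsed to a dict) before a service can advance.
def scorecard(openapi):
    """Return (individual checks, overall pass/fail) for a contract."""
    checks = {
        "has_info_title": bool(openapi.get("info", {}).get("title")),
        "has_version": bool(openapi.get("info", {}).get("version")),
        "has_paths": bool(openapi.get("paths")),
        "all_operations_described": all(
            op.get("description")
            for path in openapi.get("paths", {}).values()
            for op in path.values()
        ),
    }
    return checks, all(checks.values())

# Illustrative contract -- title and path are made up for this sketch.
contract = {
    "openapi": "3.0.0",
    "info": {"title": "Facilities API", "version": "1.0.0"},
    "paths": {"/facilities": {"get": {"description": "List facilities."}}},
}
checks, passed = scorecard(contract)
```

Because the checklist is machine readable, the same scorecard can be forked per service, checked off as work progresses, and reported upon as part of the wider governance strategy.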

Virtualization

One sign of enterprise groups who are further along in their governance journeys is when there are virtualized environments being put to use. Requiring all API developers to mock and iterate upon their APIs in a virtualized environment, presenting them as if they are real, before they ever are given a license to write any code, let alone reach a production state for their services.

  • Mocking - Creating mock APIs for all endpoints, virtualizing every aspect of the surface area of an API, allowing a service to be iterated upon early on in its lifecycle.
  • Data - Requiring virtualized and synthesized data be present for all mocked APIs, returning realistic data with responses, reflecting behavior encountered in a production environment.
  • Sandbox - Providing a complete labs and sandbox environment for developers to publish their mocked APIs into, reflecting what they’ll encounter in a production environment, but done in a much safer and more secure way.

Virtualized environments provide an important phase in the journey for APIs moving from concept to reality. Establishing a safe environment for developers to iterate upon their work, and encounter many of the challenges they’ll face in a public environment, without any of the potential for harm to users or the platform. Ensuring that by the time a service is ready for production, most of the rough edges have been worked out of the service contract, and for the team behind it.
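
A sketch of the synthesized data requirement above, assuming a simple, hypothetical field specification; a real virtualization layer would be driven by the API's own schema definitions rather than this made-up spec format:

```python
import random
import string

# Synthesize realistic-looking mock records for a virtualized API,
# driven by a simple field specification. Field kinds are hypothetical.
def synthesize(spec, count=3, seed=None):
    """Generate `count` mock records matching the field spec."""
    rng = random.Random(seed)  # seedable, so mocks can be reproducible

    def value(kind):
        if kind == "int":
            return rng.randint(1, 9999)
        if kind == "name":
            return "".join(rng.choices(string.ascii_lowercase, k=8)).title()
        if kind == "email":
            user = "".join(rng.choices(string.ascii_lowercase, k=6))
            return f"{user}@example.com"
        return None

    return [{field: value(kind) for field, kind in spec.items()}
            for _ in range(count)]

records = synthesize({"id": "int", "name": "name", "email": "email"},
                     count=2, seed=42)
```

Returning data like this from mocked endpoints lets consumers exercise realistic responses long before any production system is touched.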

Technology

One of the most significant ways in which we’ve found enterprise groups governing the evolution of their APIs is through the technology they employ. This technology is providing much of the structure and discipline that organizations are depending on to help ensure that APIs are being developed, and ultimately deployed in a consistent manner. Bringing most of the governance to the table for some organizations who haven’t begun moving their governance strategy forward as a formal approach.

  • Authentication - Requiring standards-based approaches to authentication using Basic Auth, API keys, OAuth, and JWT. Ensuring teams understand when to use each approach, and how to properly configure and use it as part of a larger API governance strategy.
  • Framework - Relying on the programming frameworks in use to inject discipline into the process, dictating the governance of how APIs are delivered before they are ready for a production environment.
  • Gateway - Applying the policies and structure necessary to govern API services as they are made available in a production environment. Many groups also had a sandbox or development edition of their gateway emulating many of the same elements that will be found in a production world.
  • Management - Similar to the gateway, groups are relying on their API management layers to help govern what APIs do, providing transformations, policy templates, and a wide variety of other standardization that occurs prior to APIs being made available in a production environment.
  • Vendor - The reliance on technology to deliver governance at the API deployment level gives a lot of control to vendors when it comes to governing the API lifecycle. If a vendor doesn’t provide a specific way of doing things, it may not exist within some groups. Dictating what governance looks like for many enterprise groups.
  • Tooling - Most groups have an arsenal of open source, and custom developed tooling for helping push code from development to production, validating, scanning, transforming, shaping, and hardening code and interfaces to be ready for production usage.
  • Encryption - Requiring encryption by default for storage, and in transport, using technology to ensure security is a default parameter for everything that is exposed publicly. Reducing the possibility of a breach, and minimizing the damage when one does occur.

Demonstrating how important the technological choices we make, and the architectural planning we discussed earlier, are to the overall API governance conversation. The services, tooling, and applications we adopt will either contribute to our governance practices, or they won’t. Potentially enforcing governance for all APIs as they move from development to production, in a way that teams cannot circumvent, and oftentimes don’t even notice is occurring behind the scenes.
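
As one small example of the authentication element above, here is a sketch of an API key check as a gateway might apply it. The key and header name are hypothetical; a real deployment would back this with a key store, key rotation, and rate limiting:

```python
import hmac

# Hypothetical set of issued API keys -- in practice these would live
# in a managed key store, not in code.
VALID_KEYS = {"k-1234567890abcdef"}

def authenticate(headers):
    """Return True when the request carries a recognized API key."""
    presented = headers.get("X-Api-Key", "")
    # Constant-time comparison guards against timing attacks.
    return any(hmac.compare_digest(presented, key) for key in VALID_KEYS)
```

Centralizing a check like this in the gateway or management layer is what lets authentication policy be governed once, rather than re-implemented by every team.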

Orchestration

Augmenting the core technology, there are a number of orchestration practices we found that help quantify and enforce governance on the road from development to production. Dictating how code, artifacts, and other elements included as part of the API lifecycle move forward, evolve, or possibly get held back until specific requirements are met to meet wider governance criteria.

  • Pipeline - CI/CD services, tooling, and mindset have introduced a pipeline workflow for many teams, standardizing the API delivery process as an executable, repeatable, measurable pipeline that can be realized across any team.
  • Stages - The defining of clear stages that exist after development, but before production, requiring quality assurance, security reviews, compliance audits, and other relevant governance practices to be realized.
  • Hooks - Well defined pre and post commit hooks for all service repositories, requiring that governance is applied throughout a service’s pipeline, by default for all services, no matter which organization they emerge from.
  • DevOps - Ensuring that all teams are competent and skilled enough to execute on behalf of their services from beginning to end, owning and executing at every stage of the lifecycle. Reducing reliance on specialized teams, and eliminating bottlenecks.
  • Logging - Identifying the logging capabilities of the entire stack for each service being delivered. Making sure logging is turned on for everything, and all logs are shipped to a central location for auditing, and when possible real time analysis and response.

Orchestration provides some clarity on the automation side of moving services from development to production, while also enforcing governance along the way. Allowing for an assembly line delivery of consistent services, and the iteration of each version, in alignment with the overall governance strategy. Reducing the chance for human error, and increasing the chance for consistent execution of the enterprise API strategy at scale across many different teams.
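
The pipeline and stages described above can be sketched as an ordered series of governance gates, where a service only advances toward production while every gate passes. The stage names and service fields here are hypothetical, standing in for the quality assurance, security, and compliance checks a real pipeline would run:

```python
# Run a service through ordered governance gates; halt at the first
# failing stage. Stage names and criteria are illustrative only.
def run_pipeline(service, stages):
    """Return (stages passed, stage halted at or None)."""
    passed = []
    for name, gate in stages:
        if not gate(service):
            return passed, name  # held back until this gate is satisfied
        passed.append(name)
    return passed, None  # cleared every gate; production ready

STAGES = [
    ("design-review", lambda s: s.get("contract") is not None),
    ("security-scan", lambda s: not s.get("open_findings", 0)),
    ("qa", lambda s: s.get("tests_passing", False)),
]

ok = {"contract": "openapi.yaml", "open_findings": 0, "tests_passing": True}
bad = {"contract": "openapi.yaml", "open_findings": 2}
```

In a real CI/CD setup each gate would be a pipeline job; the value of modeling it this way is that progress is measurable and reportable per stage.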

Legal
Beyond the technology, the legal department should have a significant influence over APIs going from development to production. Providing a structured framework that can generally apply across all services easily, but also providing granular level control over the fine tuning of legal documents for specific services and use cases. With a handful of clear building blocks in use to help govern the delivery of APIs from the legal side of the equation.

  • Terms of Service - Have a universally applicable and modular, as well as possibly machine readable and human readable, terms of service governing all services from a central location.
  • Privacy Policy - Have a universally applicable and modular, as well as possibly machine readable and human readable, privacy policy governing all services from a central location, protecting all platform users from harm.
  • Security Policy - Provide a comprehensive security policy that governs how services are secured, reflecting the technologies, checklists, tooling, and reviews that are in use by all team members, providing an overview for all stakeholders to understand.
  • Licensing - Establish clear code, data, and interface licensing to be used across the entire API stack, allowing developers to properly license their services, as well as understand the provenance for the systems and services they depend on.
  • Service Level Agreements - Have a universally applicable and modular, as well as possibly machine readable and human readable, service level agreement (SLA) that can be applied across all services, measured, and reported upon as part of a wider governance strategy.
  • Scopes - Define and publish the OAuth access scopes as part of a formal set of legal guidance regarding what data is accessible via services, and the role users and developers play in managing scope that is defined by the platform.

The legal department will play an important role in governing APIs as they move from development to production, and there needs to be clear guidance for all developers regarding what is required. Similar to the technical and business elements of delivering services, the legal components need to be easy to access, understand, and apply, while also making sure to protect the interests of everyone involved with the delivery and operation of enterprise services.
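
As an example of what "machine readable" might mean for the legal building blocks above, here is a hypothetical service level agreement expressed as data, with illustrative thresholds (not recommendations), that can be applied centrally and measured against reported metrics:

```python
# Hypothetical machine readable SLA, defined once and applied to all
# services from a central location. Thresholds are illustrative only.
SLA = {
    "uptime_pct": 99.9,            # minimum monthly availability
    "p95_latency_ms": 250,         # maximum 95th percentile latency
    "support_response_hours": 24,  # maximum first-response time
}

def sla_report(measured, sla=SLA):
    """Compare measured metrics to the SLA; True means within bounds."""
    return {
        "uptime_pct": measured["uptime_pct"] >= sla["uptime_pct"],
        "p95_latency_ms": measured["p95_latency_ms"] <= sla["p95_latency_ms"],
        "support_response_hours":
            measured["support_response_hours"] <= sla["support_response_hours"],
    }

report = sla_report({"uptime_pct": 99.95, "p95_latency_ms": 180,
                     "support_response_hours": 6})
```

Expressing the agreement as data is what makes it possible to measure and report on SLA compliance as part of the wider governance strategy, rather than leaving it as legal prose only.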

Making APIs Available to Consumers

The next step in the life cycle of properly governed APIs is making them available to consumers after they’ve been published to a production environment. The governing of APIs is not limited to the technical side of things, and this is where we begin understanding how to consistently deliver, measure, and understand the impact of API resources across the many consumers who are integrating these valuable resources into their applications. Shining a light on the business and politics of how digital assets are being put to use across the enterprise.

This portion of this governance research is intended to provide a basic list of the building blocks used by enterprise groups to help reduce friction when putting APIs to work, but also make sense of how consumers are using API resources, establishing a feedback loop that guides the road map for the future of the platform. Taking us back to the beginning of this research and informing how we should be targeting the development of new APIs, the evolution of existing services, and in many cases the deprecation and archiving of services. Ensuring governance goes well beyond the technical details, and making sure it benefits the platform, as well as consumers.

Known Consumers

Making your APIs available to consumers requires doing a lot of research on who you are marketing them to, and positioning yourself to speak to an intended audience. Tailoring not just the design of your APIs, but the overall presentation, messaging, and even the portal, documentation, and other building blocks to speak to a particular audience. For many API providers, APIs might be made available to multiple audiences, in a variety of ways, based upon knowing their customers, and presenting exactly the resources they need to get a specific job done.

  • Studies - Conducting regular studies of what internal, partner, and public user groups are using, and needing, when it comes to developing applications, and integrating systems.
  • Landscape - Establishing an understanding of the industry landscape for the area being targeted by services, and regularly tuning into and refining the understanding of what consumers are using across that landscape.
  • B2B - Positioning the API to speak to a B2B audience, providing separate portal, documentation, and other resources to cater to a business audience.
  • B2C - Positioning the API to speak to a B2C audience, providing separate portal, documentation, and other resources to cater to a consumer audience.
  • Partners - Providing a unique set of resources that speak to partner groups, providing separate portal, documentation, and other resources to cater to exactly what existing partners will be needing.
  • Internal - Positioning the API to speak to internal groups, providing separate portal, documentation, and other resources to cater to the needs of development groups within the enterprise.
  • Context - Making sure services have knowledge of the context into which they will be delivering resources. Different patterns, processes, and practices work well within different contexts, while others will fail, depending on the context that is relevant to the consumer.
  • Office Hours - Holding conference calls, or virtual office hours, for different consumer groups, being available for discussion around what APIs are available and their supporting resources, as well as contributions to the platform road map.

Knowing your existing, and potential, API consumers is essential to positioning your API program to speak to its intended targets. It is difficult to design and present the right set of resources for an audience you do not understand. Knowing your consumers is something that should happen before you begin the development of services, as well as being an ongoing feature of a platform. Understanding the challenges of your consumers, and shifting the road map to stay in alignment with your consumer audience, is critical to realizing platform-wide governance.

Common Patterns

While studying the consumer outreach strategies of leading API providers, as well as the ones that were interviewed, there are common patterns at play defining how to best reach your audience. The consistency of governed APIs speaks to how to best reach a wide audience, helps increase the impact APIs will have in the applications they are used in, and reduces the overall friction and support required to operate them. There are several common patterns present when looking at how organizations are presenting APIs to their consumers publicly and privately.

  • API Design - The design of APIs, using common RESTful resource design patterns, helps present simple, intuitive, and familiar resources that speak to as wide an audience as possible.
  • Developer Portals - Consistently designed, easy to navigate, well branded portals provide a familiar, known destination in which consumers can discover, onboard, and stay in touch with where an API platform is going.
  • Documentation - Using common open source documentation across APIs provides an interactive, hands-on way for developers to learn about APIs, understand how to integrate with them, and be able to regularly check back in for added features and benefits that they provide.
  • Definitions - Providing machine readable OpenAPI definitions, and Postman Collections, gives consumers a portable definition of what an API does, which they can use in their client tooling to generate code libraries, set up monitors and tests, and generally understand what is possible.

Common design and presentation patterns are one of the reasons many of the leading API providers have established their foothold with their consumers. When you study the approach of Amazon, Google, Twitter, Twilio, Stripe, and other leading API providers, you see that they all use consistent design patterns, as well as provide similar portals, documentation, and other resources for their consumers. Governing the presentation layer for their API driven services reflects the consistency consumers are used to when working with multiple API providers, even across different business sectors.
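The machine readable definitions mentioned above do not need to be elaborate to be useful. A minimal OpenAPI 3.0 definition, sketched here with hypothetical paths and a hypothetical service name, is already enough to drive documentation, code generation, mocking, and testing:

```yaml
openapi: 3.0.0
info:
  title: Facilities API        # hypothetical service name
  version: 1.0.0
paths:
  /facilities:
    get:
      summary: List facilities
      responses:
        '200':
          description: A list of facility records.
  /facilities/{id}:
    get:
      summary: Retrieve a single facility by its identifier
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: string
      responses:
        '200':
          description: A single facility record.
```

Keeping even a bare definition like this consistent across services is what makes the common presentation patterns above possible.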

Communication

The next aspect of presenting production APIs to consumers involves communication, and ensuring that all stakeholders are kept up to speed on what is happening. Keeping a steady stream of information flowing around the platform, blending it with, and encouraging activity on, feedback loops, with the intent of driving the platform road map.

  • Updates - Providing updates on a blog, or other mechanism, helping keep consumers up to date on what is going on with the platform, and making sure everyone knows there is someone home.
  • Roadmaps - Publishing a public road map for API consumers to help them understand what is being planned, and what the future holds. Also maintaining private versions of the road map for internal groups, and potentially partners.
  • Issues - Being transparent and communicative around what the current known issues are with a platform, and publishing a list of anything currently outstanding.
  • Change Logs - Translating the road map, and issues, into a change log for the platform, showcasing what has historically occurred with the platform.
  • Showcase - A published listing of applications and developers, showcasing the work being done by API consumers, highlighting the value a platform brings to the table.

Maintaining a steady stream of communication around what is happening with an API platform is a clear signal coming from the strongest enterprise API platforms out there. You see regular communication around what is happening, and what is being worked on, with teams reaching out to each other, sharing healthy practices and challenges, and showcasing what is being done with their API resources.

Realizing API Governance

Everything covered so far in this document feeds into what should be considered as part of the overall governance of an API platform, but focuses on the actual delivery of APIs. This is the section where we look at what is needed specifically for governance, and what teams are doing to invest in the governance of APIs across their teams, projects, and the lifecycle of their operations. There are a number of areas we identified that were relevant for groups who are actively realizing governance across their operations.

Structure

One key component of API governance at enterprise organizations who have been doing it a while, and have made significant investment in their efforts, is the presence of organized structure and teams dedicated to advancing governance across the enterprise. While these organizational structures are often defined by many different names, they have some common elements worth noting.

  • Organization - Establishing a formal organization within the enterprise that is dedicated to API infrastructure, and developing a structured approach to governance, and the shared strategy across all teams.
  • Core Team - Beginning with a small, focused core team for the API strategy and governance, then expanding and growing as it makes sense, based upon adoption across the larger enterprise.
  • Enablement Team - Providing an enablement team that can go out and work with individual teams to help enable them to realize healthier API lifecycle practices, and achieve governance objectives.
  • Advisory Board - Developing an advisory board of internal, and possibly external individuals who can provide regular feedback on the API strategy, and help move forward the governance conversation.
  • Legacy Teams - Involving legacy teams in the central API strategy and governance team to make sure the legacy of the enterprise is reflected and understood as efforts evolve and move forward.

There were many variations in how enterprise organizations are organizing their API teams, some with more of a centralized approach, and others possessing a decentralized, more organic feel. Some come straight out of CIO and CTO groups, while others were more bottom-up, organically grown efforts, reflecting the tone of the conversation occurring at different types of enterprise organizations.

The Approach

While there were a number of approaches used to organize and execute on the API governance vision across different enterprises, there were some common approaches, and advice regarding how to do it in a pragmatic way. Providing some key elements to consider as organizations think about forming their own strategy, and putting it into motion at their own enterprise organizations.

  • Start Simple - Keeping things as simple as possible when getting going. Not trying to bite off more than you can chew, and overpromising to the organization. Start with the basics, get involvement, and buy-in, then move forward in a logical fashion.
  • Not Heavy Handed - Refrain from being heavy handed with governance policing and enforcement. It was repeatedly stated that heavy handed efforts get an overwhelming amount of pushback, and can set back efforts significantly.
  • Inline Defined - Provide guidance, education, and artifacts inline. Do not expect people to read governance guides, and understand what is going on from the start. Feed them information on a regular basis through the channels they are already tuned into, including corporate communication channels, their integrated development environments (IDE), and other existing entry points people will stay tuned to.
  • Having A Mandate - Think deeply about having a mandate regarding governance. Some feel it is better stated as a mission, rather than a mandate, considering the negative impact a mandate might have when it comes to adoption, and participation.
  • Selective Enforcement - Be creative in how you enforce governance across operations. Be very selective about where you enforce and push back on users, finding inspiring, and motivational, ways to enforce governance, being more carrot than stick.
  • Build Community - Working to build community around the API strategy and governance organization, building relationships across teams, recruiting advocates, and working to train and educate on a regular basis.
  • Evangelism - Spending a significant amount of time reaching out to internal stakeholders, partner contacts, and the public when it makes sense, but most importantly, always evangelizing to management and leadership.

Moving beyond just a group of people in name only, and having a structured and planned approach to executing on API governance across an organization on a daily, weekly, monthly, quarterly, and annual basis. Establishing a deliberate tone for the API governance effort, measuring its impact across groups, and adjusting and evolving as required. Developing a strong voice, and measured approach, over time, while understanding what works, and what doesn’t, staying agile and flexible in how APIs are governed.

Technology

Building upon the technological layers present in every previous section of this report, we wanted to take another look at how technology is used specifically for defining, measuring, and reporting on governance efforts. Tracking the specific technological solutions that enterprise groups are using to understand, as well as enforce, the governance strategy on the ground, in real time.

  • Definitions - Use OpenAPI, Postman Collections, JSON Schema, and other machine readable artifacts available for all stages of the API lifecycle to quantify, measure, and report on how well governance efforts are being realized.
  • Management - The API management layer provides a number of features that help apply policies, rate limit, log, and track what is happening with all APIs. Primarily used to understand consumer behavior, but it can also be used to understand provider, publisher, and developer behavior as well.
  • Gateway - Providing the single point of entry for all services, allowing for transformation, translation, as well as all the features brought to the table by API management solutions. Providing the perfect opportunity to enforce, as well as measure how well governance is being applied across the organization.
  • Logging - Logs shipped centrally will be the most important way that governance efforts measure and report on what is happening across the enterprise platform. Without a central logging strategy, governance will be flying blind, unable to see into all the services it is supposed to be governing.
  • Monitoring - Making the monitoring of ALL services the default. Tracking all services from multiple regions, and understanding if they are meeting internal or external SLAs. Providing a key benchmark for whether governance is being effective across services.
  • Testing - Getting much more granular and making sure that APIs are doing what they should. Taking plain business assertions, and testing them against APIs using machine-defined tests that can be executed in real-time, and on a schedule.
  • Security - Gathering as much data as possible about how security is being handled, and the results of scanning, monitoring, logging, and authentication around security checkpoints.
  • Reporting - Leverage management, gateway, monitoring, testing and other technology to produce reports on how well governance benchmarks are being met. Allowing the technology to do the measurement and enforcement, as well as reporting of the numbers that can be aggregated into a single set of reports to understand the impact of governance efforts.
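To illustrate the testing building block above, here is a minimal sketch in Python of pairing plain business assertions with machine-defined checks that could be run against an API response in real time, or on a schedule. The assertions, field names, and function names are hypothetical, not tied to any specific testing product:

```python
# Each business assertion pairs a plain language statement with a
# machine-defined check against a parsed API response body.
ASSERTIONS = [
    ("Every order total must be a positive number",
     lambda body: body["total"] > 0),
    ("Every order must reference a known currency",
     lambda body: body["currency"] in {"USD", "EUR", "GBP"}),
]

def run_assertions(response_body):
    """Evaluate each business assertion against a parsed API response,
    returning (statement, passed) pairs for governance reporting."""
    results = []
    for statement, check in ASSERTIONS:
        try:
            passed = bool(check(response_body))
        except (KeyError, TypeError):
            passed = False  # a missing or malformed field counts as a failure
        results.append((statement, passed))
    return results
```

In practice a scheduler would fetch each API's JSON body, run it through a function like this, and feed the pass / fail pairs into the reporting layer described above.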

When making decisions on what technology to use as part of the delivery of API infrastructure, its role in the wider governance strategy should be considered. Having services and tooling inline that can help execute and report upon governance efforts is an important aspect of being able to move a governance program forward across the enterprise. Built-in governance is much more likely to be leveraged than externally mandated tracking and reporting.

Challenges

Every organization we talked to shared their frustrations and stories around the challenges they’ve faced. Many of these have been shared as part of specific sections above, but we wanted to focus on the challenges with actually implementing governance itself, and look at some of the solutions groups have found for working around challenges and roadblocks.

  • Co-Creation - Isolated API governance organizations stayed isolated, while groups who co-created the strategy with other teams, and worked to share ownership over the strategy, execution, and road map, had much better success meeting their objectives.
  • Buy-In - Getting buy-in from teams is difficult, and many have spoken of the challenges of getting buy-in to the need for a centralized, or even distributed, governance approach. It is difficult to move forward when you don’t have the buy-in of some groups or management.
  • Standards - While many agreed standards were good, actually getting people on the ground to adopt, use, and realize the benefits of standards across all stops along the API lifecycle has proven elusive.
  • Artifacts - There just wasn’t agreement that definitive governance artifacts like guides, prototypes, and other common solutions were necessary. Some teams simply disregard these artifacts, questioning the investment of time and energy to create them, while others felt strongly that they were necessary to lead by example.
  • Difficult Process - Over and over, API teams expressed that governance, and pushing for consistency across the API lifecycle, was a difficult process. It sounds easy when you plan the strategy, but actually doing it on the ground never works out as you envision.
  • Refine Process - You will be constantly refining your governance strategy, adjusting, removing, tweaking, and shifting the approach until you find solutions to incremental aspects of delivering APIs.
  • Takes Time - All of this will take time. Be patient. Play the long game. Understand it will take much longer than you expected to see the change you envision.

There will be more challenges along the way than there will be wins, when it comes to governance of vast, complex API infrastructure. Challenges, roadblocks, and friction will exist at all stages of standardizing how APIs are delivered across the enterprise. Dealing with failure, and recognizing challenges and the potential for them to be a roadblock is important to being able to keep moving forward at any pace.

The Road To API Governance

There has been a significant uptick in the number of companies, organizations, institutions, and government agencies doing APIs since 2010, to meet the demands of web, mobile, and device applications. A very small percentage of these entities have any sort of formal governance strategy in motion to address how APIs will be delivered across their organizations. Most API providers are living in the moment, realizing they need to be addressing governance, but struggling to overcome a handful of common roadblocks.

  • People - A lack of awareness, training, and communication amongst stakeholders is the biggest challenge API governance efforts face. Do not underestimate the people side when crafting a technology focused effort, otherwise the people variable will be what brings it down.
  • Culture - Plan for how governance will address the culture within an organization. This is where the studies, outreach, workshops, and planning will come into play. Plan for everything taking 5 to 10 times longer than you anticipate because of the thickness, and resistance, of organizational culture.
  • Problems - Count on problems coming up everywhere. Dedicate a significant amount of time and resources to identifying, thinking through, and addressing problems that come up. Do not let problems fester, go ignored, or remain unaddressed.
  • Existing - Map API governance efforts to existing realities. Yes, the objective is to move the delivery of APIs to a specific destination, but the strategy needs to be rooted in what exists today, building a bridge to where we want to be.

Not all organizations will be ready for capital “G” governance, and many will have to accept inline, ongoing, lower case “g” governance. Doing what they can, with what resources they have, evangelizing, building community, and consensus along the way. While an organized, centralized, well funded governance program is ideal, and can achieve a lot, a significant amount can be done with a scrappier approach, until more traction and resources are achieved.

In Conclusion

This report pulls together several years of research, combined with a handful of interviews with API professionals who are pushing forward the API governance conversation at their enterprise organizations. It acknowledges that the discipline of API governance is more discussion, than it is a formal discipline as of 2018. There are many ways in which API providers are governing their APIs, but few that have a formalized API governance strategy and program, and even fewer that are sharing their strategy, or lack of one in a public manner.

The objective with this report is to pull together as much information regarding how organizations are governing their APIs, and assemble the findings in the following logical order, reflecting how an organization might approach governance on the ground:

  • Roles Within An Organization - Who is needed to make this happen?
  • Design Software Architecture - Laying the foundation for governance.
  • Identifying Potential APIs - Defining the right resources to expose.
  • Defining Data Models & Standards - Working to standardize how things are done.
  • Development to Production - Moving from idea to reality in a standard way.
  • Making APIs Available to Consumers - Exposing resources properly to consumers.
  • Realizing API Governance - Moving towards a structured vision of governance.
  • The Road To API Governance - Acknowledging governance is more vision than reality.

Not every detail in this report will apply to the VA, or any other enterprise organization looking to establish a wider API governance strategy. It is meant to be educational, enlightening, and show the scope of how enterprise groups are addressing governance. Allowing enterprise API efforts to learn from each other, and hopefully even share more stories regarding the challenges they face, and the success they are finding, no matter how small.

Hopefully this report reflects a patchwork of things that should be considered, rather than a complete list of what has to be done. There is no such thing as the perfect governance strategy for any API program. There is, however, a great deal that can be done when you have the right team, the right amount of enthusiasm, and a positive outlook on what governance means. Address early on the negative perceptions that will exist out there about governance: that it is something that comes from the top, and that it does not give regular people at the front lines a voice in the process. This is a myth; it doesn’t have to be the reality.

A definition for governance from the Oxford English Dictionary is, “the way in which an organization is managed at the highest level, and the systems for doing this”. Don’t mistake the highest level as being about the highest levels of management; let it be about the highest levels of strategy across the organization. It is the system for governing a complex machine of API driven gears that make systems and applications work across the enterprise. It is the governance of a machine that has the potential to allow every individual within the enterprise to play an important role in influencing it, allowing everyone to contribute, even if they do not work in a technical capacity within the enterprise machine.


Using Jekyll To Look At Microservice API Definitions In Aggregate

I’ve been evaluating microservices at scale, using their OpenAPI definitions to provide API design guidance to each team using Github. I just finished a round of work where I took 20 microservices, evaluated each OpenAPI definition individually, and made design suggestions via an updated OpenAPI definition that I submitted as a Github issue. I was going to submit them as a pull request, but I really want the API design guidance to be suggestions, as well as a learning experience, so I didn’t want teams to feel like they had to merge all my suggestions.

After going through all 20 OpenAPI definitions, I took all of them and dumped them into a single local repository where I was running Jekyll via localhost. Then using Liquid I created a handful of web pages for looking at the APIs across all microservices in a single page–publishing two separate reports:

  • API Paths - An alphabetical list of the APIs for all 20 services, listing the paths, verbs, parameters, and headers in a single bulleted list for evaluation.
  • API Schema - An alphabetical list of the APIs for all 20 services, listing the schema definitions, properties, descriptions, and types in a single bulleted list for evaluation.

While each service lives in its own Github repository, and operates independently of each other, I wanted to see them all together, to be able to evaluate their purpose and design patterns from the 100K level. I wanted to see if there was any redundancy in purpose, and if any services overlapped in functionality, as well as trying to see the potential when all API resources were laid out on a single page. It can be hard to see the forest for the trees when working with microservices, and publishing everything as a single document helps flatten things out, and is the equivalent of climbing to the top of the local mountain to get a better view of things.

Jekyll makes this process pretty easy by dumping all of the OpenAPI definitions into a single collection, which I can then navigate easily using Liquid on a single HTML page, providing a comprehensive report of the functionality across all microservices. I am calling this my microservices monolith report, as it kind of reassembles all the individual microservices into a single view, letting me see the collective functionality available across the project. I am currently evaluating these services from an API governance perspective, but at a later date I will take a look from the API integration perspective, and try to understand if the services will work in concert to help deliver on a specific application objective.
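The Liquid approach described above can be sketched roughly like this, assuming each service's OpenAPI YAML has been dumped into a Jekyll collection named `apis` (the collection name and front matter layout are my assumptions, not a fixed convention):

```liquid
{% comment %} One report of every path and verb across all services. {% endcomment %}
{% for api in site.apis %}
  <h2>{{ api.info.title }}</h2>
  <ul>
    {% for path in api.paths %}
      {% for verb in path[1] %}
        <li><strong>{{ verb[0] | upcase }}</strong> {{ path[0] }} - {{ verb[1].summary }}</li>
      {% endfor %}
    {% endfor %}
  </ul>
{% endfor %}
```

Liquid iterates hashes as key / value pairs, which is why `path[0]` holds the path string and `path[1]` the verbs beneath it; the same pattern works for a schema report over `api.components.schemas`.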


Github Is Your Asynchronous Microservice Showroom

When you spend time looking at a lot of microservices, across many different organizations, you really begin to get a feel for the ones who have owners / stewards that are thinking about the bigger picture. When people are just focused on what the service does, and not actually how the service will be used, the Github repos tend to be cryptic, out of sync, and don’t really tell a story about what is happening. Github is often just seen as a vehicle for the code to participate in a pipeline, and not about speaking to the rest of the humans and systems involved in the overall microservices concert that is occurring.

Github Is Your Showroom Each microservice is self-contained within a Github repository, making it the showcase for the service. Remember, the service isn’t just the code and other artifacts buried away in folders that nobody will understand unless they know how to operate the service or continuously deploy the code. It is a service. The service is part of a larger suite of services, and is meant to be understood and reusable by other human beings in the future, potentially long after you are gone, and aren’t present to give a 15 minute presentation in a meeting. Github is your asynchronous microservices showroom, where ANYONE should be able to land, and understand what is happening.

README Is Your Menu The README is the centerpiece of your showroom. ANYONE should be able to land on the README for your service, and immediately get up to speed on what a service does, and where it is in its lifecycle. README should not be written for other developers, it should be written for other humans. It should have a title, and short, concise, plain language description of what a service does, as well as any other relevant details about what a service delivers. The README for each service should be a snapshot of a service at any point in time, demonstrating what it delivers currently, what the road map contains, and the entire history of a service throughout its lifecycle. Every artifact, documentation, and relevant element of a service should be available, and linked to via the README for a service.
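As a sketch of what that looks like in practice, a service README might open with something like the following; all names and paths here are hypothetical:

```markdown
# Facility Lookup Service

Looks up facilities by location, returning addresses, services offered,
and hours of operation. Currently in production, version 1.2.

- **API Contract:** [openapi.yaml](openapi.yaml) (OpenAPI 3.0)
- **Documentation:** [docs/](docs/)
- **Road Map:** see tagged Github Issues
- **Change Log:** [CHANGELOG.md](CHANGELOG.md)
```

The point is that a non-developer can read the first paragraph and understand the service, while every deeper artifact is one link away.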

An OpenAPI Contract Your OpenAPI (fka Swagger) file is the central contract for your service, and the latest version(s) should be linked to prominently from the README for your service. This JSON or YAML definition isn’t just some exhaust output from your code, it is the contract that defines the inputs and outputs of your service. This contract will be used to generate code, mocks, sample data, tests, documentation, and drive API governance and orchestration across not just your individual service, but potentially hundreds or thousands of services. An up to date, static representation of your service’s API contract should always be prominently featured off the README for a service, and ideally located in a consistent folder across services, as service architects, designers, coaches, and governance people will potentially be looking at many different OpenAPI definitions at any given moment.

Issues Driven Conversation Github Issues aren’t just for issues. They are a robust, taggable, conversational thread around each individual service. Github Issues should be the self-contained conversation that occurs throughout the lifecycle of a service, providing details of what has happened, what the current issues are, and what the future road map will hold. Github Issues are designed to help organize a robust amount of conversational threads, and allow for exploration and organization using tags, allowing any human to get up to speed quickly on what is going on. Github Issues should be an active journal for the work being done on a service, through a monologue by the service owner / steward, and act as the feedback loop with ALL OTHER stakeholders around a service. Github Issues for each service will be how the anthropologists decipher the work you did long after you are gone, and should articulate the entire life of each individual service.

Services Working In Concert Each microservice is a self-contained unit of value in a larger orchestra. Each microservice should do one thing, and do it well. The state of each service’s Github repository, README, OpenAPI contract, and the feedback loop around it will impact the overall production. While a service may be delivered to meet a specific application need in its early moments, the README, OpenAPI contract, and feedback loop should attract and speak to any potential future application. A service should be able to be reused, and remixed, by any application developer building internal, partner, or some day public applications. Not everyone landing on your README will have been in the meetings where you presented your service. Github is your service’s showroom, and where you will be making your first, and ongoing, impression on other developers, as well as executives who are poking around.

Leading Through API Governance Your Github repo, README, and OpenAPI contract is being used by overall microservices governance operations to understand how you are designing your services, crafting your schema, and delivering your service. Without an OpenAPI, and README, your service does nothing in the context of API governance, and feeding into the bigger picture, and helping define overall governance. Governance isn’t scripture coming off the mountain and telling you how to operate, it is gathered, extracted, and organized from existing best practices, and leadership across teams. Sure, we bring in outside leadership to help round off the governance guidance, but without a README, OpenAPI, and active feedback loop around each service, your service isn’t participating in the governance lifecycle. It is just an island, doing one thing, and nobody will ever know if it is doing it well.

Making Your Github Presence Count Hopefully this post helps you see your own microservice Github repository through an external lens. Hopefully it will help you shift from Github being just about code, for coders, to something that is part of a larger conversation. If you care about doing microservices well, and you care about the role your service will play in the larger application production, you should be investing in your Github repository being the showroom for your service. Remember, this is a service. Not in a technical sense, but in a business sense. Think about what good service means to you. Think about the services you use as a human being each day. How you like being treated, and how informed you like to be about the service details, pricing, and benefits. Now, visit the Github repositories for your services, and think about the people who will be consuming them in their applications. Does it meet your expectations for a quality level of service? Will someone brand new land on your repo and understand what your service does? Does your microservice do one thing, and does it do it well?


An OpenAPI Vendor Extension For Defining Your API Audience

The fashion e-commerce company Zalando has an interesting approach to classifying their APIs based upon who is consuming them. It isn’t just about APIs being published publicly or privately; they have actually standardized their definition, and have established an OpenAPI vendor extension, so that the classification is machine readable and available via their OpenAPI.

According to the Zalando API design guide, “each API must be classified with respect to the intended target audience supposed to consume the API, to facilitate differentiated standards on APIs for discoverability, changeability, quality of design and documentation, as well as permission granting. We differentiate the following API audience groups with clear organisational and legal boundaries.”

  • component-internal - The API consumers with this audience are restricted to applications of the same functional component (internal link). All services of a functional component are owned by specific dedicated owner and engineering team. Typical examples are APIs being used by internal helper and worker services or that support service operation.
  • business-unit-internal - The API consumers with this audience are restricted to applications of a specific product portfolio owned by the same business unit.
  • company-internal - The API consumers with this audience are restricted to applications owned by the business units of the same company (e.g. Zalando company with Zalando SE, Zalando Payments SE & Co. KG. etc.)
  • external-partner - The API consumers with this audience are restricted to applications of business partners of the company owning the API and the company itself.
  • external-public - APIs with this audience can be accessed by anyone with Internet access.

Note: a smaller audience group is intentionally included in the wider group and thus does not need to be declared additionally. The API audience is provided as API meta information in the info-block of the Open API specification and must conform to the following specification:

```yaml
#/info/x-audience:
  type: string
  x-extensible-enum:
    - component-internal
    - business-unit-internal
    - company-internal
    - external-partner
    - external-public
  description: |
    Intended target audience of the API. Relevant for standards around
    quality of design and documentation, reviews, discoverability,
    changeability, and permission granting.
```

Note: Exactly one audience per API specification is allowed. For this reason a smaller audience group is intentionally included in the wider group and thus does not need to be declared additionally. If parts of your API have a different target audience, we recommend to split API specifications along the target audience — even if this creates redundancies (rationale).

Here is an example of the OpenAPI vendor extension in action, as part of the info block:

```yaml
swagger: '2.0'
info:
  x-audience: company-internal
  title: Parcel Helper Service API
  description: API for <…>
  version: 1.2.4
```

This provides a pretty interesting way of establishing the scope and reach of each API, in a way that makes each API owner think deeply about who they are, or should be, targeting with the service. It is done in a way that makes the audience focus machine readable, and available as part of its OpenAPI definition, which can then be used across discovery, documentation, API governance, and security.

I like the multiple views of who the audience could be, going beyond just public and private APIs. I like that it is an OpenAPI vendor extension. I like that they even have a schema crafted for the vendor extension–another interesting concept I’d like to see more of. Overall, making for a pretty compelling approach to define the reach of our APIs, and quantifying the audience we are looking to reach with each API we publish.
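To show how the audience declaration could be made actionable, here is a minimal sketch of validating the extension on a parsed OpenAPI definition. The `check_audience` helper and the inline dictionary are hypothetical illustrations, not part of Zalando's tooling:

```python
# A minimal sketch of validating the x-audience vendor extension on a
# parsed OpenAPI definition. check_audience is a hypothetical helper,
# not part of Zalando's tooling.
ALLOWED_AUDIENCES = {
    "component-internal",
    "business-unit-internal",
    "company-internal",
    "external-partner",
    "external-public",
}

def check_audience(spec):
    """Return the declared audience, raising if it is missing or unknown."""
    audience = spec.get("info", {}).get("x-audience")
    if audience is None:
        raise ValueError("info.x-audience is required for every API")
    if audience not in ALLOWED_AUDIENCES:
        raise ValueError("unknown audience: " + audience)
    return audience

spec = {"info": {"x-audience": "company-internal", "title": "Parcel Helper Service API"}}
print(check_audience(spec))  # company-internal
```

A check like this is the kind of thing that could run in a CI pipeline, failing any API definition that does not declare exactly one audience.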


API As A Product Principles From Zalando

As I’m working through the API design guides from API leaders, looking for useful practices that I can include in my own API guidance, I’m finding electronic commerce company Zalando’s API design guide full of some pretty interesting advice. I wanted to showcase the section about their API as a product principles, which I think reflects what I hear many companies striving for when they do APIs.

From the Zalando API design guide principles:

Zalando is transforming from an online shop into an expansive fashion platform comprising a rich set of products following a Software as a Platform (SaaP) model for our business partners. As a company we want to deliver products to our (internal and external) customers which can be consumed like a service.

Platform products provide their functionality via (public) APIs; hence, the design of our APIs should be based on the API as a Product principle:

  • Treat your API as product and act like a product owner
  • Put yourself into the place of your customers; be an advocate for their needs
  • Emphasize simplicity, comprehensibility, and usability of APIs to make them irresistible for client engineers
  • Actively improve and maintain API consistency over the long term
  • Make use of customer feedback and provide service level support

RESTful API as a Product makes the difference between enterprise integration business and agile, innovative product service business built on a platform of APIs.

Based on your concrete customer use cases, you should carefully check the trade-offs of API design variants and avoid short-term server side implementation optimizations at the expense of unnecessary client side obligations and have a high attention on API quality and client developer experience.

API as a Product is closely related to our API First principle which is more focused on how we engineer high quality APIs.

Zalando provides a pretty coherent vision for how we all should be doing APIs. I like this guidance because it helps quantify something we hear a lot–APIs as a product. However, it also focuses in on what is expected of the product owners. It also gets at why companies should be doing APIs in the first place, talking about the benefits they bring to the table.

I’m enjoying the principles section of Zalando’s API design guide. It goes well beyond just API design, and reflects what I consider to be principles for wider API governance. Many companies are still considering this API design guidance, but I find that companies who are publishing these documents publicly are often maturing and moving beyond just thinking deeply about design–providing a wealth of other wisdom when it comes to doing APIs right.


An OpenAPI-Driven API Governance Rules Engine

Phil Sturgeon (@philsturgeon) alerted me to a pretty cool project he is cooking up called Speccy, which provides a rules engine for validating your OpenAPI definitions. “Taking off from where Mike Ralphson started with linting in swagger2openapi”, Speccy aims to become “the rubocop or eslint of OpenAPI”, and to “sniff your files for potentially bad things. ‘Bad’ is objective, but you’ll see validation errors, along with special rules for making your APIs better.” It helps make sure your API definitions are as consistent as they possibly can be, and deliver on your API governance strategy (you have one, right?).

With Speccy, there is a default set of rules, covering things like ensuring you have a summary or a description for each API path:

```json
{
  "name": "operation-summary-or-description",
  "object": "operation",
  "enabled": true,
  "description": "operation should have summary or description",
  "or": ["summary", "description"]
}
```

Or making sure you add descriptions to your parameters:

```json
{
  "name": "parameter-description",
  "object": "parameter",
  "enabled": true,
  "description": "parameter objects should have a description",
  "truthy": "description"
}
```

Or making sure you include tags for each API path:

```json
{
  "name": "operation-tags",
  "object": "operation",
  "enabled": true,
  "description": "operation should have non-empty tags array",
  "truthy": "tags",
  "skip": "isCallback"
}
```

Then you can get more strict by requiring contact information:

```json
{
  "name": "contact-properties",
  "object": "contact",
  "enabled": true,
  "description": "contact object should have name, url and email",
  "truthy": ["name", "url", "email"]
}
```

And make sure you have a license applied to your API:

```json
{
  "name": "license-url",
  "object": "license",
  "enabled": true,
  "description": "license object should include url",
  "truthy": "url"
}
```
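To illustrate how the `truthy` and `or` vocabulary in these rules behaves, here is a hypothetical evaluator. It is a sketch of the rule semantics only, not Speccy's actual implementation, and it ignores the `skip` property:

```python
# Hypothetical evaluator showing how the "truthy" and "or" rule
# properties behave. This is a sketch of the rule semantics, not
# Speccy's actual implementation, and it ignores the "skip" property.
def passes(rule, obj):
    """Return True if the given object satisfies the rule."""
    if not rule.get("enabled", True):
        return True  # disabled rules always pass
    if "truthy" in rule:
        keys = rule["truthy"]
        keys = [keys] if isinstance(keys, str) else keys
        return all(obj.get(key) for key in keys)  # every key must be non-empty
    if "or" in rule:
        return any(obj.get(key) for key in rule["or"])  # at least one key present
    return True

license_rule = {"name": "license-url", "object": "license", "enabled": True, "truthy": "url"}
print(passes(license_rule, {"name": "MIT", "url": "https://opensource.org/licenses/MIT"}))  # True
print(passes(license_rule, {"name": "MIT"}))  # False
```

Even this toy version shows why a shared rules vocabulary is powerful: the rules are just data, so they can be published, forked, and reused across teams.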

Speccy is available as a Node package, which you can easily run at the command line. Speccy is definitely what is needed out there right now, helping us validate the growing number of OpenAPI definitions in our lives. As many companies think about how they can apply API governance across their operations, they should be looking at contributing to Speccy. It is something I’ve been talking with API service providers about for some time, but I haven’t seen an open source answer emerge that can help us develop rules for what we expect of our OpenAPI definitions.

My only feedback right now is that we need lots of people using it, and helping contribute rules. Oh, and wrap it in an API, and make it available as an easy to use and deploy containerized microservice. Then let’s get to work on the Github Gist driven marketplace of rules, where I can publish the rules I develop across the projects I’m working on, and for the clients I consult with. Let’s get to work making sure there is a wealth of rules, broken down into different categories for API providers to choose from. Then let’s get API tooling and service providers to begin baking a Speccy rules engine into their solutions, and allow for the import and management of open source rules.

Speccy only works with OpenAPI 3.0, which makes sense if we are going to be moving forward with this conversation. Speccy is how we will validate that banking APIs are PSD2 compliant. It is how we will ensure healthcare APIs support the FHIR specification. I have other suggestions for the CLI and API usage of Speccy, but I’d rather see investment in the available rules before I make too many functional suggestions. I think the rules are where we will begin to define what we are looking for in an OpenAPI rules engine, and that should drive the Speccy features which end up on the road map.


Three Areas I Would Like To Cover When We Sit Down For An API Consulting Session

I’m putting together some presentations for a handful of upcoming engagements, where I’m wanting to help my audience understand what an initial engagement will look like. While I am looking to have just a handful of bullets that can live on a single, or handful of slides, I also want a richer narrative to go along with it. To achieve this I rely on my blog, which helps me work my way through the details of what I do, and distill things down into something that I can deliver on the ground within the companies, organizations, institutions, and government agencies I am conducting business with.

When I am sitting down with a new audience, and working to help them understand how I can help them begin, jumpstart, revive, and move forward with their API journey, I’m usually breaking things into three main areas:

  • Landscape Mapping - Establish a map of what currently is within an organization.
    • Internal Resources - What existing web services, APIs, teams, and resources exist?
    • External Objectives - What are the external objectives of doing APIs?
  • Strategy Development - Craft a coherent strategy for moving forward with APIs.
    • API Lifecycle - Lay out a step by step list of stops along a modern API life cycle.
    • API Support - Identify how the strategy and operations will be supported within an organization.
    • API Evangelism - Consider how the message around API operations will spread internally, and externally.
  • Execution - Identify a clear set of next steps regarding how APIs will evolve.
    • Infrastructure - What services, tooling, and other API infrastructure is needed?
    • Resources - What resources have been identified for moving the API conversation forward?
    • Governance - What is the governance strategy for measuring, reporting upon, and enforcing the delivery of APIs across the API lifecycle presented?

When I present to a new group of people within an organization, this is the outline I am looking to flesh out. I have to understand what is already occurring (or not) on the ground, which is why I need the landscape map. Then, borrowing from my existing API research, I can help develop a detailed strategy, which includes the critical elements of how we will be supporting and evangelizing the effort–without which, API efforts will always struggle. After that, I want to quickly get to work on how we will be executing on this vision, even if it just involves more investment in the landscape map, and overall strategy.

I am working on more detailed materials to hand out prior to, and at the time I sit down with, new clients, but I wanted to articulate in a single page, using a simple set of bullets, what I am looking to accomplish with any new consulting relationship. With a map in hand, and a strategy in mind, I’m confident that I can help the folks I talk with move forward with their API journey in a more meaningful way. Not everyone I talk with is confident in doing this on their own, but with a little assistance, I’m pretty sure they will be able to get to work defining what the API journey will look like for their organization.


A Summary Of Kong As An API Management Solution

I was breaking down what the API management solution Kong delivers for a customer of mine, and I figured I’d take what I shared via the team portal, and publish here on the blog. It is an easy way for me to create content, and make my consulting work more transparent here on the blog. I am using Kong as part of several healthcare and financial projects currently, and I am actively employing it to ensure customers are properly managing their APIs. I wasn’t the decision maker on any of these projects when it came to choosing the API management layer, I am just the person who is helping standardize how they are using API services and tooling across the API life cycle for these projects.

First, Kong is an open source API management solution with an easy to install community edition, and enterprise level support when needed. They provide an admin interface and developer portal for the API management proxy, but there is also a growing number of community tools like KongDash and Konga emerging, making it a much richer ecosystem. And of course, Kong has an API for managing the API management layer, as every API service and tooling provider should.

Now, let’s talk about what Kong does for helping in the deploying of your APIs:

  • API Routing - The API object describes an API that’s being exposed by Kong. Kong needs to know how to retrieve the API when a consumer is calling it from the Proxy port. Each API object must specify some combination of hosts, uris, and methods.
  • Consumers - The Consumer object represents a consumer - or a user - of an API. You can either rely on Kong as the primary datastore, or you can map the consumer list with your database to keep consistency between Kong and your existing primary datastore.
  • Certificates - A certificate object represents a public certificate/private key pair for an SSL certificate.
  • Server Name Indication (SNI) - An SNI object represents a many-to-one mapping of hostnames to a certificate.
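All of these objects are managed through Kong's Admin API. As a rough sketch, here is how a request to register an API object could be constructed, assuming the `/apis/` endpoint from Kong 0.x on the default admin port 8001; `register_api_request` is an illustrative helper, and nothing is sent over the wire:

```python
# Sketch of building a request against Kong's Admin API (default port
# 8001) to register an API object. The /apis/ endpoint follows Kong 0.x
# and may differ in later versions; nothing is sent over the wire here.
from urllib.parse import urlencode

def register_api_request(name, hosts, upstream_url, admin="http://localhost:8001"):
    """Return the (url, form_body) pair for a POST to Kong's /apis/ endpoint."""
    body = urlencode({
        "name": name,
        "hosts": ",".join(hosts),
        "upstream_url": upstream_url,
    })
    return admin + "/apis/", body

url, body = register_api_request("parcel-api", ["api.example.com"], "http://internal:8080")
print(url)  # http://localhost:8001/apis/
```

In practice you would POST this body with any HTTP client, or just use curl; the point is that everything Kong does is drivable through its own API.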

Then it focuses on the core aspects of what is needed to help manage your APIs:

  • Authentication - Protect your services with an authentication layer.
  • Traffic Control - Manage, throttle, and restrict inbound and outbound API traffic.
  • Analytics - Visualize, inspect, and monitor APIs and microservice traffic.
  • Transformations - Transform requests and responses on the fly.
  • Logging - Stream request and response data to logging solutions.

After that, it has a bunch of added features to help make it a scalable, evolvable solution:

  • DNS-based loadbalancing - When using DNS based load balancing the registration of the backend services is done outside of Kong, and Kong only receives updates from the DNS server.
  • Ring-balancer - When using the ring-balancer, the adding and removing of backend services will be handled by Kong, and no DNS updates will be necessary.
  • Clustering - A Kong cluster allows you to scale the system horizontally by adding more machines to handle more incoming requests. They will all share the same configuration since they point to the same database. Kong nodes pointing to the same datastore will be part of the same Kong cluster.
  • Plugins - lua-nginx-module enables Lua scripting capabilities in Nginx. Instead of compiling Nginx with this module, Kong is distributed along with OpenResty, which already includes lua-nginx-module. OpenResty is not a fork of Nginx, but a bundle of modules extending its capabilities.
  • API - Administrative API access for programmatic control.
  • CLI Reference - The provided CLI (Command Line Interface) allows you to start, stop, and manage your Kong instances. The CLI manages your local node (as in, on the current machine).
  • Serverless - Invoke serverless functions via APIs.

There are a number of API management solutions available out there today. I will profile each one I am actively using as part of my work on the ground. I’m agnostic towards which provider my clients should use, but I like having the details about what features they bring to the table readily available via a single URL, so that I can share when these conversations come up. I have many API management solutions profiled as part of my API management research, but in 2018 there are just a handful of clear leaders in the game. I’ll be focusing on the ones who are still actively investing in the API community, and the ones I have an existing relationship with in a partnership capacity. Streamdata.io is a reseller of Kong in France, making it something I’m actively working with in the financial space, and also something I’m using within the federal government, also bringing it front and center for me in the United States.

If you have more questions about Kong, or any other API management solution, feel free to reach out, and I’ll do my best to answer any questions. We are also working to provide more API life cycle, strategy, and governance services along with my government API partners at Skylight, and through my mainstream API partners at Streamdata.io. If you need help understanding the landscape and where API management solutions like Kong fits in, me and my partners are happy to help out–just let us know.


Streaming And Event-Driven Architecture Represents Maturity In The API Journey

Working with Streamdata.io has forced a shift in how I see the API landscape. When I started working with their proxy I simply saw it as doing APIs in real time. I was hesitant because not every API has real time needs, so I viewed what they do as just a single tool in my API toolbox. While Server-Sent Events, and proxying JSON APIs, is just one tool in my toolbox, like the rest of the tools it forces me to think through what an API does, and understand where it exists in the landscape, and where the API provider exists in their API journey. It is something I’m hoping API providers are also doing, but I enjoy doing it from the outside-in as well.

Taking any data, content, media, or algorithm and exposing it as an API is a journey. It is about understanding what that resource is, what it does, and what it means to the provider and the consumer. What this looks like on day one will be different from what it looks like on day 365 (hopefully). If done right, you are engaging with consumers, and evolving your definition of the resource, and of what is possible when you apply it programmatically through the interfaces you provide. API providers who do this right are leveraging the feedback loops in place with consumers, iterating on their APIs as well as the resources they provide access to, and improving upon them.

Just doing simple web APIs puts you on this journey. As you evolve along this road you will begin to also apply other tools. You might have the need for webhooks to start responding to meaningful events that are beginning to emerge across the API landscape, and start doing the work of defining your event-driven architecture, developing lists of most meaningful topics, and events that are occurring across your evolving API platform. Webhooks provide direct value by pushing data and content to your API consumers, but they have indirect value in helping you define the event structure across your very request and response driven resource landscape. Look at Github webhook events, or Slack webhook events to understand what I mean.
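As a concrete illustration of the mechanics behind webhooks, here is a minimal sketch of signing an outbound event payload with a shared secret, similar in spirit to the signature headers Github sends with its webhook events. The function name and the example payload are hypothetical:

```python
# Minimal sketch of signing an outbound webhook payload with a shared
# secret, similar in spirit to the signature headers Github sends with
# its webhook events. The function name and payload are illustrative.
import hashlib
import hmac
import json

def sign_webhook(secret, payload):
    """Serialize the event payload and compute an HMAC-SHA256 signature."""
    body = json.dumps(payload, sort_keys=True)
    signature = hmac.new(secret, body.encode("utf-8"), hashlib.sha256).hexdigest()
    return body, signature

body, sig = sign_webhook(b"shared-secret", {"event": "order.shipped", "id": 123})
print(len(sig))  # 64 hex characters for SHA-256
```

The consumer recomputes the same HMAC over the body it received and compares signatures, which is what makes a pushed event trustworthy.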

API platforms that have had webhooks in operation for some time have matured significantly towards an event-driven architecture. Streaming APIs aren’t simply a boolean thing, where you either have data that needs to be streamed or you don’t. That is the easy, lazy way of thinking about things. Server-Sent Events (SSE) isn’t just something you need, or you don’t. It is something that you are ready for, or you aren’t. Like webhooks, I’m seeing Server-Sent Events (SSE) as having the direct benefit of delivering data and content as it is updated, to the browser or for other server uses. However, I’m beginning to see the other indirect benefits of SSE, and how it helps define the real time nature of a platform–what is real time? It also helps you think through the size, scope, and efficiency surrounding the use of APIs for making data, content, and algorithms available via the web, helping us think through how and when we are delivering the bits and bytes we need to get business done.
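Part of what makes SSE approachable is how simple the wire format is. Here is a minimal sketch of formatting a single `text/event-stream` message; the field names (`id`, `event`, `data`) come from the SSE specification, while the helper and the payload are made up for illustration:

```python
# Minimal sketch of formatting one message in the text/event-stream
# format that Server-Sent Events use. Field names (id, event, data)
# come from the SSE specification; the helper itself is illustrative.
def sse_event(data, event=None, event_id=None):
    """Format a single SSE message, splitting multi-line data correctly."""
    lines = []
    if event_id is not None:
        lines.append("id: " + event_id)
    if event is not None:
        lines.append("event: " + event)
    lines.extend("data: " + chunk for chunk in data.splitlines())
    return "\n".join(lines) + "\n\n"  # a blank line terminates the event

print(sse_event('{"price": 42.1}', event="update", event_id="7"))
```

Each message is just text fields separated by newlines, with a blank line marking the end of the event, which is exactly why a proxy can layer SSE on top of an existing JSON API.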

I’m learning a lot by applying Streamdata.io to simple JSON APIs. It is adding another dimension to the API design, deployment, and management process for me. There has always been an evolutionary aspect to doing APIs for me. This is why you hear me call it the API journey on a regular basis. However, now that I’m studying event-driven architecture, and thinking about how tools like webhooks and SSE assist us in this journey, I’m seeing an entirely new maturity layer for this API journey emerge. It goes beyond just getting to know our resources as part of the API design and deployment process. It builds upon API management and monitoring, and helps us think through how our APIs are being consumed, and what the most meaningful and valuable events are, helping us think through how we deliver data and content over the web in a more precise manner. It is something that not every API provider will understand right away, and only those a little further along in their journey will be able to take advantage of. The question is, how do we help others see the benefits, and want to do the hard work to get further along in their own API journey?


You Have to Know Where All Your APIs Are Before You Can Deliver On API Governance

I wrote an earlier article explaining that basic API design guidelines are your first step towards API governance, but I wanted to introduce another first step you should be taking even before basic API design guides–cataloging all of your APIs. I’m regularly surprised by the number of companies I’m talking with who don’t even know where all of their APIs are. Sometimes, but not always, there is some sort of API directory or catalog in place, but oftentimes it is out of date, and people just aren’t registering their APIs, or following any common approach to delivering APIs within an organization–hence the need for API governance.

My recommendation is that even before you start thinking about what your governance will look like, or even mention the word to anyone, you take inventory of what is already happening. Develop an org chart, and begin having conversations. Identify EVERYONE who is developing APIs, and start tracking how they are doing what they do. Sure, you want to get an inventory of all the APIs each individual or team is developing or operating, but you should also be documenting all the tooling, services, and processes they employ as part of their workflow. Ideally, there is some sort of continuous deployment workflow in place, but this isn’t a reality in many of the organizations I work with, so mapping out how things get done is often the first order of business.

One of the biggest failures of API governance I see is that the strategy has no plan for how we get from where we are to where we want to be; it simply focuses on where we want to be. This type of approach contributes significantly to pissing people off right out of the gate, making API governance a lot more difficult. Stop focusing on where you want to be for a moment, and focus on where you are. Build a map of where people are: the tools, services, skills, and best and worst practices. Develop a comprehensive map of where the organization is today, and then sit down with all stakeholders to evaluate what can be improved upon and streamlined, beginning the hard work of building a bridge between your existing teams and what might end up being a future API governance strategy.

API design is definitely the first logical step of your API governance strategy, standardizing how you design your APIs, but this shouldn’t be developed from the outside-in. It should be developed from what already exists within your organization, and then begin mapping to healthy API design practices from across the industry. Make sure you are involving everyone you’ve reached out to as part of your inventory of APIs, tools, services, and people. Make sure they have a voice in crafting that first draft of API design guidelines you bring to the table. Without buy-in from everyone involved, you are going to have a much harder time ever reaching the point where you can call what you are doing governance, let alone seeing the results you desire across your API operations.


Basic API Design Guidelines Are Your First Step Towards API Governance

I am working with a group that has begun defining their API governance strategy. We’ve discussed a full spectrum of API lifecycle capabilities that need to be integrated into their development practices and CI/CD workflow, as well as eventually their API governance documentation. However, they are just getting going with the concept of API governance, and I want to make sure they don’t get ahead of themselves and start piling too much into their API governance documentation before they can get buy-in and participation from other groups.

We are approaching the first draft of an API governance document for the organization, and while it has lofty aspirations, the first draft is really nothing more than some basic API design guidelines. It is basically a two-page document that explains why REST is good, provides guidance on naming paths, using your verbs, and a handful of other API design practices. While I have a much longer list of items I want to see added to the document, I feel it is much more important to get the basic first draft up, circulated amongst groups, and establishing feedback loops, than making sure the API governance document is comprehensive. Without buy-in from all groups, any API governance strategy will be ignored, and ultimately suffocated by teams who feel like they don’t have any ownership in the process.

I am lobbying that the API governance strategy be versioned and evolved much like any other artifact, code, or documentation applied across API operations. This is v1 of the API governance strategy, and before we can iterate towards v2, we need to get feedback, accept issues and comments, and allow for pull requests on the strategy before it moves forward. It is critical that ALL teams feel like they have been part of the conversation from day one, otherwise the strategy will be weakened, and any team looking to implement, coach, advise, report on, and enforce it will be hobbled. API governance advocates always wish for things to move forward at a faster speed, but the reality within large organizations will require more consensus, or at least involvement, which will move forward at a variety of speeds depending on the size of the organization.

This process has been a reminder for me, and hopefully for my readers who are looking to get started on their API governance strategy. Always start small. Get your first draft up. Start with the basics of how you define and design your APIs, and then begin to flesh out the finer details of design, deployment, management, testing, and the other stops along your lifecycle. Just get your basic version documentation and guidance published. Maybe even consider calling it something other than governance from day one. Come up with a much more friendly name, that might not turn your various teams off, and then once it matures you can call it what it is, after everyone is participating, and has buy-in regarding the overall API governance strategy for your platform.


From CI/CD To A Continuous Everything (CE) Workflow

I am evaluating an existing continuous integration and deployment workflow to make recommendations regarding how it can evolve to service a growing API lifecycle. This is an area of my research that spans multiple areas of my work, but that I tend to house under what I call API orchestration. I try to always step back and look at an evolving area of the tech space as part of the big picture, and attempt to look beyond any individual company, or even the wider industry hype that is moving something forward. I see the clear technical benefits of CI/CD, and I see the business benefits of it as well, but I haven’t always been convinced of it as a standalone thing, and have spent the last couple of years trying to understand how it fits into the bigger picture.

As I’ve been consulting with several enterprise groups working to adopt a CI/CD mindset, and having similar conversations with government agencies, I’m beginning to see the bigger picture of “continuous”, and starting to decouple it from just deployment and even integration. The first thing that is assumed, not always evident for newbies, but always a default, is testing. You always test before you integrate or deploy, right? As I watch groups adopt CI/CD, I see them struggle to make sure the other things I feel are an obvious part of the API lifecycle, but aren’t defaults in a CI/CD mindset, quickly get plugged in: things like security, licensing, documentation, discovery, support, and communications. In the end, I think us technologists are good at focusing on the tech innovations, but often move right past many of the other things that are essential for the business world. I see this happening with containers, microservices, Kubernetes, Kafka, and other fast moving trends.

I guess the point I want to make is that there is more to a pipeline than just deployment, integration, and testing. We need to make sure that documentation, discovery, security, and other essentials are baked in by default. Otherwise us techies might be continuously forgetting about these aspects, and the newbies might be continuously frustrated that these things aren’t present. We need to make sure we are continuously documenting, continuously securing, continuously communicating around training, and continuously evolving (and sharing) our road maps. I’m sure what I’m saying isn’t anything new for the CI/CD veterans, but I’m trying to onboard new folks with the concept, and as with most areas of the tech sector I find the naming and on-boarding materials fairly deficient when it comes to all the concepts large organizations need to make the shift.
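The "continuous everything" idea could be sketched as pipeline stages. This is hypothetical pseudo-configuration to illustrate the shape of such a pipeline, not the syntax of any specific CI/CD system:

```yaml
# Hypothetical "continuous everything" pipeline stages. This is
# illustrative pseudo-configuration, not any specific CI system's syntax.
stages:
  - test          # unit, integration, and contract tests
  - secure        # dependency and API security scanning
  - document      # regenerate docs from the API definition
  - discover      # register the service in the API catalog
  - deploy        # ship the service
  - communicate   # notify consumers and update the road map
```

The point isn't the tooling; it is that documentation, discovery, security, and communication become first-class stages rather than afterthoughts.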

I’m thinking I’m going to be merging my API orchestration (CI/CD) research with my overall API lifecycle research, thinking deeply about how everything from definition to deprecation fits into the pipeline. I feel like CI/CD has been highly focused on the technology of evolving how we deploy and integrate (rightfully so) for some time now, and with adoption expanding we need to zoom out and think about everything else organizations will need to be successful. I see CI/CD as being essential to decoupling the monolith, and changing culture at some of the large organizations I’m working with. I want these folks to be successful, and not fall into the trappings of only thinking about the tech, but also consider the business and political implications involved with being able to move from annual or quarterly deployments and integrations, to where they can do things in weeks, or even days.


API Deployment Templates As Part Of A Wider API Governance Strategy

People have been asking me for more stories on API governance. Examples of how it is working, or not working, at the companies, organizations, institutions, and government agencies I’m talking with. Some folks are looking for top down ways of controlling large teams of developers when it comes to delivering APIs consistently across large disparate organizations, while others are looking for bottom up ways to educate and incentivize developers to operate APIs in sync, working together as a large, distributed engine.

I’m approaching my research into API governance as I would any other area, not from the bottom up, or top down. I’m just assembling all the building blocks I come across, then beginning to assemble them into a coherent picture of what is working, and what is not. One example I’ve found of an approach to helping API providers across the federal government better implement consistent API patterns is out of the General Services Administration (GSA), with the Prototype City Pairs API. The Github repository is a working API prototype, documentation, and developer portal that is in alignment with the GSA API design guidelines, providing a working example that other API developers can reverse engineer.

The Prototype City Pairs API is a forkable example of what you want developers to emulate in their work. It is a tool in the GSA’s API governance toolbox. It demonstrates what developers should be working towards in not just their API design, but also the supporting portal and documentation. The GSA leads by example, providing a pretty compelling approach to model, and a building block any API provider could add to their toolbox. I would consider a working prototype to be both a bottom up approach, because it is forkable and usable, and a top down approach, because it can reflect wider organizational API governance objectives.

I could see mature API governance operations having multiple API design and deployment templates like the GSA has done, providing a suite of forkable, reusable API templates that developers can put to use. While not all developers would use them, in my experience many teams are actually made up of reverse engineers, who tend to emulate what they know. If they are exposed to bad API design, they tend to just emulate that, but if they are given robust, well-defined examples, they will emulate healthy patterns instead. I’m adding API deployment templates to my API governance research, and will keep rounding off strategies for successful API governance that can work at a wide variety of organizations and platforms. As it stands, there are not very many examples out there, and I’m hoping to pull together any of the pieces I can find into a coherent set of approaches folks can choose from when crafting their own.


Narrowing In On My API Governance Strategy Using API Transit To Map Out PSD2

I’m still kicking around my API Transit strategy in my head, trying to find a path forward with applying it to API governance. I started moving it forward a couple years ago as a way to map out the API lifecycle, but in my experience, managing APIs is rarely a linear lifecycle. I have been captivated by the potential of the subway map to help us map out, understand, and navigate complex infrastructure since I learned about Harry Beck’s approach to the London Tube map, which has become the standard for quantifying transit around the globe.

I am borrowing from Beck’s work, but augmenting it for a digital world, trying to map out the API practices I study in my research of the space in a way that allows them to be explored, but also implemented, measured, and reported upon by all stakeholders involved with API operations. While I’m still pushing forward this concept in the safe space of my own API projects, I’m beginning to dabble with applying it at the industry level, starting with PSD2 banking, and seeing if I can’t provide an interactive map that helps folks see, understand, and navigate what is going on when it comes to banking APIs.

An API Transit map for PSD2 would build upon the framework I have derived from my API research, applied specifically to quantifying the PSD2 world. Each of the areas of my research would be broken down into separate subway lines, that can be plotted along the map with relative stops along the way:

  • Definition - Which definitions are used? Where are the OpenAPI, schema, and other relevant patterns?
  • Design - What design patterns are in play across the API definitions, and what is the meaning behind the design of all APIs?
  • Deployment - What does deployment look like on-premise, in the cloud, and from region to region?
  • Portals - What is the minimum viable standard for an API portal presence, with supporting building blocks?
  • Management - Quantify the standard approaches to managing APIs, from on-boarding to analysis and reporting.
  • Plans - How are access tiers and plans defined, providing 3rd party access to APIs, including that of aggregators and application developers?
  • Monitoring - What does monitoring of web APIs look like, and how is data aggregated and shared?
  • Testing - What does testing of web APIs look like, and how is data aggregated and shared?
  • Performance - What does performance evaluation of web APIs look like, and how is data aggregated and shared?
  • Security - What are the security practices in place for the entire API stack?
  • Breaches - When there is a breach, what is the protocol and practices surrounding what should happen, and where is the historical data?
  • Terms of Service - What do terms of service across many APIs look like?
  • Privacy Policy - How is privacy protected across API operations?
  • Support - What are all the expected support channels, and where are they located?
  • Road Map - What is expected, and where do we find the road map and change log for the platform?

These are just a handful of the lines I will be laying out as part of my subway map. I have others I want to add, but this provides a nice version of what I’d like to see as an API Transit map of the PSD2 universe. Each line would have numerous stops that would provide resources, and potentially tooling, to help educate, quantify, and walk people through each of these areas in detail, but in the context of PSD2, and the banking industry. This is where I’m beginning to push the subway map context further, to help make it work in a virtualized world, augmenting it with some concepts I hope will add new dimensions to how we understand, and navigate, our digital worlds, but using the subway map as a skeuomorph.

To help make the PSD2 landscape I’m mapping out more valuable, I am playing with adding a “tour” layer, which allows me to craft tours that cover specific lines, hit only the stops that matter, bridge multiple lines, and create a meaningful tour for a specific audience. Here are a handful of the tours I’m planning for PSD2:

  • Introduction - A simple introduction to the concepts at play when it comes to the PSD2 landscape.
  • Provider Training - A detailed training walk-through for anyone looking to provide a PSD2 compliant platform.
  • Provider Certification - A detailed walkthrough that gathers information and detail to map out, quantify, and assess a specific PSD2 API / platform.
  • Executive - A robust walk-through of the concepts at play for an executive from the 100K view, as well as those of their own company’s PSD2 certified API, and possibly those of competitors.
  • Regulator - A comprehensive walk through the entire landscape, including what is required, as well as the certification of individual PSD2 API platforms, with a real-time control dashboard.

These are just a few of the areas I’m looking to provide tours through this quantified PSD2 API Transit landscape. I am using Github to deploy and evolve my maps, which leverages Jekyll as a hypermedia client to deliver the API Transit experience. While each line of the API Transit map has its own hypermedia flow for storing and experiencing each stop along the line, the tours also have their own hypermedia flows, which can augment existing lines and stops, as well as inject their own text, images, audio, video, links, and tooling along the way.

The result will be a single URL which anyone can land on for the PSD2 API Transit platform. You can choose from any of the pre-crafted tours, or just begin exploring each line, getting off at only the stops that interest you. Some stops will be destinations, while others will provide transfers to other lines. I’m going to be investing some cycles into my PSD2 API Transit platform over the holidays. If you have any questions, comments, input, or would like to invest in my work, please let me know. I’m always looking for feedback, as well as interested parties to help fund my work and ensure I can carve out the time to make them happen.
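To make the mechanics of lines, stops, and tours a little more concrete, here is a minimal sketch of how such a map might be modeled as plain data. The line names come from the research areas above, but the stops, tour contents, and function names are my own assumptions, not an actual API Transit schema:

```python
# Hypothetical data model for an API Transit map: lines carry ordered
# stops, and tours bridge multiple lines, hitting only selected stops.

transit_map = {
    "lines": {
        "definition": ["openapi", "json-schema"],
        "design": ["style-guide", "versioning"],
        "security": ["authentication", "encryption", "auditing"],
        "support": ["email", "tickets", "forum"],
    },
    "tours": {
        # Each tour is an ordered list of (line, stop) pairs.
        "introduction": [("definition", "openapi"), ("design", "style-guide")],
        "regulator": [("security", "auditing"), ("support", "tickets")],
    },
}

def tour_stops(map_data, tour_name):
    """Resolve a tour into its ordered (line, stop) pairs, validating
    that every referenced stop actually exists on its line."""
    stops = []
    for line, stop in map_data["tours"][tour_name]:
        if stop not in map_data["lines"][line]:
            raise ValueError(f"unknown stop {stop!r} on line {line!r}")
        stops.append((line, stop))
    return stops

print(tour_stops(transit_map, "introduction"))
# → [('definition', 'openapi'), ('design', 'style-guide')]
```

Keeping the map as data like this is what would let a Jekyll-style client render the same lines and stops differently for each audience, with a tour simply being a curated path through the shared structure.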


The API Coaches At Capital One

API evangelism, and even advocacy, at many organizations has always been a challenge to introduce, because many groups aren’t really well versed in the discipline, and oftentimes it tends to take on a more marketing, or even sales-like, approach, which can hurt its impact. I’ve worked with groups to rebrand, and change how they evangelize APIs internally, with partners, and with the public, trying to ensure the efforts are more effective. While I still bundle all of this under my API evangelism research, I am always looking for new approaches that push the boundaries, and evolve what we know as API evangelism, advocacy, outreach, and other variations.

I was introduced to a new variation of the internal API evangelism concept a few weeks back while at Capital One, talking with my friend Matthew Reinbold (@libel_vox) about their approach to API governance. His team at the Capital One API Center of Excellence has the concept of the API coach, and I think Matt’s description from his recent API governance blueprint story sums it up well:

_At minimum, the standards must be a journey, not a destination. A key component to “selective standardization” is knowing what to select. It is one thing for us in our ivory tower to throw darts at market forces and team needs. It is entirely another to repeatedly engage with those doing the work._

_Our coaching effort identifies those passionate practitioners throughout our lines of business who have raised their hands and said, “getting this right is important to my teams and me”. Coaches not only receive additional training that they then apply to their teams. They also earn access to evolving our standards._

_In this way, standards aren’t something that are dictated to teams. Teams drive the standards. These aren’t alien requirements from another planet. They see their own needs and concerns reflected back at them. That is an incredibly powerful motivator toward acceptance and buy-in._

A significant difference between internal API evangelism and API coaching is that you aren’t just pushing the concept of APIs (evangelizing), you are going the extra mile to focus on healthy practices, standards, and API governance. Evangelism is often seen as an API provider to API consumer effort, which doesn’t always translate to API governance internally, across organizations who are developing, deploying, and managing APIs. API coaches aren’t just developing API awareness across organizations, they are cultivating a standardized, bottom up, as well as top down, awareness around providing and consuming APIs, providing a much more advanced look at what is needed across larger organizations when it comes to outreach and communication.

Another interesting aspect of Capital One’s approach to API coaching is that this isn’t just about top down governance, it has a bottom up, team-centered, and very organic approach to API governance. It is about standardizing, and evolving culture across many organizations, but in a way that allows teams to have a voice, and not just be mandated what the rules are, and required to comply. The absence of this type of mindset is the biggest contributor to the lack of API governance we see across the API community today. This is what I consider the politics of APIs, something that often trumps the technology of all of this.

API coaching augments my API evangelism research in a new and interesting way. It also dovetails with my API design research, as well as begins rounding off a new area I’ve wanted to add for some time, but just have not seen enough activity in to warrant doing so–API governance. I’m not a big fan of the top down governance that was given to us by our SOA grandfathers, and the API space has largely been doing alright without the presence of API governance, but I feel like it is approaching the phase where a lack of governance will begin to do more harm than good. It’s a drum I will start beating, with the help of Matt and his team’s work at Capital One. I’m going to reach out to some of the other folks I’ve talked with about API governance in the past, and see if I can produce enough research to get the ball rolling.


Learning About API Governance From Capital One DevExchange

I am still working through my notes from a recent visit to Capital One, where I spent time talking with Matthew Reinbold (@libel_vox) about their API governance strategy. I was given a walk through their approach to defining API standards across groups, as well as how they incentivize, encourage, and even measure what is happening. I’m still processing my notes from our talk, and waiting to see Matt publish more on his work, before I publish too many details, but I think it is worth looking at from a high level view, setting the bar for other API governance conversations I am engaging in.

First, what is API governance? I personally know that many of my readers have a lot of misconceptions about what it is, and what it isn’t. I’m not interested in defining a single definition of API governance. I am hoping to help define it so that you can find a version of it that you can apply across your API operations. API governance, at its simplest form, is about ensuring consistency in how you do APIs across your development groups. A more robust definition might be about having an individual or team dedicated to establishing organization-wide API standards, helping train, educate, enforce, and, in the case of Capital One, measure their success.

Before you can begin thinking about API governance, you need to start establishing what your API standards are. In my experience this usually begins with API design, but should quickly also be about consistent API deployment, management, monitoring, testing, SDKs, clients, and every other stop along the API lifecycle. Without well-defined, and properly socialized API standards, you won’t be able to establish any sort of API governance that has any sort of impact. I know this sounds simple, but I know more API providers who do not have any kind of API design guide, or other guide for their operations, than I know API providers who have consistent guides to design, and other stops along their API lifecycle.

Many API providers are still learning about what consistent API design, deployment, and management looks like. In the API industry we need to figure out how to help folks begin establishing organization-wide API design guides, and get them on the road towards being able to establish an API governance program–it is something we suck at currently. Once API design, then deployment and management practices get defined, we can begin to realize some standard approaches to monitoring, testing, and measuring how effective API operations are. This is where organizations will begin to see the benefits of doing API governance, and see that it is not just a pipe dream–something you can’t ever realize if you don’t start with the basics, like establishing an API design guide for your group. Do you have an API design guide for your group?

While talking with Matt about their approach at Capital One, he asked if it was comparable to what else I’ve seen out there. I had to be honest. I’ve never come across an organization that had established API design, deployment, and management practices, was actively educating and training their staff, and was then actually measuring the impact and performance of APIs, and the teams behind them. I know there are companies who are doing this, but since I tend to talk to more companies who are just getting started on their API journey, I’m not seeing any one organization that is this advanced. Most companies I know do not even have an API design guide, let alone measure the success of their API governance program. It is something I know a handful of companies would like to strive towards, but at the moment API governance is more talk than it is reality.

If you are talking API governance at your organization, I’d love to learn more about what you are up to, no matter where you are at in your journey. I’m going to be mapping out what I’ve learned from Matt, and comparing it with what I’ve learned from other organizations. I will be publishing it all as stories here on API Evangelist, but will also look to publish a guide and white papers on the subject as I learn more. I’ve worked with some universities, government agencies, as well as companies on their API governance strategies. API governance is something that I know many API providers are working on, but Capital One was definitely the furthest along in their journey that I have come across to date. I’m stoked that they are willing to share their story, and don’t see it as their secret sauce, as it is something that doesn’t just need sharing–we need leaders to step up and show everyone else how it can be done.


Just Waiting The GraphQL Assault Out

I was reading a story on GraphQL this weekend, which I won’t be linking to or citing because that is what they want, and they do not deserve the attention–it was just (yet) another hating on REST post. As I’ve mentioned before, GraphQL’s primary strength seems to be that it has endless waves of bros who love to write blog posts hating on REST, and web APIs. This particular post shows its absurdity by stating that HTTP is just a bad idea, wait…uh what? Yeah, you know that thing we use for the entire web, apparently it’s just not a good idea when it comes to exchanging data. Ok, buddy.

When it comes to GraphQL, I’m still watching, learning, and will continue evaluating it as a tool in my API toolbox, but when it comes to the argument of GraphQL vs. web APIs, I will just be waiting out the current assault as I did with all the other haters. The linked data haters ran out of steam. The hypermedia haters ran out of steam. The GraphQL haters will also run out of steam. All of these technologies are viable tools in our API toolbox, but NONE of them are THE solution. These assaults on “what came before” are just a very tired tactic in the toolbox of startups–you hire young men, give them some cash (which doesn’t last for long), get them all wound up, and let them loose talking trash on the space, selling your warez.

GraphQL has many uses. It is not a replacement for web APIs. It is just one tool in our toolbox. If you are following the advice of any of these web API haters, you will wake up in a couple of years with a significant amount of technical debt, and probably also be very busy chasing the next wave of technology being pushed by vendors. My advice is that all API providers learn about the web, gain several years of experience developing web APIs, learn about linked data, hypermedia, GraphQL, and even gRPC if you have some high performance, high volume needs. Don’t spend much time listening to the haters, as they really don’t deserve your attention. Eventually they will go away, find another job, and another technological kool-aid to drink.

In my opinion, there is (almost) always a grain of usefulness with each wave of technology that comes along. The trick is cutting through the bullshit, tuning out the haters, and understanding what is real and what is not when it comes to the vendor noise. You should not be adopting every trend that comes along, but you should be tuning into the conversation and learning. After you do this long enough you will begin to see the patterns and tricks used by folks trying to push their warez. Hating on whatever came before is just one of these tricks. This is why startups hire young, energetic, and usually male voices to lead the charge, as they have no sense of history, and truly believe what they are pushing. Your job as a technologist is to develop the experience necessary to know what is real, and what is not, and keep a cool head as the volume gets turned up on each technological assault.


Revisiting GraphQL As Part Of My API Toolbox

I’ve been reading and curating information on GraphQL as part of my regular research and monitoring of the API space for some time now. As part of this work, I wanted to take a moment and revisit my earlier thoughts about GraphQL, and see where I currently stand. Honestly, not much has changed to move me in one direction or another regarding this popular approach to providing API access to data and content resources.

I still stand by my cautionary advice for GraphQL evangelists regarding not taking such an adversarial stance when it comes to the API approach, and I feel that GraphQL is a good addition for any API architect looking to have a robust and diverse API toolbox. Even with the regular drumbeat from GraphQL evangelists, and significant adoption like the Github GraphQL API, I am not convinced it is the solution for all APIs, or a replacement for simple RESTful web API design.

My current position is that the loudest advocates for GraphQL aren’t looking at the holistic benefits of REST, and are too wrapped up in ideology, which is setting them up for challenges similar to those that linked data, hypermedia, and even early RESTafarian practitioners have faced. I think GraphQL excels when you have a well educated, known, and savvy audience, who are focused on developing web and mobile applications–especially the latest breed of single page applications (SPA). I feel like in this environment GraphQL is going to rock it, and help API providers reduce friction for their consumers.

This is why I’m offering advice to GraphQL evangelists to turn down the anti-REST rhetoric, and the framing of GraphQL as a complete replacement or alternative for REST–it ain’t helping your cause, and will backfire on you. You are better off educating folks about the positives, and being honest about the negatives. I will keep studying GraphQL, understanding the impact it is making, and keeping an eye on important implementations. However, when it comes to writing about GraphQL you are going to see me continue to hold back, just like I did when it came to hypermedia and linked data, because I prefer not to be in the middle of ideological battles in the API space. I prefer showcasing the useful tools and approaches that are making a significant impact across a diverse range of API providers–not just echoing what is coming out of a handful of big providers, amplified by vendors and growth hackers looking for conversions.
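The friction-reduction argument for the SPA use case boils down to letting the client name exactly the fields it needs in a single request, instead of receiving a full resource representation and discarding most of it. A toy sketch of that field-selection idea, using an invented user record and a plain Python function rather than any real GraphQL server library:

```python
# Toy illustration of GraphQL-style field selection. No real GraphQL
# library is involved; the "user" record and field names are invented.

USER = {
    "id": "42",
    "name": "Jane",
    "email": "jane@example.com",
    "avatar_url": "https://example.com/jane.png",
    "created_at": "2017-01-01",
}

def select_fields(record, requested):
    """Return only the fields the client asked for, the way a GraphQL
    resolver shapes its response to match the query document."""
    return {field: record[field] for field in requested if field in record}

# A single page application rendering a profile card asks for exactly
# the two fields it will display, and nothing else comes over the wire.
print(select_fields(USER, ["name", "avatar_url"]))
# → {'name': 'Jane', 'avatar_url': 'https://example.com/jane.png'}
```

This is the part of GraphQL worth understanding on its own merits, separate from the anti-REST rhetoric: the shape of the response is driven by the client, which is genuinely useful for known, savvy audiences, without making it a replacement for web APIs in general.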


If you think there is a link I should have listed here feel free to tweet it at me, or submit as a Github issue. Even though I do this full time, I'm still a one person show, and I miss quite a bit, and depend on my network to help me know what is going on.