13 Nov 2024 ApiDays Melbourne 2024
A couple of weeks ago I attended ApiDays Australia 2024 with my colleagues Abhishek Rana and Prodip Guha Roy.
Though working across different client sites, all three of us are software developers who have been primarily building and maintaining APIs for most of 2024, so when ApiDays came around, it seemed perfectly relevant for us to attend to keep up to date with the latest trends/tools, and to hear and learn from fellow API people.
In this post each of us will share some of our general thoughts and insights gathered from the conference, and some memorable sessions from each of us.

Jimmy: API buzzwords of the day
To my surprise, the sessions at ApiDays were each only 25 minutes long (unlike most other conferences I’ve attended, where sessions usually run for 40-50 minutes). While this meant we could cover a broad range of topics, I felt the speakers could not go into sufficient depth. For us, as fully hands-on devs, they fell short of satisfying our technical cravings. It also meant a lot of moving between rooms to get to the sessions we were interested in.
Given that was the case, we started to notice certain buzzwords coming up repeatedly. This led us to believe that these were symptoms felt throughout the API landscape. I think they are worth sharing: they resonate with me personally, and I can testify to their prevalence across most of my clients.
API Sprawl
I heard or saw the term “API Sprawl” mentioned at least 4 times by separate speakers at the conference.
A helpful definition from Akamai was shared by one of the speakers: “API sprawl is a term used to describe the uncontrolled proliferation of APIs within an organization.” The quote goes on to say that “This can result in a large number of APIs that are not properly managed, documented, or secured.”
This was simple enough to understand and observe; it didn’t take me long to nod and go “yup, I’ve been there”. As a consultant, I’ve often been brought on to work in a team or department of an organisation to help build some APIs for various valid business use cases. I’d come in, build some APIs, test, deploy, document, handover, and that’s the end of my engagement. What I don’t often get exposed to is the broader organisation-wide governance over APIs. I can imagine that I’m not the first consultant to come through their doors to “build some APIs” and then finish up. Is someone with a higher vantage point looking at the sprawl of APIs to do any kind of consolidation or management? I hope so.
API sprawl might seem like a prevalent problem, but is it all bad? Part of why API sprawl is even an issue comes down to the reality that organisations are now able to deploy fast! Through microservice patterns, cloud infrastructure offerings, and elegant CI/CD pipelines, spinning up an API is quick and trivial. Perhaps we’ve reached the point where focus and governance need more attention.
With a sprawl of APIs across an organisation, security also becomes a bigger concern. The still-recent Optus data breach serves as an example, where an unauthenticated and unused API was exploited to leak customers’ PII. A lot can be said about that breach, but surely part of it came down to an unmanaged suite of APIs: this unfortunate one went unchecked, unnoticed, and unguarded. As one of the speakers put it, “A data breach is only the symptom. The sprawl is the cause.” You can’t protect what you can’t see. This leads me to the next buzzword.
API Inventory
“You can’t protect what you can’t see” is the essence of this next buzzword. Several sponsors and speakers harped on about the lack of an API inventory, asking questions like:
- What APIs does your organisation have?
- How are they accessed?
- What data flows via those APIs?
- What is the security posture of those APIs?
All good questions, but surprisingly hard to answer if controls are not already in place. What would it take for you and your team to provide these answers? If it takes days of surfing through network logs, app logs, code repositories, and documentation to gather a view of your current state, then it’s arguable that you’re in danger. You don’t seem to be sufficiently aware of what it is you’re operating and protecting.
If the problem is well defined and easily identified, what solution is there? All I retained were more buzzwords such as:
- Automated API discovery (via crawlers going through API traffic + codebase)
- API platforms
- Discoverability of APIs (think OpenAPI specifications, tagging, categorisation, etc)
- And more…
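As a tiny illustration of the discoverability idea, an inventory can start from the OpenAPI specs an organisation already has. A minimal sketch (the spec dict below is a made-up example):

```python
# Minimal sketch: build an API inventory by extracting operations
# from OpenAPI documents. The spec below is a hypothetical example.

HTTP_METHODS = {"get", "put", "post", "delete", "patch", "options", "head"}

def extract_operations(spec: dict) -> list[tuple[str, str, str]]:
    """Return (method, path, summary) tuples from one OpenAPI spec."""
    inventory = []
    for path, item in spec.get("paths", {}).items():
        for method, op in item.items():
            if method in HTTP_METHODS:
                inventory.append((method.upper(), path, op.get("summary", "")))
    return inventory

users_spec = {
    "info": {"title": "Users API"},
    "paths": {
        "/users/{id}": {
            "get": {"summary": "Fetch a user"},
            "delete": {"summary": "Remove a user"},
        }
    },
}

print(extract_operations(users_spec))
# [('GET', '/users/{id}', 'Fetch a user'), ('DELETE', '/users/{id}', 'Remove a user')]
```

Running this over every spec in a repository, and tagging the results, is the crude beginning of the inventory the speakers were asking for.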
API Gateway + WAF is not enough
Though not discouraging the use of API gateways or WAFs (web application firewalls), a myriad of speakers insisted that I’d be fooling myself if I thought my APIs were secure simply because I had them sitting behind an API gateway and/or a WAF.
An API gateway, however useful, is just a policy enforcement point. It offers limited threat detection and little intelligence; its rules are predefined, so it cannot adapt on its own.
WAFs, on the other hand, work well as long as attackers play by your rule book, but fall short when they don’t. They lack context and cannot help with business logic violations.
For example, I could rightfully access an API on https://fake.site/users/1234 if my user ID is 1234. But what if I were to try accessing https://fake.site/users/12345, and https://fake.site/users/99999? What would stop me? Neither the gateway nor the WAF would be sufficient to detect that I was attempting to enumerate user IDs to gain unauthorized access to another user. Rate limiting might prevent me from retrieving a lot of user info in a short burst, but 10, or 100, or 1000 users per day, could easily fly under most rate limiting thresholds. The right solution is to have appropriate authorization measures in place to reject my attempts to make such requests. It would be nice if we could also be alerted to attempts at such unauthorized access.
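The object-level authorization measure described above can be sketched in a few lines. This is a toy, framework-free version; the user store, error type, and handler shape are all invented for illustration:

```python
# Sketch of an object-level authorization check for GET /users/{id}.
# The user store and error type are hypothetical stand-ins.

class Forbidden(Exception):
    pass

USERS = {1234: {"name": "Jimmy"}, 12345: {"name": "Someone else"}}

def get_user(requested_id: int, authenticated_id: int) -> dict:
    # Reject any request for a record the caller does not own,
    # regardless of whether the guessed URL happens to be valid.
    if requested_id != authenticated_id:
        raise Forbidden(f"user {authenticated_id} may not read user {requested_id}")
    return USERS[requested_id]

print(get_user(1234, 1234))  # the owner can read their own record
# get_user(12345, 1234) would raise Forbidden: the enumeration attempt is blocked
```

The point is that the check compares the requested resource against the authenticated identity on every call, which neither a gateway nor a WAF does for you.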
Several speakers proposed solutions/tools that revolve around providing additional context to detect deviations from the norm. Taking into account historical data/logs, modern API security tools can identify abnormal patterns of API access which could be malicious. This is where AI (speaking of buzzwords) is starting to be used with promising results.
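As a toy illustration of what such context-aware tools do (real products use far richer models), flagging a caller who touches an unusually high number of distinct user IDs might look like this; the log format and threshold here are invented:

```python
# Toy illustration of context-aware detection: flag callers who touch
# far more distinct user IDs than the historical norm. Real tools use
# richer models; the threshold and log format here are made up.

from collections import defaultdict

def flag_enumerators(access_log: list[tuple[str, int]], max_distinct: int = 3) -> set[str]:
    """access_log holds (caller, requested_user_id); return suspicious callers."""
    seen = defaultdict(set)
    for caller, user_id in access_log:
        seen[caller].add(user_id)
    return {caller for caller, ids in seen.items() if len(ids) > max_distinct}

# alice repeatedly reads her own record; mallory walks through 10 IDs
log = [("alice", 1234)] * 50 + [("mallory", uid) for uid in range(1000, 1010)]
print(flag_enumerators(log))  # {'mallory'}
```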
Abhishek: Security in the API Lifecycle
Attending the conference for the first time as an API developer was an exciting experience. It offered fresh insights into topics like API sprawl, API inventory, and federated APIs: concepts I had encountered but hadn’t deeply explored. While the event was informative, it leaned more toward showcasing API tools and software on the market, and felt more geared toward API testers than toward in-depth API development practices.
A highlight was a session on “Security in the API Lifecycle” by Akalanka and Sudeep from Deloitte. It addressed one of the most critical challenges in our field: choosing the right OAuth 2.0 flow to secure APIs. This session outlined various access control techniques and clarified when to use each flow based on the type of client application.
1. Client Credentials Grant Flow
The client app in this flow must be highly trusted, as it requests access on its own behalf. In this scenario, the resources are owned by the client, eliminating the need for end-user authorization. This flow is ideal for Machine-to-Machine Authorization.
Using Client Credentials Grant Flow is appropriate when there are no end-users involved, and the client app is highly trusted, such as in interactions between internal microservices.
It should never be used when end-users are involved, or for public web/mobile applications, due to the security risk of exposing credentials stored in the front end.
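For reference, a Client Credentials token request is just a POST carrying the client’s own credentials and grant_type=client_credentials (RFC 6749 §4.4). A sketch of the request’s shape, with a placeholder endpoint, client ID, and secret:

```python
# Shape of a Client Credentials grant token request (RFC 6749 §4.4).
# Endpoint and credentials are placeholders; in practice the POST is
# sent over HTTPS by a trusted backend service.

import base64

def client_credentials_request(token_url: str, client_id: str,
                               client_secret: str, scope: str) -> dict:
    basic = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    return {
        "url": token_url,
        "headers": {"Authorization": f"Basic {basic}",
                    "Content-Type": "application/x-www-form-urlencoded"},
        "body": {"grant_type": "client_credentials", "scope": scope},
    }

req = client_credentials_request("https://auth.example.com/token",
                                 "service-a", "s3cret", "orders:read")
print(req["body"]["grant_type"])  # client_credentials
```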

2. Authorization Code Flow
This flow is suitable for client applications that can securely manage credentials. It is particularly effective in scenarios where there is interaction between client apps and user agents, such as web or mobile browsers. This flow is best suited for high-security requirements, as the token exchange process is distinct from the authorization step.
The Authorization Code Flow is appropriate when both the user and the client app need to be authenticated and authorized.
It is advisable to avoid using this flow on its own for public-facing applications or single-page applications (SPAs), since they cannot securely store a client secret.
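The first leg of the Authorization Code Flow is a redirect of the user agent to the authorization endpoint. A minimal sketch of how that URL is built, with an invented endpoint and client details:

```python
# Sketch of the first leg of the Authorization Code Flow: redirecting
# the user agent to the authorization endpoint. The endpoint, client_id,
# and redirect URI below are invented for illustration.

from urllib.parse import urlencode

def build_authorize_url(auth_endpoint: str, client_id: str,
                        redirect_uri: str, scope: str, state: str) -> str:
    params = {
        "response_type": "code",  # ask for an authorization code, not a token
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,
        "state": state,           # CSRF protection, echoed back on the redirect
    }
    return f"{auth_endpoint}?{urlencode(params)}"

url = build_authorize_url("https://auth.example.com/authorize", "web-app",
                          "https://app.example.com/callback", "profile", "xyz123")
print(url)
```

The code that comes back on the redirect is then exchanged for a token in a separate back-channel request, which is exactly the separation the speakers highlighted.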

3. Authorization Code Flow with PKCE Extension (Proof Key for Code Exchange)
This enhances security by requiring an additional dynamically-generated code (the code verifier) during the exchange for an access token, mitigating the risk of authorization code interception.
This way, even if an attacker gets hold of the authorization code, they will not be able to obtain an access token without the code verifier.
Use it for mobile and single-page applications (SPAs) where a client secret cannot be securely stored, and for public clients or any scenario where additional protection is needed against interception and replay attacks.
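The code verifier and challenge themselves are simple to derive (RFC 7636): a random string, and the base64url-encoded SHA-256 of it. A minimal sketch using only the standard library:

```python
# Deriving a PKCE code verifier and S256 code challenge (RFC 7636),
# using only the standard library. The verifier must be freshly
# generated for every authorization attempt.

import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
# The client sends `challenge` with the authorize request and `verifier`
# with the token request; the server recomputes SHA-256(verifier) to match.
print(len(verifier))  # 43
```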
4. Password Grant Flow
This is the simplest of all OAuth 2.0 flows and was specifically designed for confidential clients (server-side applications). In this flow, the client app has direct access to user credentials: users enter their login information directly into the client app, which then uses those credentials to obtain an access token from the authorization server. Handling raw user credentials in the client is inherently risky.
Due to these risks, the Password Grant flow is not recommended for public clients (like mobile or SPA applications). It is now largely deprecated in favour of safer authentication methods.
5. Implicit Flow (Legacy)
This was designed for client-side applications that run in the user’s browser (JavaScript) or as native apps. The Implicit Flow delivers the access token directly to the client via the browser redirect (URL fragment), which makes it vulnerable to interception or exposure in browser history, logs, or even cross-site scripting (XSS) attacks.
Use the Implicit Flow only when the authorization server is legacy and cannot support Cross-Origin Resource Sharing (CORS); upgrade to Authorization Code Flow with PKCE wherever possible.
Prodip: Federated API Management
ApiDays was the first conference I have ever attended. Not knowing what to expect, I was pleasantly surprised by the variety of topics covered during the two-day conference. After attending over 20 sessions, I can broadly categorise the presentations into three main areas: AI (the current buzzword), API Security, and API Management. While most sessions were not highly technical, they provided insights into the current API landscape and highlighted various tools and software companies are using to tackle real-world problems.
Although many sessions focused on API management and governance, one that particularly caught my attention was on Federated API Management, delivered by Markus Müller, the Global Field CTO of Boomi.
Market research from Boomi shows that:
- On average, an organisation manages over 600 APIs.
- 48% of surveyed organisations reported API sprawl as their top challenge.
- 68% of surveyed organisations had exposed shadow APIs.
- 57% of surveyed organisations were using API gateways from more than one vendor, with some using more than four.
Although there are various ways to manage and maintain APIs, Markus Müller made a strong case for Federated API Management: a model that brings these disparate gateways under unified governance. It consists of multiple distributed “data planes” that operate independently, with a single “control plane” that manages all the data planes centrally.

For instance, an organisation can have multiple gateways built to serve specific purposes across different cloud services. Each gateway functions as a data plane managed from a centralised application, making API management and governance tasks—like API discovery, key creation, and key revocation—much easier.
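A toy sketch of that control-plane/data-plane split, with invented gateway names and a single illustrative governance action (revoking a leaked key everywhere):

```python
# Toy sketch of the control-plane / data-plane split described above.
# Gateway names and the revoke operation are illustrative only; a real
# control plane would talk to heterogeneous vendor gateways over APIs.

class Gateway:  # a data plane, e.g. one cloud's gateway
    def __init__(self, name: str):
        self.name = name
        self.revoked: set[str] = set()

    def revoke_key(self, api_key: str) -> None:
        self.revoked.add(api_key)

class ControlPlane:
    def __init__(self) -> None:
        self.gateways: list[Gateway] = []

    def register(self, gw: Gateway) -> None:
        self.gateways.append(gw)

    def revoke_everywhere(self, api_key: str) -> None:
        # One governance action applied consistently across all data planes.
        for gw in self.gateways:
            gw.revoke_key(api_key)

cp = ControlPlane()
aws_gw, azure_gw = Gateway("aws-east"), Gateway("azure-au")
cp.register(aws_gw)
cp.register(azure_gw)
cp.revoke_everywhere("leaked-key-123")
print(all("leaked-key-123" in gw.revoked for gw in cp.gateways))  # True
```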

Overall, ApiDays Melbourne 2024 was a treat. We feel like we walked away from the conference better equipped with new tools, more aware of the API landscape, oh, and with lots of swag! Socks, t-shirts, a headphone case, stickers, a water flask, a keychain…