My experience attending CyberCon 2023

The Australian Cyber Conference 2023 (aka CyberCon) was held last week (17-19 Oct) and I had the privilege of attending for the first time. In this post I will share my experience over the three days.

It might be worth noting that my professional role is not necessarily cyber security oriented. I am a software developer who mainly plays around in the backend or with devops. However, I think it’s fair to say that cyber security is deeply connected to those areas, so I am nonetheless a stakeholder, practitioner, and advocate of it.

First Impressions

Before day 1, I had looked through the agenda and picked out the sessions I would be interested in. Apart from that, I knew little of what to expect from CyberCon.

To my surprise, this conference was significantly larger (in venue, attendees, and content) than other dev conferences I’ve been to. It boasted famous keynote speakers such as Cathy Freeman, Chris Hadfield, Mikko Hyppönen, Brian Cox, Taryn Brumfitt, and more, though I did find it odd that most of the keynote sessions were largely unrelated to cyber security (or IT for that matter).

The expo hall was packed with vendors, each armed with the trademark swag giveaway. It was funny to see the kinds of gifts and prizes that were up for grabs! They certainly knew how to pander to a nerdy, geeky sector.

The catering was top notch too. Alongside the wonderful lunches and afternoon tea, CyberCon also had multiple coffee carts, a juice bar, and even an ice-cream cart!

Well, enough about the fun stuff; let’s get down to business. I tried to spread out my session choices to include a range of topics: things I was familiar with and things I was not, topics that were more technical and those that were not. I will now share three sessions which left a lasting imprint on my mind.

1. Mattia Rossi: “Bridging the gap between risk and controls”

In his words, the gap Mattia identified stems from “the expectation Business has from security risks compared to what Security is providing by ensuring security controls are met”. This is a three-part gap that comes from relevance, accuracy, and timeliness.

Relevance

The Business expects risks raised by Security to be relevant to their area, yet we often find Business lukewarm towards security reports and risk assessments. Why so? Mattia suggests that Security often raises risks that are not relevant because a framework/control set is blindly followed. Every project/system has a specific environment, a specific technical context, and uses specific components/libraries, but the security policies and frameworks that are applied and enforced are rarely customised. What comes out are un-customised reports full of risks without context, risks with ratings that have neither been reviewed nor confirmed by Security, and risks inherited from sub-dependencies rather than from the solution itself. Running these security checks and controls ends up being nothing more than a tick-box exercise. Mattia drilled home the point of security vs compliance: adhering to the Essential Eight, ISO 27001, or whatever else may sound like Security has done its job, but in reality it often merely ticks a compliance check-box, leaving the system not necessarily well secured.

I’ve experienced this myself: a client’s security team gave me a set of restrictions that had been enforced for all of their on-prem VMs, and I was told to lock my VM down following the same set of rules. But the difference was that the project was an AWS cloud-based solution, and they had not considered the differences between an EC2 instance running in AWS and an on-prem VM. So I was almost forced to configure password rotation and password complexity rules on an AMI where the only “user” was in fact an AWS Lambda which would execute two commands via SSH. Thankfully, after we understood the context together, I did not have to go down that road.

Accuracy

Mattia again reminded us how security practitioners often perform framework-based tick-box exercises which only indicate that a control has been fulfilled. They do not explain why the risk matters or how bad it is/was. Business is then lacking the information needed to decide whether prioritising resources to fix the remaining risks in the framework is worth it. Business wants answers to:

  • How bad is it?
  • How much of the organisation will need to change to remove/reduce the risk?
  • How much work is required to remediate the risk?
  • What remediation options are there?
  • Is the business consequence immediate or delayed?

Security practitioners often use a classic 5×5 risk assessment matrix (likelihood × impact) to present a case for the urgency of a fix.

But Mattia argues that the matrix provides little value unless the severity of the risk is contextualised into actual business impact.
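To make that concrete, here is a minimal sketch (my own illustration, not from the talk) of the difference: the matrix alone reduces a risk to a score and a rating band, and it only becomes actionable once it is paired with a plain statement of business impact. All names and bands below are assumptions for illustration.

```python
# A minimal sketch (my own illustration, not from the talk): the 5x5 matrix
# alone yields an abstract rating; pairing it with a concrete statement of
# business impact is what makes it actionable.

def matrix_rating(likelihood: int, impact: int) -> str:
    """Map 1-5 likelihood and 1-5 impact scores onto a rating band."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must each be between 1 and 5")
    score = likelihood * impact
    if score >= 17:
        return "Critical"
    if score >= 10:
        return "High"
    if score >= 5:
        return "Medium"
    return "Low"

def contextualised_risk(likelihood: int, impact: int, business_impact: str) -> dict:
    """Attach the abstract rating to what it actually means for the business."""
    return {
        "rating": matrix_rating(likelihood, impact),
        "score": likelihood * impact,
        "business_impact": business_impact,
    }

# Hypothetical example: the same "High" cell reads very differently once the
# business consequence is spelled out.
print(contextualised_risk(3, 4, "payment API unavailable for ~2 hours per incident"))
```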

Timeliness

Timeliness of risks (or exemptions, or assessments) is most crucial in xOps environments. Nowadays, many security tools are added to CI/CD pipelines to improve automation (e.g. SAST, DAST, container scanning, API scanning, config scanning), yet these tools often act only as enforcement points (a gate). They are not tailored to the risks (relevance, accuracy). Mattia argues that you cannot maintain a manual exemption process within an automated enforcement process; it only leads to being stuck in a manual exemption loop. Instead, the security enforcements are almost always simply disabled.
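As a rough illustration of the alternative he was pointing towards, here is a sketch (my own, with hypothetical names, not a real tool’s API) of a pipeline gate that consults recorded, context-rich, time-bound exemptions rather than forcing a manual sign-off on every run.

```python
# Hypothetical sketch of an automated gate that honours recorded, time-bound,
# context-rich exemptions instead of requiring a manual sign-off on every run.
# All names here are my own assumptions, not a real tool's API.
from dataclasses import dataclass
from datetime import date

@dataclass
class Finding:
    rule_id: str      # e.g. a SAST rule or CVE identifier
    component: str    # where it was found
    severity: str     # "low" | "medium" | "high" | "critical"

@dataclass
class Exemption:
    rule_id: str
    component: str
    context: str      # why this finding does not apply in this environment
    expires: date     # exemptions are revisited, never permanent

def gate(findings: list[Finding], exemptions: list[Exemption], today: date) -> bool:
    """Allow the pipeline to proceed only if every high/critical finding is
    covered by a currently valid exemption."""
    active = {(e.rule_id, e.component) for e in exemptions if e.expires >= today}
    blocking = [
        f for f in findings
        if f.severity in ("high", "critical") and (f.rule_id, f.component) not in active
    ]
    for f in blocking:
        print(f"BLOCKED: {f.rule_id} in {f.component} ({f.severity})")
    return not blocking
```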

How to fix it

How do we bridge the gap? Mattia provided a few approaches, while recognising that there is no definitive “fix” for the problem.

  1. Provide guidance for any selected security framework which clearly points out the context under which controls do or don’t apply. The key point is not to just pick frameworks or tools and enforce them; it all needs to be contextual.
  2. Ensure that whenever exemptions or risks are recorded, the context is recorded as well. Too often, current exemptions set the precedent for future exemptions, so record the context to avoid having exemptions misapplied.
  3. Record risks in a way that lets dependencies between risks be captured, so that the actual business impact can be traced. Build/use a risk hierarchy (knowing your inventory first is a must!); a rough sketch of such a record follows after this list.
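The sketch below (my own illustration, with hypothetical field names) shows the idea behind points 2 and 3: each risk record carries its context and its links to underlying risks, so a business-level risk can be expanded into the technical findings that actually drive it.

```python
# Illustrative sketch only (hypothetical field names): record each risk with
# its context and its links to underlying risks, so a business-level risk can
# be expanded into the technical findings that actually drive it.
from dataclasses import dataclass, field

@dataclass
class Risk:
    risk_id: str
    description: str
    context: str                    # environment, component, assumptions
    business_impact: str            # what actually breaks for the business
    depends_on: list[str] = field(default_factory=list)  # ids of underlying risks

def expand(risk_id: str, register: dict[str, Risk]) -> list[str]:
    """List a risk and everything it depends on, depth-first, without cycles."""
    seen: set[str] = set()
    chain: list[str] = []
    stack = [risk_id]
    while stack:
        current = stack.pop()
        if current in seen or current not in register:
            continue
        seen.add(current)
        risk = register[current]
        chain.append(f"{risk.risk_id}: {risk.description} -> {risk.business_impact}")
        stack.extend(risk.depends_on)
    return chain
```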

I enjoyed this session from Mattia largely because the gap he identified really reflected several of my own painful experiences with Security. It was comforting to know that there are people out there in the Cyber Security space reflecting, investigating, and pushing for an overall smoother and more secure approach to implementing “security”.

2. Mikko Hyppönen: “Safety and security of Artificial Intelligence”

Mikko, a Finnish computer security expert, speaker, and author, gave a rather chilling talk about AI. Though speculative and high level, it raised some jarring thoughts which I hadn’t previously sat down to ponder.

Given that AI is no new concept (the term was coined in the 1950s), Mikko first explained why AI has boomed in just the last few years. Three factors have allowed AI to flourish in its current form:

  1. Internet – turned everything into data, human knowledge is digitised
  2. Cloud – capability to store the world’s data in a very accessible way
  3. Power – processing power, silicon chips, etc

These factors gave rise to immensely powerful tools such as the Generative Pre-trained Transformer (GPT) family of AI models.

A few thoughts and comments from Mikko stuck out to me:

  • Is generative AI ok? While something generated is technically something “new” and not a copy, is it still “right” to use the source material as its base data? For example, if I generated a song in the style of The Beatles, using AI to imitate their voice and sound but setting it to entirely different lyrics, would I need The Beatles’ consent? Would they have a say in what I’m allowed to generate that mimics their characteristic vocals and sound?
  • OpenAI is not actually open source. It started out as open source but no longer is. Its organisational structure is also unconventional because it tries to walk the line of ethically pioneering the future of technology (and perhaps life itself) without being sucked into making decisions for profit or for investors. Perhaps there are good reasons for that structure; Mikko himself was divided on the thought.
  • Following on from OpenAI’s closed structure: the alternative, had it not taken that path, could be a GPT that is unrestricted, unbiased, and, well, open. One example is WormGPT, which has no problem giving out recipes for drugs or bombs. After all, it is simply serving up information from the vast data it was trained on.
  • Further on the topic of ethics, the testers at OpenAI have quite the task at hand. They’ve shared an instance of a GPT-4 test where it managed to pay a human to complete a CAPTCHA on its behalf, lying to the human in the process by claiming that it was not a robot (read an article about it here, which also links to GPT-4’s actual technical report; see page 55, plus several other risky and questionable scenarios on pages 48-51). This is only one of a handful of test cases released in GPT-4’s documentation/report. What other thousand cases are there to consider? And who’s to say that the folks at OpenAI are appropriately re-training their AI to make the “right” decisions? Will there be any kind of governance around this? Who gets to say what’s “right” or not?

Overall, the tone of Mikko’s presentation was ominous, even fearful. He himself confessed that he dislikes the idea that in the near future AI will be better than humans at far too many things, even things like writing poems. On a brighter note, Mikko drew parallels between the present AI revolution and the internet revolution of the early 1990s. Both were projected to entirely change the way we live, talk, and act, and yet, from his point of view, the internet revolution brought about far more good than harm. Amidst internet scams, cyber bullying, and lost jobs, the internet could still be deemed a step forward for humanity. Mikko has the same hope for AI: as dangerous as it can be, he thinks now is a pivotal age to develop AI in the “right” way, so that when the AI revolution plays itself out, humanity ends up in a better place.

3. Matt Berry: “Phishing-as-a-service is now a thing”

I attended a few sessions around the topic of hacking, phishing, red team/blue team, etc. I don’t usually come across these areas in my day-to-day as a backend dev, so it was nice to be more informed and up to date on what threats and strategies are out there. I’ve chosen just one of the sessions to share here, though there were quite a few others which were of a similar nature.

Matt began by sharing the reality that phishing has evolved to the point where it is now marketable as a service. Here is a screenshot of a Phishing-as-a-Service (PHaaS) offering called Caffeine that specifically targets Office365. Note that the layout looks awfully similar to any other SaaS offering on the market: it has multiple payment tiers with layered benefits, including “Unlimited Support”! All you need is an email address to sign up!

PHaaS – Caffeine

Another PHaaS called EvilProxy even came with its own demo video. The video shows off the GUI of the phishing tool, the configurations it supports, and a live demo of visiting a generated link that impersonates the standard Google login page, MFA included! It even boasts bot-detection which redirects the user to the legit Google login if the website recognises the user as a bot.

Best of all, another PHaaS called W3LL Panel even includes a 10% bonus cut for any referrals you make!

PHaaS – W3LL Panel

It was funny yet eerie to see such familiar features all wrapped up in criminal tools. Matt walked through the scary capabilities of these tools in more detail, including how they oh-so-deceptively perform a proxied login to steal the victim’s session cookie.

Given the threat, Matt raised several mitigation and prevention strategies. The first, and perhaps already the most widely adopted, is user awareness training. I’ve worked at multiple client sites, many of which conducted discreet internal email phishing campaigns to practically inform and train all staff in recognising phishing scams. It was a bit of a relief to know that the first strategy he mentioned is something many of us are very aware of and have actively participated in to skill up. That being said, ChatGPT and PHaaS are certainly making phishing emails look much more convincing. Matt showed a humorous snippet from a PHaaS Telegram chat room where malicious scammers were upskilling themselves to use ChatGPT to craft better emails.

Given the calibre (or lack thereof) of this scammer, it might bring a funny sigh of relief about the quality of phishing emails we have come to expect, yet this image also signals that soon, gone will be the days when phishing emails are easily identified because they are riddled with typos and poor formatting. Instead, all of us will have to be diligent enough to recognise phishing emails that are much more deceptive. Phishing training campaigns will have to focus on altering staff behaviour: don’t click on any link or attachment that carries a semblance of uncertainty. Organisations should also start blocking typical attack paths, such as blocking HTML attachments entirely (how often do we legitimately send HTML attachments anyway?).
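As an illustration of that last control (my own sketch, not something Matt showed), a mail filter could flag inbound messages carrying HTML attachments using nothing more than Python’s standard email library; the blocked content types and extensions below are assumptions.

```python
# My own sketch of the "block HTML attachments" control, using only Python's
# standard email library; the blocked types and extensions are assumptions.
from email import policy
from email.parser import BytesParser

BLOCKED_TYPES = {"text/html", "application/xhtml+xml"}
BLOCKED_EXTENSIONS = (".html", ".htm", ".shtml")

def has_html_attachment(raw_message: bytes) -> bool:
    """Return True if any attachment in the raw message looks like an HTML document."""
    msg = BytesParser(policy=policy.default).parsebytes(raw_message)
    for part in msg.iter_attachments():
        filename = (part.get_filename() or "").lower()
        if part.get_content_type() in BLOCKED_TYPES or filename.endswith(BLOCKED_EXTENSIONS):
            return True
    return False

# A mail gateway could then quarantine anything where has_html_attachment(...) is True.
```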

Another prevention strategy Matt introduced was a behavioural science approach. Behavioural AI models are now capable and simple enough to detect these threats as anomalies. A behavioural model can be built within an organisation to learn its staff’s “normal” identity, behaviour, language, and intent, and thus filter out a lot of phishing attempts.
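The talk stayed at a high level here, but to give a flavour of the idea, here is a toy sketch (my own, using scikit-learn’s IsolationForest; the features and data are invented for illustration) of learning a baseline of “normal” email traffic and flagging outliers for review.

```python
# Toy sketch of the idea only: learn what "normal" email traffic looks like
# for an organisation, then flag outliers for review. The features below are
# invented for illustration; a real system would use far richer signals.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [hour_sent, links_in_body, external_sender (0/1), has_attachment (0/1)]
historical_emails = np.array([
    [9, 1, 0, 0], [10, 0, 0, 1], [14, 2, 1, 0], [16, 1, 0, 0],
    [11, 0, 1, 1], [15, 1, 0, 0], [13, 2, 1, 0], [10, 1, 0, 0],
])

model = IsolationForest(contamination=0.1, random_state=0).fit(historical_emails)

incoming = np.array([[3, 12, 1, 1]])  # 3am, 12 links, external sender, attachment
verdict = model.predict(incoming)     # -1 = anomaly, 1 = looks normal
print("flag for review" if verdict[0] == -1 else "looks normal")
```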

Wrapping Up

All up, CyberCon 2023 was a treat to attend.

A key takeaway for me personally was the importance of awareness. Specifically:

  • Awareness of the current and future security landscape, especially given the presence and prevalence of all things AI
  • Awareness of evolving cyber threats
  • Awareness of my own role as a developer in the IT industry and what my part is to play in creating a safe cyber space for myself and all around me

The days of developers being able to ignore security or regard it as somebody else’s job are over. Cyber security should now be a key consideration in everything we do, and the first step is to stay aware of what is happening around us in the security landscape.
