Uncovering the True Value of GenAI Coding Assistants

Anyone who has used GenAI coding assistants knows they can be extremely helpful in some scenarios, but offer only incremental improvements in others. In my consulting role as GM of Engineering at Shine, I’ve worked with clients across various industries who are trying to cut through the hype and better understand where these tools can genuinely deliver value.

In this blog I share a simple framework to explain where these tools excel, and where their limitations lie. It’s a helpful lens for thinking about how these tools might impact your engineering organisation today, and what their future potential might be.

Game changer, or incremental improvement?

Over the past year, I’ve grappled with discrepancies between my personal and professional experiences with GenAI coding assistants.

On one hand, using a GenAI assistant for a side project was a game changer for me. As someone who started their career as a software engineer but now spends most of their time ‘off tools’, I quickly built a niche tool that would otherwise have been impossible to fit into my spare time.

On the other hand, many senior engineers I respect and trust have described these tools as useful in some instances but far from transformative—more of an incremental improvement than a paradigm shift. Despite my enthusiasm, adoption within my professional environment has been tepid to date.

What explains this difference?

Two categories of problems: “How” vs. “What”

Broadly, the challenges software engineers face can be grouped into two categories:

  1. “How” problems
    These involve well-defined goals where the functionality and architectural solution are clear, but the path to implementing the code is unknown.
  2. “What” problems
    The challenge here lies in defining the desired functionality or solution. While the actual coding is straightforward, determining what needs to be done can be complex.

GenAI’s usefulness differs significantly between these two types of problems.

Where GenAI shines: “How” problems

GenAI coding assistants excel at “how” problems, where the engineer knows what needs to be done but not how to code it. Examples include:

  • Working with unfamiliar technologies, frameworks, or languages.
  • New integrations, patches, or upgrades.
  • Junior engineers tackling new challenges.
  • Experienced engineers stepping into unfamiliar territory (like a side project).

GenAI helps streamline coding tasks in these scenarios, often making the difference between a slow struggle and rapid success.

Break this type of work down into its individual steps and GenAI helps significantly with every one of them, right through to actually writing the code.

Where GenAI struggles: “What” problems

In contrast, “what” problems are less about coding and more about decision-making:

  • Defining architectural solutions or complex functional behaviour.
  • Interpreting incomplete product specs or handling undocumented edge cases.
  • Coordinating across teams or resolving ambiguities in mission-critical systems.

For these tasks, GenAI offers little support beyond writing simple code snippets. The real work lies in context-specific problem-solving, which these tools are not equipped to handle.

Break this type of work down in the same way and GenAI only helps with the ‘actually writing the code’ step; the remaining steps are left entirely to the engineer.

Most engineers encounter more “What” problems

Counterintuitively, most professional engineers, and senior engineers especially, spend the bulk of their time on “what” problems. Over time they become deeply familiar with their codebase, frameworks, and tech stack, so “how” tasks become trivial.

Even when business requirements are clearly defined, engineers must make numerous “what” decisions: what tech stack to use, what code patterns to follow, and what tradeoff to strike between short-term value and long-term maintainability. These decisions often require nuanced judgment and context that current tools cannot provide effectively.

Custom models may be able to help engineers with “What” problems in specific scenarios

Off-the-shelf GenAI assistants are general-purpose: they are limited to the information broadly available on the internet, plus the small amount of context you provide them. To tackle “what” problems more effectively, companies could explore customised Retrieval-Augmented Generation (RAG) systems such as the one outlined in Sonya Zhao’s recent blog. These systems combine structured, domain-specific context with fine-tuned models to address challenges that generic tools cannot.
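To make the pattern concrete, here’s a minimal sketch of the RAG idea in Python. It’s illustrative only: embed() and generate() are placeholder stand-ins for whatever embedding model and fine-tuned LLM your platform provides, and the indexed documents are hypothetical.

```python
import math

def embed(text: str) -> list[float]:
    # Placeholder embedding: in a real system, call your embedding model.
    # This toy version hashes character trigrams into a fixed-size vector.
    vec = [0.0] * 64
    for i in range(len(text) - 2):
        vec[hash(text[i:i + 3]) % 64] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already normalised, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

def generate(prompt: str) -> str:
    # Placeholder for a call to your fine-tuned model’s API.
    return f"[model response grounded in:]\n{prompt}"

# 1. Index internal context the public internet never sees:
#    ADRs, runbooks, design docs, undocumented edge cases (all hypothetical).
documents = [
    "ADR-012: we chose Postgres over DynamoDB because ...",
    "Runbook: the billing service must never retry non-idempotent calls.",
    "Design doc: the orders API tolerates at most 200ms of added latency.",
]
index = [(doc, embed(doc)) for doc in documents]

# 2. Retrieve the documents most relevant to the question.
def retrieve(question: str, k: int = 2) -> list[str]:
    q = embed(question)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

# 3. Augment the prompt with that context before asking the model.
def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    return generate(f"Context:\n{context}\n\nQuestion: {question}")

print(answer("Can the checkout flow retry failed billing calls?"))
```

The retrieval step is what makes the difference: instead of guessing from generic internet knowledge, the model answers with your organisation’s own decisions and constraints sitting in its context window.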

Conclusion

Generative AI coding assistants have found their niche in solving “How” problems, offering significant productivity boosts where coding expertise is the primary barrier. However, their utility diminishes for “What” problems, which demand a deep understanding of context, architecture, and functional requirements.

For most senior engineers, whose work skews toward “what” problems, GenAI is more of a helpful assistant than a transformative force. Recognising this distinction can help organisations set realistic expectations and identify areas where GenAI can add value. While its role in solving “what” problems remains limited, customised RAG systems and broader advances in AI could eventually widen its impact.

In the spirit of fostering innovation and knowledge sharing in this space, I’d love to hear your thoughts, feedback, or examples of how you’re approaching GenAI tooling for software engineers. Does this framework resonate with your experiences?

Follow us on LinkedIn for more insights on all things software engineering!
