The real threat isn’t AI—it’s the discourse surrounding it.


For a while now, I’ve struggled to hold uncomfortable feelings about AI in tension with the reality of AI within our industry and world today. Thanks to a post by Steve, I feel compelled to try and articulate my current stance on the subject.

Do not fear: this isn’t a hostile take-down of why you’re not smart enough to see the value in AI. Instead, I’ll try to provide balanced reasoning on how we can improve the state of conversation around it.

tl;dr:

It’s possible to believe AI is one of the most transformative tools of our time and that most of what’s being sold as AI is smoke and mirrors, or worse, complete bollocks. Both can be true.

A familiar narrative has formed around AI in our industry, one that continues to divide creators in our space. Like Steve, I find myself disappointed, not in AI itself, but in how we talk about it. The snake oil is easy to spot if you’re close to the tech, but harder for the average person wading through a timeline of grifters on LinkedIn.

AI is now a well-established resident in the tower block of technology, and it will leave a lasting legacy on software engineering for years to come. As always, there is a disconnect between the marketing hype and the technical reality that we as technologists are faced with.

Signal-to-noise ratio

One of the things that detracts from good discourse on AI is the reflexive urge many companies have felt to inject AI into every part of their product. Functionality that existed before the rise of AI re-emerges as new functionality, now with 100% more AI. It’s a cunning move by marketeers to demonstrate speed-to-market on meaningful functionality, but sadly one that is fooling no one.

Dunking on people or companies promoting AI may occasionally have merit, particularly when there is clear exploitation or bad-actor behaviour, but it’s important that we don’t lose track of potentially relevant changes going on behind the scenes.

There are plenty of skilled teams actively trying to build meaningful products that solve real-world problems, but it’s hard to focus solely on this when aspects of our industry are also being threatened. There is very much a feeling that anyone trying to do anything with AI has bad intentions, and that’s just not true. There are definitely bad actors (and bad organisations) to blame for misuse, though, and it’s they, rather than individual contributors, who should be on the receiving end of the criticism.

There is also the hugely important debate about creativity in our industry, and how much pride and enjoyment comes from the act of creating, versus simply shipping lines of code (which, in case you’re new here, is never a good metric of productivity or value).

Engaging productively

You can of course continue to ignore AI wholesale, and I wouldn’t blame you for that. That approach may, however, leave you open to blind spots as the future of our ecosystem takes shape. Whether you like it or not, it has already infiltrated our toolbox, showing up in pretty much every software engineering tool there is to “help” you ship code. As always with tool diversification, there is a learning curve in figuring out what is actually a useful augmentation of your creative process versus a complete blocker to getting things done.

So how can we think and talk differently about what’s going on with AI?

A good start is to become more informed about what’s going on. Challenge nonsense when you see it, albeit privately at first. Using your own research to interrogate claims that seem too good to be true, or that present something as magical, can only strengthen your stance.

Celebrate those in the community working towards a sustainable and effective future for AI. There are teams of people out there trying to make AI greener and more ethical than it is today, and those people need your support. Pushing back against harmful strategies by bad actors is hard work. Just as we’re happy to support open source projects that enrich how we build things, it’s important we support progressive AI work, too. Without those people, we’ll be left with those who want to benefit financially from AI’s existence, as well as from its ability to exploit people and society as a whole.

Don’t judge people who are not jumping on the hype. There are a whole host of reasons why people may not be including AI in their process or product today. Maybe the value of adding such a tool is currently unclear, and there’s work to be done to validate that before taking that step. It’s always hard to remove functionality that isn’t valuable or struggles to gain traction, so a considered approach is often sensible. Equally, the mental load of trying to continually learn something that is fast evolving can be overwhelming in times of extreme busyness.

Also, there are ethical, compliance and technical reasons to carefully consider adding AI to your stack. At Genio, we build tools to support learners and unlock better learning. A huge portion of those learners have a registered disability or neurodiversity, and with this we carry the huge responsibility to do what is right for them. We never want to be in a position where we’re optimising for speed-to-market over introducing things that genuinely support that mission.

Finally, try it. It’s amazing how many people I’ve spoken to who are sceptical about AI, only to admit they’ve never used any of it and have formed an opinion based solely on things they’ve read on social media. Share your honest experiences of trying it. Found it sucked? Great! What sucked? Why? What can we learn from this?

As a stand-alone entity, AI isn’t a miracle or evil. It’s a tool. And like any tool, its impact will depend on how thoughtfully we use it, and how honestly we talk about it.

The invisible impact

I believe AI can be (and is) transformative. But right now, we’re shelving difficult problems that need solving. We’re burning compute, ignoring the human impact, and calling it progress.

We don’t get the benefits of AI for free. Every “wow” moment comes with a cost, whether energy, carbon, or cognitive overhead, and that cost gets overlooked far too frequently.

There is a very serious environmental reality to AI that needs to feature in your analysis of tooling. Massive data centre energy use, water consumption for cooling, and the carbon footprint of training and fine-tuning models have fast become accepted downsides of interacting with AI.

Then there’s the threat to actual learning. If Google put the world at your fingertips, AI makes you believe you were an expert in a topic all along. The UX of chat interfaces, combined with convincing responses, can create a sense of overconfidence that detracts from your willingness to learn. This is, of course, by design, and you should look at more diverse tooling options that try to solve similar problems in ways that don’t detract from the true discovery and absorption of knowledge.

Friction is important in learning and in life. We need more friction in the right places, and there is a growing concern that distilling the accrual of knowledge down to how well you can author an AI prompt doesn’t make the process easier; it defers the challenge to later in life, when that knowledge is actually needed. At worst, education could quickly devolve into rewarding students for tool fluency rather than for earning credentials through expert articulation of a specific field of knowledge.

Accessibility also becomes a really big problem. Privileged access to AI tools like ChatGPT intensifies the digital divide. Even where people can access these tools, the knowledge of how to interact with them effectively enough to benefit remains a class-based problem.

But these are not specifically AI problems; they’re system design problems. These issues are design choices, some of which are lazily worked around. Having a hint in the interface that says “always check your answers, this data might be incorrect” doesn’t mean that people will do that. The convenience almost always outweighs the risk of things not being factually correct, and convenience drives usage. None of these trade-offs are inevitable. But they are the result of how we’ve chosen to build, deploy, and monetise AI today.

I am incredibly optimistic, though. Along the same timeline as the diffusion of innovations, these aspects will improve. Hard drives were once the size of buses, and now that form factor is almost invisible. Innovation drives refinement and optimisation. We will reduce the environmental impact of AI just as we’ll understand more about how to harness its ability and better humanity with it, but the road there will be messy and littered with disinformation.

Improving system design

To ensure we contribute meaningfully to the future of AI, we need to understand what “better” looks like. For me, it comes down to this:

The way forward isn’t to stop building AI. It’s to build like the cost matters. Like our systems should be sustainable, equitable, and useful, not just impressive.

Build with integrity

Regardless of your role, whether you’re an engineer building products or in leadership trying to navigate this new normal, it is crucial that you form your own opinion. There is no shame in updating that opinion as new information comes to light. Accepting that opinions can change is human, and that is an important part of personal and professional growth.

Finally, ask questions. Understanding why certain models are good at one thing and not another will uncover interesting architectural insights. It may also help highlight when certain vendors have more skin in the game than others.

We need to make asking “at what cost?” a normal part of product development. AI can be part of a better future, but only if we treat its impact as part of the build.
