AI: An Uneasy Alliance

“People keep telling us ‘life is not about the destination, it’s about the journey’. But when we think about AI we only think about the destination, and its remarkable ability to write the book, paint the painting, solve the problem. But we forget the importance of doing the work yourself. And I think in our modern day and age, we have underrepresented the value of struggle.”

- Simon Sinek, on The Diary Of A CEO


In recent months, whenever I sit down to write the newsletter, I get a cheeky flash of temptation. What if I just use ChatGPT…? Everyone seems to be using it, and I can’t shy away from that. Not to mention it can certainly churn out a blog post in mere seconds - albeit with clunky wording - and no one would be the wiser, right? I feel ashamed to say that the pull to use it is very strong indeed. Not only can it outmatch me in productivity and efficiency on any given day, it is - by its nature - soon going to be smarter than me or any human (if it isn’t already). It is both amazing, and terrifying.

Now I admit, I have used ChatGPT before, to help pull together resources and to help clarify my thinking on a topic. It truly is a marvel that a literature search, for example, which would usually take me hours and hours, can be summarised for me in the blink of an eye. Hot damn! As a researcher and psychologist though, I pride myself on being able to engage critically with the world, and a vital part of the critical thinking process is the hunt for information. It’s the sifting through of papers, the scribbling of notes, the conversations with others. Chatbots eliminate this step, diminishing our analytical thinking, and leaving us poorer for it.

For younger generations, my concern is even greater. I was fortunate to grow up in an era where critical thinking was drilled into you - especially in higher education. As a result I am able to navigate the emergence of AI with more caution than children, teenagers, and adults who were never given the opportunity (or the access) to develop deep levels of discernment. And I am keenly aware that I have the privilege to deploy these skills to question whether I should even use AI at all. Today, these critical thinking skills are more important than ever - and are teachable. If we want to solve the challenges of our time, we need individuals who can engage in nuanced thought, who can hold two ideas at once, who can have informed discussions and debates. Yet these skills are being attacked on all fronts: from widely dispersed fake news and conspiracy theories, to online social worlds full of distractions, and now to technology that literally does our thinking for us.


As AI becomes increasingly sophisticated, more and more individuals - from students, to everyday workers - are turning to it to help them generate ideas, write essays, and produce work outputs. In fact, a study by the Higher Education Policy Institute (2025) found that as many as 88% of UK university students use ChatGPT to write their assignments. To better understand the complex relationships students are building with ChatGPT, a recent Guardian article by writer Jeremy Ettinghausen is particularly illuminating. Ettinghausen analysed the ChatGPT logs of three undergraduates - reviewing 12,000 questions over a period of 18 months. Whilst a large proportion of queries were work-related, the three students used ChatGPT for all manner of questions, on a breadth of topics. It seems that, as Ettinghausen noted, “once ChatGPT has proven helpful in one aspect of life, it quickly becomes a go-to for other needs and challenges.” For example, one student used ChatGPT primarily as their therapist, asking for mental health support and dating advice. Another used it for health and nutrition advice for an upcoming sporting event. In the past, Google was the fount of all online knowledge, but it still required individuals to explore, sift through, and curate the options provided. In this new era of AI, individuals need to do very little to get a fully formed piece of work ready for submission.

As Ettinghausen discovered, more and more young people are using AI to access mental health support. Now I am sure that there can be real value in having some form of AI chatbot that can offer additional comfort or support to those struggling with mental health crises, particularly when many of us rely on health-care systems that are overburdened. And recent evidence would add weight to this claim. In a study by Pareek and Jain (2025), students were exposed to six conversation excerpts (three AI-generated, three human-generated), before rating the therapeutic conversations for empathy, professionalism, and factual correctness. The authors found that when the students were unaware of whether they were speaking with a human or a chatbot, they rated the chatbots as more empathetic, more professional, and more factually correct than a real human therapist. Despite some promise, expectations around the AI therapist-client relationship are murky at best, and could result in harm (Khawaja & Bélisle-Pipon, 2023). As such, significant ethical risks abound.

Outside of therapy, we are also seeing an alarming uptick in individuals using AI to fulfil the basic need for connection. In a spooky foreshadowing of the 2013 Spike Jonze film ‘Her’ (incidentally set in the year 2025), individuals are forming deeply personal relationships with their chatbots, and even marrying them. We are in uncharted territory here, where the legal, ethical and moral ramifications are unknown. This trend tells us perhaps all we need to know about the world today: we tread the line between a hyper-connected online fantasy life, and a disconnected offline life. When lonely or isolated individuals are resorting to finding love and companionship through AI, as a society, we have failed them. We have failed to give them the love and community they deserve, a place to feel safe and accepted.


It seems that AI usage is something that none of us can avoid for long, and recent conversations with friends have highlighted this very concern, as more and more organisations are pushing their employees to become AI-capable. For fear of being left behind, companies are charging headlong into the brave new AI world, one with limited regulations or boundaries. Not surprisingly, employees feel a sense of urgency to catch up themselves - to learn to build AI agents so they can be perceived as more productive and more efficient. And these concerns are not for nothing; if companies and industries do not pivot and embrace AI, they might very well fail as a consequence.

The key challenge then, is not necessarily AI usage in-and-of-itself, but rather the lack of deep thinking around how to use it, when to use it, and for what. As more of us become AI literate, it becomes increasingly urgent that as a society we consider two questions. Firstly, what does AI mean for the nature and future of ‘work’ as we know it? According to the AI 2027 report, in the most positive scenario, AI could transform the structure of our society, creating a fully automated economy and freeing up time for us all to pursue more meaningful and human goals (subject to a complete overhaul of society in favour of more socialist governance). This might look like dedicating more time to being creative, deepening personal relationships, and building equitable and sustainable communities. In this trajectory we are looking at something akin to a utopia. On the flip side, in the most negative of scenarios - given a lack of regulation and a failure on our part to correct for AI’s ability to lie - we might be looking at a real existential catastrophe, where we are manipulated by entities whose intelligence far outweighs ours.

Secondly - alongside the need to reflect on the nature of work - we must recast the boundaries of what it means to be human. We must consider where the essence of humanity lies. Perhaps it is found in pursuits that imbue our lives with meaning: such as art, music, community and culture. Perhaps it’s in the small moments that culminate in expressions of joy, love, or sadness. Because no matter how advanced it is, AI’s imitations of these unique human experiences are just that - imitations. As such, they are utterly meaningless. AI art cannot tell me anything about the human experience. AI’s description of how an emotion should feel is empty. It cannot generate meaning, it cannot experience empathy, it will never live with the knowledge that its time in this world is finite, and precious.

Because this is what it means to be human.


References

Freeman, J. (2025). Student Generative AI Survey 2025. Higher Education Policy Institute. Retrieved 4 August 2025, from https://www.hepi.ac.uk/2025/02/26/student-generative-ai-survey-2025/

Khawaja, Z., & Bélisle-Pipon, J. C. (2023). Your robot therapist is not your therapist: understanding the role of AI-powered mental health chatbots. Frontiers in Digital Health, 5, 1278186.

Kokotajlo, D., Alexander, S., Larsen, T., Lifland, E., & Dean, R. (2025). AI 2027. Retrieved 4 August 2025, from https://ai-2027.com/

Pareek, S., & Jain, G. (2025). Perceived Authorship and Conversational Evaluations: A Study on AI-Generated vs. Human Therapist Dialogue. Journal of Technology in Behavioral Science, 1-15.
