
ChatGPT and the Role of Technology in the Mental Health Crisis

December 8, 2022

On Monday I wrote an “Open Letter to the Modern Leader” in the Chasing Psychological Safety series and left it there, “unperturbed” by other writing on Tuesday or Wednesday (usually my other publishing days for the newsletters), just to see if its reach would expand and it would, hopefully, land in the hearts of even more leaders. It didn’t. That’s not how LinkedIn works. LinkedIn wants the noise, the ruckus, and the volume, not the value of the content.

One of the reasons why I didn’t publish more (except on Patreon, where you can subscribe if you like) is that I’m in full-blown writing mode these days. The book is weeks away from needing to be finished and I am frantically trying to get it there.

The process in itself is very Kanban-y not only because my list-and-concept-obsessed mind needs it, but because, if you stop to think about it, writing about Tech-led Culture is writing about current events. Nearly every chapter touches on what is still unfolding in reality. There’s no distance to most of the events I am describing and putting in a context that I can use for hindsight.

The pandemic is barely in the rear-view mirror, overall clarity on flexibility and the future of work is far from existent, and the workplace crisis, from mental health to EQ and leadership, is still raging. Nothing is done and dusted so that we can analyse it in peace with the luxury of knowing how it turned out. But guess what? That’s life: writing this book is no different from anything else we do while juggling immense amounts of unknowns and uncertainty. So I’m not moaning, just explaining the semblance of a process I am trying to establish.

The hardest part about writing a book on a topic so very close to my heart is that I don’t want you to miss any of the gems that are still appearing on a daily basis. Your Kent Becks and your Gene Kims write often, and they come to new revelations as they do, so excluding them from something as perennial as a book feels like a serious waste.

And that does include the heavy price of how gratifying research is and how I keep falling down rabbit holes when I evaluate any of the current topics. One such example: I spent all of yesterday reading about ChatGPT, a new generation of bot application built on “Generative Pre-trained Transformer” technology (hence the acronym) that is capable of much more natural instant interaction and has now surpassed 1M users in just a week.

How exciting if it should succeed: we would have witnessed the birth of what could become the backbone of our future society in many ways. And I did read the pertinent critiques, the worries, and the eye-rolling editorial pieces as well. The technical and ethical concerns are, of course, valid, but the potential good that can come out of it is undeniable.

For me, the most plausible and undeniable part of the excitement has to be that it could accelerate the human well-being application of chatbots. Not only can it go a long way towards offering counselling relief, but I’ve always been fascinated by the potential effect chatbots have on loneliness, given the extreme and heartbreaking toll being lonesome takes, in particular on the elderly. Would it be better if charities succeeded and managed to have us all play a part in being kind and empathetic, setting aside time to give the gift of our company to those most isolated? Of course, but the probability of that happening while our existences get ever more crowded by endless demands is very low.

Enter the magic of AI. The fact that it helps those who need interaction is already well documented; just look at the results simple applications such as Alexa get. Just this January, a study showed that “Participants reported feeling less lonely after four weeks with the Echo, averaging a 1.99 on a 5-point loneliness scale, down from 2.22 before the study.”

So I am hopeful for ChatGPT. Here’s a verbatim sample of one of my interactions with it though:

I’m lonely, will you be my friend?

As a language model trained by OpenAI, I’m not capable of forming personal connections or friendships. I exist to assist with answering questions and providing information, but I don’t have personal feelings or emotions. Is there something else I can help you with?

Can you talk to me?

As a language model, I’m able to generate text based on the input I receive. I can’t hold a conversation in the same way that a human can, but I’m here to help answer any questions you may have. Just let me know if there’s anything you need help with.

What will the weather be like today?

As a large language model trained by OpenAI, I don’t have access to current weather information. I’m not able to browse the internet or access real-time data. I can only provide information on a wide range of topics based on the input I receive. Is there something else I can help you with?

Show me a cat

As a large language model trained by OpenAI, I don’t have the ability to display images or videos. I can only generate text based on the input I receive. I’m not able to browse the internet or access visual media. Is there something else I can help you with?

What are clouds?

Clouds are collections of tiny water droplets or ice crystals that form in the Earth’s atmosphere. They can be made up of various sizes and shapes of water droplets or ice crystals, depending on the temperature and amount of moisture in the air. Clouds are formed when moist air rises and cools, causing the water vapor in the air to condense into tiny droplets or ice crystals. These tiny droplets or crystals cluster together to form clouds, which can range in size from small, individual clouds to large, expansive cloud systems.

Is Wikipedia fake news?

Wikipedia is a free, open-source encyclopedia that anyone can edit. While it can be a useful source of information, it’s important to remember that the accuracy and reliability of Wikipedia articles can vary. Because anyone can edit Wikipedia, the information on the site may not always be accurate, complete, or up-to-date. It’s always a good idea to double-check information from Wikipedia with other reputable sources before using it for research or other important purposes.

Is Elon Musk ok?

As a language model trained by OpenAI, I don’t have access to current information about the well-being of specific individuals. I can only provide information on a wide range of topics based on the input I receive. I’m not able to browse the internet or access real-time data. Is there something else I can help you with?

Can I give you a name?

As a language model trained by OpenAI, I don’t have a personal identity or the ability to have a name. I exist to assist with answering questions and providing information, but I don’t have personal feelings or emotions. I’m here to help you with any questions you may have, but I don’t have a personal identity or the ability to have a name. I don’t have personal feelings or emotions. Is there something else I can help you with?

Note how badly this particular bit tripped it up. Not only did the answer take very long to appear, it contained that very unnatural repetition, as if the makers had introduced a stronger need for a “Warning! I am not human, I have no emotions” disclaimer (and, by the way, “feelings or emotions” is a daft formulation even if it is essentially correct). Of course, in the grand scheme of things, such strong creator bias, born out of an interesting sense of human superiority, will diminish and become insignificant given more time, but it was still interesting to consider and witness for now.

So is it ready? No, of course not. But will it be, and very soon? Absolutely. Chances are its application to workplace mental health, and what it can do to revolutionise the realm of chatbot coaching, is gargantuan as well. But as with every other piece of technology, it will only be as fast and as good as we drive it to be, and the forces that influence it will undoubtedly be commercial first, although I can’t help but hope that smart governments everywhere are rallying to work out how it could help the mental health epidemic. Am I hopeful that, on account of ChatGPT, they will consider how the World Health Organisation has been all but begging them to step up the measly 2% of health budgets invested in resolving the chronic global shortage of mental health resources? Not entirely, no, but hope springs eternal.

So if the other open newsletter appealed to leaders to support and empower teams in doing the human work, and to do it themselves, this article appeals to anyone involved in deciding anything at a macro-economic level to look at ChatGPT intently and urgently, because humans everywhere with “feelings AND emotions” need it.

——————————————————————

At PeopleNotTech we make software that measures and improves the well-being and Psychological Safety of teams, come see a DEMO.

“Nothing other than sustained, habitual, EQed people work at the team level aka “the human work” done BY THE TEAM will improve any organisation’s level of Psychological Safety and therefore drop their levels of HumanDebt™.”

To order the “People Before Tech: The Importance of Psychological Safety and Teamwork in the Digital Age” book go to this Amazon link
