We are gen Z – and AI is our future. Will that be good or bad?

Sumaiya Motara, Rukanah Mogra, Frances Briggs, Saranka Maheswaran, Iman Khan and Nimrah Tariq



What’s fact, what’s fiction? Will we know?

Sumaiya Motara

Freelance journalist based in Preston, where she works in broadcasting and local democracy reporting

An older family member recently showed me a video on Facebook. I pressed play and saw Donald Trump accusing India of violating the ceasefire agreement with Pakistan. If it weren’t so out of character, I would have been fooled too. After cross-referencing the video with news sources, it became clear to me that the clip was an AI-generated fake. I explained this but my family member refused to believe me, insisting that it was real because it looked real. If I hadn’t been there to dissuade them, they would have forwarded it to 30 people.

On another occasion, a video surfaced on my TikTok homepage. It showed male migrants climbing off a boat, vlogging their arrival in the UK. “This dangerous journey, we survived it,” says one. “Now to the five-star Marriott hotel.” This video racked up almost 380,000 views in one month. The 22 videos posted from 9 to 13 June on this account, named migrantvlog, showed these men thanking Labour for “free” buffets, feeling “blessed” after being given £2,000 e-bikes for Deliveroo deliveries and burning the union flag.

Even when a man’s arm didn’t disappear midway through a video, or a plate didn’t vanish into thin air, I could tell the content was AI-generated because of the blurred background and strange, simulation-like characters. But could the thousands of other people watching? Unfortunately, it seemed not many of them could. Racist and anti-immigration posts dominated the comment section.

I worry about this blurring of fact and fiction, and I see this unchecked capability of AI as incredibly dangerous. The Online Safety Act focuses on state-sponsored disinformation. But what happens when ordinary people spread videos like wildfire, believing them to be true? Last summer’s riots were fuelled by inflammatory AI visuals, with only sources such as Full Fact working to cut through the noise. I fear for less media-literate people who succumb to AI-generated falsehoods, and the heat this adds to the pan.


AI can help tell great stories – but who controls the narrative?

Rukanah Mogra

Leicester-based journalist working in sports media and digital communications with Harborough Town FC

The first time I dared use AI in my work, it was to help with a match report. I was on a tight deadline, tired, and my opening paragraph wasn’t working. I fed some notes into an AI tool, and surprisingly it suggested a headline and intro that actually clicked. It saved me time and got me unstuck – a relief when the clock was ticking.

But AI isn’t a magic wand. It can clean up clunky sentences and help cut down wordiness but it can’t chase sources, capture atmosphere or know when a story needs to shift direction. Those instinctive calls are still up to me.

What’s made AI especially useful is that it feels like a judgment-free editor. As a young freelance journalist, I don’t always have access to regular editorial support. Sharing an early draft with a real-life editor can feel exposing, especially when you’re still finding your voice. But ChatGPT doesn’t judge. It lets me experiment, refine awkward phrasing and build confidence before I hit send.

That said, I’m cautious. In journalism it’s easy to lean on tools that promise speed. But if AI starts shaping how stories are told – or worse, which stories are told – we risk losing the creativity, challenge and friction that make reporting meaningful. For now AI is an assistant. But it’s still up to us to set the direction.

Author’s note: I wrote the initial draft for the above piece myself, drawing on real experiences and my personal views. Then I used ChatGPT to help tighten the flow, suggest clearer phrasing and polish the style. I prompted the AI with requests such as: “Rewrite this in a natural, eloquent Guardian-style voice.” While AI gave me useful suggestions and saved time, the core ideas, voice and structure remain mine.


Does our environment pay the price of AI?

Frances Briggs

Manchester-based science website editor

AI is powerful. It’s an impressive technological advancement and I’d be burying my head in the sand if I believed otherwise. But I’m worried. I’m worried my job won’t exist in five years and I’m worried about its environmental impact.

Attempting to understand the actual impact of AI is difficult; the key players keep their statistics close to their chests. What I can see is that things are pretty bad. A recent research paper has spat out some ugly numbers. (It joins other papers that tell a similar story.) The team considered just one case study: OpenAI’s GPT-4o, the model behind ChatGPT. Its annual energy consumption is about the same as that of 35,000 residential households – approximately 450 GWh. Or 325 universities. Or 50 US inpatient hospitals.
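For a sense of scale, here is a rough back-of-envelope check of that household comparison; the per-household figure below is an assumed US average, not a number taken from the paper.

```python
# Back-of-envelope check of the 35,000-household comparison above.
# Assumption: an average US household uses roughly 12,900 kWh of electricity a year.
households = 35_000
kwh_per_household = 12_900  # assumed annual average, not from the cited paper

total_kwh = households * kwh_per_household
print(f"~{total_kwh / 1e6:.0f} GWh per year")  # prints "~452 GWh per year"
```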

That’s not all. There’s also the cooling of these supercomputers’ super-processors. Social media is swarming with terrifying numbers about the data-processing centres that power AI, and they’re not far off. It takes approximately 2,500 Olympic-sized swimming pools of water to cool GPT-4o’s processing units, according to the latest estimates.
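As a quick sketch of what that volume means (the pool size below is the standard Olympic dimension, an assumption rather than part of the estimate itself):

```python
# Converting the cooling-water figure above into litres.
# Assumption: an Olympic-sized pool holds about 2,500 cubic metres, i.e. 2.5m litres.
pools = 2_500
litres_per_pool = 2_500_000

total_litres = pools * litres_per_pool
print(f"~{total_litres / 1e9:.2f} billion litres")  # prints "~6.25 billion litres"
```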

AI agents such as the free products Perplexity or Claude don’t actually seem to be consuming that much electricity; the total energy consumed by AI each year is still, at most, less than 1% of global electricity use. But at the same time, data-processing centres in Ireland consumed 22% of the total electricity used by the whole country last year – more than urban housing. For context, there are about 80 data-processing centres in Ireland, and more than 6,000 in the US alone. With the almost exponential uptake of AI since 2018, these numbers are likely to look completely different within a year.

In spite of all these scary statistics, I have to hope that things are not as worrying as they seem. Researchers are already working to meet demand, exploring more efficient, economical processing units built from nanoscale materials and more. And when you compare the first large language models from seven years ago with those created today, they have iterated well beyond their early inefficiencies. Energy-hungry processing centres will get less greedy – experts are just trying to figure out how.


If AI is the matchmaker, will I know who I’m dating?

Saranka Maheswaran

London-based student who pursues journalism alongside her studies

“You need to get out there, meet lots of people, and date, date, date!” is the cliche I hear most often when speaking to people about being in my 20s. After a few questionable dates and lots of juicy gossip sessions with friends, a new fear emerged. What if they’re using AI to message me?

Overly formal responses, or conversation starters that sounded just a bit too perfect, were what first made me question messages I’d received. I am not completely against AI, and don’t think opposing it entirely is going to stop its development. But I do fear for our ability to make genuine connections with people.

Pre-existing insecurities about how you speak, write or present yourself make a generation with AI to hand easy prey. It may begin with a simple prompt, asking ChatGPT to make a message sound more friendly, but it can also grow into a menacing relationship in which you become reliant on the technology and lose confidence in your own voice. The 2025 iteration of the annual Match.com Singles in America study, produced in collaboration with the Kinsey Institute at Indiana University, found that one in four singles in the US have used AI in dating.

Perhaps I am overly cynical. But those who are not so sure how their personalities come across when dating, or how they may be perceived in a message, should have faith that if it is meant to be, it will be – and remember that if AI has a little too much say in how you communicate, you may just lose yourself.


I can see humans and AI learning together

Iman Khan

Final-year student at the University of Cambridge, specialising in social anthropology

The advancement of AI in education has made me question the idea of any claimed impartiality or neutrality of knowledge. The age of AI brings with it the need to scrutinise any information that comes our way.

This is truer than ever in our universities, where teaching and learning are increasingly assisted by AI. We cannot now isolate AI from education, but we must be ready to scrutinise the mechanisms and narratives that underpin the technology itself and shape its use.

One of my first encounters with AI in education was a request to ChatGPT to suggest reading resources for my course. I had assumed that the tool would play the role of an advanced search engine. But I quickly saw how ChatGPT’s tendency to hallucinate – to present false or misleading information as fact – makes it both a producer and disseminator of information, true or false.

I originally saw this as only a small barrier to the great possibilities of AI, not least because I knew it would improve over time. However, it has also become increasingly clear to me that ChatGPT, Gemini and other AI chatbots contribute to the spread of false information.

AI has rendered the relationship between humans and technology precarious. There is research to be done on the potential implications of AI for all the social sciences. We need to investigate how it is integrated into how we learn and how we live. I’d like to be involved in researching how we adapt to AI’s role as not only a tool but as an active and contributing participant in society.

AI can’t replace creativity in architecture – but it can enhance it

Nimrah Tariq

London-based graduate specialising in architecture

In my first years at university, we were discouraged from using AI for our architecture essays and models beyond proofreading our work. In my final year, however, it became a much bigger part of our process for rendering and enhancing design work.

Our studio tutor gave us a mini-seminar on how to write AI prompts, so that we had detailed descriptions to feed into architectural tools such as Visoid. This allowed us to take any models or drawings we had created and ask the AI for a concept design that suited our proposal. It gave my original ideas more complexity and a wide range of designs to play around with. It was useful during the conceptual phase of our work, but if the prompts were not accurate the AI would fail to deliver, so we learned to be more strategic. I used it mainly after rendering my work, as a final touch to create seamless final images.

During my first and second year, AI didn’t have much impact on the design process of my work; I mainly used existing buildings for design inspiration. In my final year, however, AI introduced new forms of innovation, accelerating the speed at which we could push the boundaries of our work. It also made the creative process more experimental, opening up a new way of designing and visualising.

Now I have finished my degree, I’m intrigued to see how much more architecture can grow through using AI. Initially, I believed AI wasn’t the most creative way to design; now, I see it as a tool to improve our designs. It cannot replace human creativity, but it can enhance it.

Architectural practices now ask job applicants for skills in software that uses AI, and you can already see how it is being incorporated into designs and projects. It has always been important to keep up to date with the latest technological advancements in architecture – and AI has reaffirmed this.

