The emergence of models like those from OpenAI has democratized the artificial creation of texts, images, and videos in a matter of seconds. While this greatly simplifies the process, it also makes it more difficult to distinguish what is real from what is not. Therefore, it’s important to learn to identify clear signals and understand when that difference truly matters.

How to detect if a text is generated with AI

Distinguishing artificial text from human text is not always obvious. However, generative AI has not yet advanced enough for certain telltale features to disappear:

Common linguistic patterns: excessive order, perfect transitions, neutral tone

A text generated by AI usually sounds flawless; everything flows seamlessly, as if each sentence were precisely placed. Who can achieve something like that on the first try? Yet the transitions feel mechanical, and the tone doesn’t lean in any particular direction. Although the text is technically correct, it conveys no emotion.

Repetition of ideas and overly symmetrical structure

Another typical trait is repetition. AI restates concepts with slight variations and builds paragraphs of almost identical length. This uniform rhythm conveys clarity, but it also reveals the absence of a personal voice. Overuse of gerunds, or of commas setting off direct address, is another clear sign of AI.
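That symmetry can even be measured crudely. Here is a minimal sketch of the idea, not a validated detector; splitting paragraphs on blank lines and the word-count metric are illustrative assumptions:

```python
import statistics

def length_uniformity(text):
    """Coefficient of variation of paragraph lengths (in words).
    Values near zero mean suspiciously even paragraphs, the uniform
    rhythm described above. Illustrative heuristic only."""
    paragraphs = [p.split() for p in text.split("\n\n") if p.strip()]
    lengths = [len(p) for p in paragraphs]
    if len(lengths) < 2:
        return None  # not enough paragraphs to compare
    return statistics.stdev(lengths) / statistics.mean(lengths)
```

A result near zero means the paragraphs are almost identical in length; human writing usually varies more. On its own this proves nothing, but it makes the "uniform rhythm" signal concrete.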

Lack of human error: no typos, no colloquial expressions

Mistakes make us human. Typos and small slips are undesirable, but human writers inevitably let some through. AI, by contrast, tends to avoid regional idioms and the words we use without thinking. The result seems correct, but not quite natural.

Limitations of automatic detectors (and why they fail)

Detectors now exist that estimate how much of a text was AI-generated, but they are far from perfect. They can flag highly polished human-written texts as artificial, or fail to detect AI reliably in hybrid paragraphs.

On the other hand, there are now so many AI models with different styles that these systems rarely achieve the accuracy they advertise.

How to detect if an image or photo is AI-generated

The generation of artificial images is advancing by leaps and bounds, yet there are still details that give it away:

Signs in the details: hands, reflections, shadows, and textures

To find out whether an image was created by AI, look closely at:

  • Hands: they often show strange fingers or impossible proportions.
  • Reflections: a shop window reflects lights that don’t exist, or a pane of glass reflects an impossible angle.
  • Shadows and textures: shadows change direction for no reason, and textures blend together unnaturally.

Inconsistencies in composition, anatomy, and backgrounds

AI often fails precisely where it’s most noticeable: in consistency. For example, you might see a person who looks completely real, and then, upon closer inspection, an ear appears larger or in the wrong place. Something similar happens with backgrounds. Suddenly, floating balconies appear, buildings are duplicated, or nonsensical letters disrupt the scene.

Metadata: when it helps and when it’s useless

Checking the metadata can be helpful if the image comes from a specific mobile phone or camera. However, many AI systems strip metadata or produce files with empty fields, so its absence is not conclusive proof either way.
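Inspecting metadata takes only a few lines. A minimal sketch using the Pillow library (Pillow is an assumption here; any EXIF reader works, and `demo.jpg` is just an example file):

```python
from PIL import Image
from PIL.ExifTags import TAGS

def exif_summary(path):
    """Return human-readable EXIF tags as a dict ({} if the file carries none)."""
    with Image.open(path) as img:
        exif = img.getexif()  # many AI generators emit no EXIF at all
        return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

# Demo: an image created from scratch carries no camera metadata.
Image.new("RGB", (8, 8)).save("demo.jpg")
print(exif_summary("demo.jpg"))  # {} -> absence of EXIF is a hint, not proof
```

Fields like Make, Model, or DateTime point to a real camera; an empty result, as noted above, is only a reason to look closer, not a verdict.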

Current tools for analyzing images (and their limitations)

Platforms like Hive Moderation or AI or Not detect typical generation patterns. They work well with clearly artificial images, but fail when the photo combines real elements with AI or when the model that created it is very recent.

How to detect if a video was made with AI

A prime example of AI use in audiovisual production was the Cruzcampo ad that “resurrected” Lola Flores. It was impactful because it was so well done, even though it still showed subtle clues. Those same clues continue to help distinguish an authentic clip from a digitally generated one.

Visual artifacts: strange movements, plastic skin, unstable eyes

Although AI-generated videos appear clean, they hide clear clues:

  • The movements are too smooth.
  • The skin shines like plastic.
  • The eyes blink infrequently or look towards points that do not fit with the scene.

Audio vs. lips: desynchronization and artificial gesticulation

If the sound doesn’t match the lip movements, something’s wrong. AI tries to mimic gesticulation, but the result still looks artificial. In fact, if you look closely, in many generated videos the mouth movements don’t coincide with the words.

Deepfakes: specific signs to identify them

A Practical Guide to Deepfake Detection offers some tips for locating deepfakes:

  • Micro-flaws in facial expression.
  • Blinking patterns and a gaze that tends to stay fixed on one point.
  • Frame-by-frame analysis, which reveals glitches invisible at playback speed.

Platforms that already incorporate generated content tags

TikTok and YouTube already include labels that flag AI-generated videos. Even so, we still depend on the creator’s willingness to declare it.

When does it matter (a lot) if something is created with AI

In journalism, politics, and sensitive content

In these areas, authorship is everything. The Global Principles for AI Journalism state that any automated content must be clearly identified to protect its accuracy and avoid confusion. A news piece created by AI without this attribution undermines the content’s credibility and poses a clear risk of misinformation.

When identities are impersonated or opinions are manipulated

Here the line becomes clear. Complutense University points out that fake news has skyrocketed with social media and digital manipulation techniques. If an image, video, or text created with AI is used to attribute false words or actions to a person, the damage is real and can spread very quickly.

In academic work or professional tests that assess human skills

The use of AI in the workplace and classroom matters. Many institutions combine plagiarism detection with policies that allow for the transparent use of AI. Furthermore, after the initial chaos that followed its introduction, the trend in universities is to integrate AI as an effective resource. In fact, AI offers many tools to streamline our work.

Conclusion: the key is not detecting AI, but learning to live with it

Ultimately, AI is already part of how we create and consume information, so obsessing over its use solves nothing. At this point, what matters is knowing when it’s worth identifying and when it isn’t. Coexistence between humans and artificial intelligence becomes easier when we use these tools with discernment and transparency.