What Are We Looking At When We Look at AI Art?
Written by Zelalem Gizachew
Tales and Dreams, an exhibition of AI-generated works by Mekbib Tadesse, opens this weekend at Artawi Gallery. This is the thinking behind the curation.
When I first encountered AI-generated images, the feeling wasn't confusion. It was release.
Not because something entirely new had appeared, but because something that was always there, quietly, had become visible. The mechanics of how images come into being. The structure behind what we call creativity.
As curators, our role was not to settle the question of whether this is art. That conversation is already saturated, and often shallow. We were interested in something quieter but more unsettling: creating a space where people could confront what these images reveal about how images—any images—come into being.
The intention was simple. To shift the conversation from judgment to observation. From "Is this real?" to "What am I actually seeing?"
Because what is at stake here is not just a new tool. It is a new visibility into processes that were always hidden.
Over time, I found myself less interested in the question everyone asks—is this art or not?—and more interested in a different one: what does this reveal about how we create, as humans?
To think about this, I kept returning to the work of Margaret Boden, a British cognitive scientist whose work sits at the intersection of artificial intelligence, philosophy, and psychology. She spent decades studying creativity not as inspiration, but as a system—something that can be understood, broken down, and even modeled.
Her argument is both simple and disruptive: creativity is structured. Not only structured—but structured in ways we can describe.
She identifies three forms, though in practice they bleed into each other.
The first is combinational creativity.
This is the most familiar. It is what happens when we take existing ideas and bring them together in new ways.
A photographer blending traditional portraiture with contemporary fashion
A painter combining religious iconography with modern abstraction
Even cultural identity itself—formed through layers of influence, memory, and exposure
AI systems do this constantly. They recombine styles, forms, and references at a scale we cannot match.
The second is exploratory creativity.
Here, the creator is not mixing ideas, but working within a defined space—pushing its boundaries, testing its limits.
A musician working within a specific scale but finding new expressions
A filmmaker exploring variations within a genre
A visual artist working through permutations of light, texture, or composition
This is where AI operates most precisely. A model learns a "space" of possibilities—say, portraits, or landscapes—and then explores it, generating variations that feel new but remain consistent with the underlying structure.
The third is transformational creativity.
This is the rarest form. It does not just explore or combine—it changes the rules themselves.
The shift from figurative painting to abstraction
The invention of perspective in Renaissance art
The emergence of entirely new visual languages
This is where both humans and machines struggle. It requires stepping outside the system, not just working within it.
When you look at AI-generated images, you are mostly seeing the first two at work. Not imagination in the romantic sense, but combination and exploration—executed with precision and speed.
These systems are trained on vast numbers of images. Not to understand them the way we do, but to learn patterns—how shapes relate, how colors cluster, how forms tend to appear together. When they generate an image, they are not expressing an idea. They are navigating a landscape of possibilities they have learned.
At a technical level, many of these systems begin with noise—literally randomness. Imagine a canvas filled with static, like an untuned television: no shapes, no meaning, just scattered pixels. The model has been trained beforehand by taking real images and gradually corrupting them with noise—step by step—until the original image disappears. Through this process, it learns something precise: at each stage of corruption, what the underlying structure likely was.
Generation is the reverse of that process.
Starting from pure noise, the model predicts, at each step, what part of that noise does not belong. It subtracts small amounts of randomness and replaces them with structure—guided by probabilities it learned during training. If you prompt it with "a woman in white surrounded by doves", that text is converted into numerical signals that steer the process, nudging the image toward certain shapes, textures, and arrangements.
This happens iteratively:
The first steps are vague—only large, blurry forms begin to emerge
Then rough composition appears—placement of figures, light vs. dark
Then finer details—edges, textures, facial features
Until the image stabilizes into something recognizable
At no point does the model "see" the full image the way we do. It is constantly making local predictions—pixel by pixel, region by region—about what is statistically consistent with both the noise it started from and the prompt it was given.
In simple terms:
Training teaches the model how images break apart
Generation uses that knowledge to reconstruct how images come together
What looks like creation is, technically, controlled reconstruction.
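The loop described above can be sketched in miniature. This is a toy stand-in, not a real diffusion model: the prediction that a trained neural network would supply at each step is replaced here by a fixed target value, so only the shape of the process survives.

```python
import random

# Toy sketch of the noise-and-reverse loop described above.
# A real system trains a neural network to predict structure at
# each level of corruption; here a fixed target value stands in
# for that learned prediction, so only the loop itself is shown.

random.seed(0)
T = 100          # number of corruption / refinement steps

def corrupt(x0, t):
    """Training side: blend a clean value with noise, step by step."""
    signal = 1.0 - t / T                   # how much of the image survives
    return signal * x0 + (1.0 - signal) * random.gauss(0.0, 1.0)

def refine(x, t, prediction):
    """Generation side: nudge x a little toward the predicted structure."""
    return x + (prediction - x) / (T - t)  # small local correction

target = 0.7                 # the "structure" the model would have learned
print(round(corrupt(target, T - 1), 3))   # by the last step, almost pure noise

x = random.gauss(0.0, 1.0)   # generation starts from pure noise
for t in range(T - 1):
    x = refine(x, t, prediction=target)

print(round(x, 3))           # x has drifted from randomness toward structure
```

In a real model, the correction at each step comes from a network conditioned on the prompt; but the arithmetic of gradually trading randomness for structure is the same.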
But not all prompts are concrete.
If a user writes, "a woman surrounded by doves", the model has clear anchors—objects, forms, compositions it has seen many times. It can map words to visual patterns with relative precision.
But what happens when the prompt is something like: "The loss of home and the illusion of change of times"?
There is no direct image for that.
So the model does something subtle. It breaks the sentence into fragments it can work with—not by understanding meaning, but by mapping associations learned from images and text:
"home" might pull toward interiors, houses, warmth, familiarity
"loss" might correlate with emptiness, distance, absence, decay
"illusion" might lean toward distortion, blur, surreal compositions
"change of times" might connect to aging, contrast, transition, historical textures
These are not interpretations. They are statistical tendencies.
The model then forms a kind of weighted field—some elements pulling toward structure, others toward atmosphere. The prompt becomes a tension between these forces rather than a single clear instruction.
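That weighted field can be sketched as a simple table lookup. Every word and weight below is invented for illustration; a real model learns these tendencies from vast amounts of image–text data rather than from a hand-written dictionary.

```python
# Hypothetical association table: all words and weights here are
# invented for illustration. A real model learns such tendencies
# from image-text data; nothing below comes from an actual system.
ASSOCIATIONS = {
    "home":     {"interior": 0.8, "warmth": 0.6, "shelter": 0.5},
    "loss":     {"emptiness": 0.7, "absence": 0.6, "warmth": -0.4},
    "illusion": {"blur": 0.7, "distortion": 0.6},
    "change":   {"transition": 0.6, "contrast": 0.5},
}

def weighted_field(words):
    """Sum each word's pulls into one field of competing tendencies."""
    field = {}
    for word in words:
        for feature, weight in ASSOCIATIONS.get(word, {}).items():
            field[feature] = field.get(feature, 0.0) + weight
    return field

field = weighted_field(["home", "loss", "illusion", "change"])
# "warmth" is pulled in both directions at once: toward it by "home",
# away from it by "loss" -- a tension, not a single clear instruction
print(sorted(field.items(), key=lambda kv: -kv[1]))
```

No single word decides the image; the competing pulls are summed, and that sum is what steers the denoising process.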
From there, the same process unfolds:
Starting from noise, vague spatial forms begin to appear
A space may emerge, but incomplete or distorted
A figure may form, but partially obscured or fragmented
Light, texture, and contrast begin to carry more meaning than objects
Because the prompt is abstract, the model leans less on recognizable forms and more on mood and composition.
This is why such images feel symbolic.
Not because the machine understands the idea, but because it assembles visual proxies for its components—layered together from patterns it has seen before.
What feels like emotion or depth in the image is not coming from intention. It is emerging from the density of these overlapping associations.
And yet, when we look at it, we complete the process. We read into it. We connect it to memory. We assign meaning.
The machine constructs the surface.
We construct the significance.
It sounds mechanical, but it is not entirely foreign.
We do something similar, though less visibly. We don't create from nothing. We draw from memory, from things we have seen, from fragments we carry. We adjust, refine, correct. We move from vague impressions toward something more defined. What we call intuition is often this process happening quietly, without us naming it.
The difference is that in us, the process is buried. In the machine, it is exposed. When we watch an AI image resolve from noise into form, we are watching something that usually stays hidden—the slow accumulation of structure out of possibility. It is not a perfect mirror of human creativity, but it is close enough to be uncomfortable.
What AI does is make that process explicit. It shows that images are not only expressions. They are also constructions. Built from relationships, from constraints, from what is available.
This does not make human creativity less meaningful. But it does make it less mystical.
And maybe that is where the discomfort comes from. Because if a machine can recombine, explore, and produce something that feels intentional, then we are forced to ask what part of creativity is truly ours. Is it the idea? The process? The selection? The meaning we attach afterward?
Standing in this exhibition, you are not just looking at images. You are looking at a system that mirrors something back to you.
And that mirror lands differently in a place like ours.
In Addis, identity itself often feels like a composition—of histories, influences, contradictions, and unfinished narratives. We live inside layered inheritances: languages pulled from different regions, aesthetics shaped by faith and migration and trade, narratives that were written, rewritten, and contested across generations. If all of that were reduced into patterns, into a space of possibilities, what would emerge? What combinations would repeat? What would be lost in the averaging? What would refuse to be captured at all?
AI does not answer these questions. It exposes them.
So rather than asking whether this is art, it might be more useful to sit with a different question:
If creativity can be broken down into patterns, combinations, and explorations— what, then, do we recognize as uniquely human?
Tales and Dreams is open at Artawi Gallery from April 25 to May 17, 10am–7pm daily.
KKARE Homes, 2nd Floor, Bole.

