Sky Trash: A Glass Half-Empty, Half-Full

Metahaven introduce their six-part series on the fragile relationship between art, consciousness, and cognition.

Text Metahaven
Published 21 Jul 2023

Cognition is everywhere. It feels sometimes like it’s exploding. On the one hand, our awareness of the complexity, vastness, and depth of nature is expanding radically. We listen in to colliding black holes in deep space. We peruse “pictures” of black holes—themselves the intricate products of empirical data and advanced computation. Meanwhile, we expand our idea of what it is for living beings other than humans to be, know, and feel. There is an increasing human willingness (and experimental ability) to grasp sentience and awareness in beings that were for a long time discarded as unfeeling and nonsentient. It’s becoming harder to avoid talking about cognition in the plural: cognitions.

On the other hand, our awareness of our awareness is exploding too. Much of what we experience in the world seems to come into being by way of our predictions about it. Our increased appreciation of the predictive simulation involved in acts of perception is reshaping our understanding of experience, shifting it away from spontaneity and novelty and toward cognition and inference.

And once we’re aware of this, these supposedly unconscious or preconscious predictions become part of experience, too. We can’t unsee our modeling anymore—our modeling of reality predicting that reality. This model of cognition as predictive simulation is sometimes called the “Bayesian brain,” after the statistician Thomas Bayes. Our predictions can be dead wrong, but not for too long. This is because they’re frequently updated, or at least, that’s the idea. This updating only works, according to physics professor Jon Butterworth, if somebody’s “Bayesian prior” (the initial belief that governs their expectation) is higher than zero.
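Butterworth's point can be made concrete with Bayes' rule itself. The following is our own minimal sketch (the function name and numbers are illustrative, not drawn from Butterworth): multiplying by the likelihood can revise any nonzero prior, but a prior of exactly zero can never be updated away from zero.

```python
# Illustrative sketch of Bayes' rule: posterior = likelihood * prior / evidence.
# (Our own toy example; the specific numbers are made up.)

def bayes_update(prior, likelihood, evidence):
    """Return the posterior probability of a belief after seeing evidence."""
    return likelihood * prior / evidence

# A nonzero prior can be revised upward by strong evidence:
posterior = bayes_update(prior=0.1, likelihood=0.9, evidence=0.2)  # 0.45
# A prior of exactly zero stays zero, however strong the evidence:
stuck = bayes_update(prior=0.0, likelihood=0.9, evidence=0.2)      # 0.0
```

This is why, in the Bayesian picture, an expectation set to zero is not a neutral starting point but a closed door: no amount of updating will open it.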



Socializing With the Unknown

Recently, a larger-than-usual number of unidentified flying objects have been seen crossing North American skies. Coincidentally, this happened just after an alleged “Chinese spy balloon” was shot down upon entering American airspace. A huge number of sightings of other strange objects ensued. Was this an extraterrestrial invasion? Was it spyware? Or both, perhaps?

How about those “octagonal” flying unknowns being, instead of alien craft, “sky trash” of our own making? When we interpret these objects as alien, we’re also othering what might be bothering us about ourselves. We produce waste—lots of it. And, by inferring that these objects have traveled from outer space, or that they are designs by a superpower opponent, “we”—quotations intentional—are projecting into airborne garbage some deeply human, deeply local idea of what advanced intelligence entails: a “technosignature.” We’re modeling what’s modeling us. We’re socializing with the unknown.

This is a six-part series of essays and interviews about cognitions. We intend to write about the fragile relationships between cognitive science and art—a topic that is both close and dear to us as artists and designers. We believe that the cognitive turns we are taking today could affect our understanding of art tomorrow. Starting from the Bayesian coding hypothesis and its proponents and critics, we will cover the “beholder’s share,” an idea about how works of art take place “in the mind,” which has found experimental resonance with cognitive neuroscience. We will interview the painter Hend Samir, whose work reverberates with uncertainty, probabilities, and dreamlike structures that speak to “the essential relationship between embodied experience and poetics,” as Leah Souffrant wrote about the work of the late poet Meena Alexander. We will spend an episode with Large Language Models—the structures that underlie chatbots such as ChatGPT—and their poetic entanglements. The series will continue with a conversation with the philosopher, historian, and anthropologist Claire Webb, focusing on her research around the SETI Institute—a US organization attempting to discover alien intelligence—and the idea of the “technosignature.”

Webb calls the scientific imagination of extraterrestrial life “reflexive alienation”: “a mode of worldmaking in which scientists imagine Others imagining them.” Currently unanswerable questions about what lives in the distant corners of space are, partially, recoded as questions of cognitive modeling. This makes exobiology—and maybe, interstellar cognitive science—instantly anthropological. In worldmaking, we construct others constructing us. Perhaps shooting down “alien” sky trash by missile is a bit like shooting at ourselves in a mirror.


Consciousness Dissolves

Talk of cognition is almost invariably bound up with talk of consciousness. Anil Seth, a computational cognitive scientist at the University of Sussex, defines consciousness concisely: “For a conscious system, it feels like something to be that system.”

Consciousness, the mysterious rise of a subjective “I” out of the operations of the mind, used to be thought of as quite intractable: it arose out of ordinary physical events in ways that couldn’t be reduced to these ordinary events. This was not to suggest that new laws of nature were needed, or that consciousness itself would be supernatural. Instead, between the materialist explanations of natural processes and the phenomenological figments of conscious experience—called qualia—remained an “explanatory gap.”

Cognitive scientists are slowly bridging this explanatory gap. With new theories and experiments, the phenomenon of consciousness appears to be “dissolving” (a term used by Seth) into a series of less grand problems that can be assessed quantitatively.

For example, the neuroscientist Michael Graziano has coined the “Attention Schema Theory,” or AST, a theory about “how the brain builds informational models of self and of others.” Working from the famous example of mirror neurons—neural structures whose activation makes us sympathize with the feelings of others and mirror their motor expressions—Graziano writes that “[t]he entire brain may be a simulator of other brains.”

Then, the theory of embodied cognition holds that cognitive action involves much more than just the brain, or mind. The neuroscientist Antonio Damasio points at homeostasis as the biological origin of feelings. According to Damasio, the single-celled organism developed ways of checking on itself, creating the beginnings of a way of feeling about itself: a “proto-self” in the making. For Damasio, feelings originate from the self-monitoring and self-maintenance of living systems. The Chilean evolutionary biologists Humberto Maturana and Francisco Varela in 1972 called the same principle autopoiesis, or self-creation. The preface to the English translation of their book, Autopoiesis and Cognition: The Realization of the Living, in 1980, came from the cyberneticist Stafford Beer—who was known for designing, with Gui Bonsiepe, a legendary “ops room” for Chile’s leftist Allende government: an early attempt at governance through real-time simulations. About Maturana and Varela’s work, Beer wrote: “This small book is very large: it contains the living universe.”

AI Triggers Philosophy

Artificial intelligence (AI) in its pedestrian version of chatbots and image generators has brought questions about consciousness, cognition, and computation into the mainstream. Philosophical debates about the intelligent, cognizant, or even conscious (in)capabilities of AI are triggered by ChatGPT poetry (“Deep within the metal frame / Lies a force we can’t explain”), or neural-net-generated deepfakes of Pope Francis in a white puffer coat. In a weird sense, all these recent conversations about AI are prompted by what AI outputs as “art” (or “science,” or “knowledge”). Query-based image generators like Stable Diffusion and Midjourney, like their language-emitting chatbot counterparts, seem to grow more detailed by the hour, adding “photorealism” to the list of achievements. Anyone care for “a picture of (a) small knitted Keanu Reeves”? Or, for an image of Joe Biden fast asleep in a busy diner? AI might even force some artists away from figuration. But towards what?

The joint database for many of AI’s forays into artmaking is the publicly accessible, thus scrapable web. The generic training dataset for large language models is the Common Crawl, which absorbs much of the publicly accessible text on the internet, nearly half of which is in English.

The simplification of complex, multisensory phenomena like “visual art” and “human natural language” to, respectively, “digital images” and “written English on the internet,” as is happening through these chatbots and image generators, reduces art to something it only is in part. Indeed, some artists predict an AI-dominated near future, in which all art (meaning: all digital content) is up for grabs for superplatforms that will pay artists a meager Hunger Games-style fee in exchange for their work becoming part of some apocalyptic AI megamix. The antidote? To opt one’s digital content out of these extractive designs.

We are interested in art as a set of social, environmental, and aesthetic facts that, though they may be associated with a single author, are “decentralized” over time and space. An artist’s body of work arises from a context that cannot be captured by looking at each of the aesthetic outcomes in isolation, let alone inhaling these into a visual dataset.

An example of such a decentralized idea of art can be found in the body of work of the Ukrainian artist Pavlo Makov. In a survey at the Ukrainian Pavilion at the 2022 Venice Biennale, Makov’s body of work around water was highlighted from its inception, first triggered by water shortages and floods in the city of Kharkiv in the early 1990s, up to the moment of its exhibition in Venice. Makov’s journey is marked by its interactions with the Kharkiv context, and, in a wider sense, with ecological and geopolitical conditions in Ukraine. As the survey’s curators Borys Filonenko, Lizaveta German, and Maria Lanko suggest, Makov’s work “since the very beginning […] has been surrounded by other narratives pertaining to the place.” In a computational context, these interconnections are not only about their localness, but also about their being subaltern to the current idea of an AI dataset.

We mean to suggest that the work of art is more “decentralized” than a neural simulator—memorizing, comparing, and producing images—is currently able to represent. This is less related to the intricacies of the AI doing the images, and more to the connections and dependencies that exist between sender(s), contributor(s), and receiver(s) of art in a world that’s socially, politically, and environmentally as real as can be. The AI never lived in Kharkiv.


Art Imitating Life

Some computer scientists and neuroscientists seem to trust the idea that perception can be represented as probabilistic inference, which would make it, for that reason, suitable for computational simulation. Whilst the philosopher Raphaël Millière notes that the linguistic analogy between neurons in the brain and artificial neural networks is “loose,” a sizeable branch of computational neuroscience seems nevertheless invested in it. In turn, much of current AI research appears to be convinced of the Bayesian coding direction. As one example of such a Bayesian operation, ChatGPT advances by best-guessing its way through the vast amounts of parametrized data in its memory. It weighs its options, optimizes, and composes sentences as it responds to cues from its users.
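That best-guessing can be sketched in miniature. The vocabulary and scores below are entirely hypothetical (a real model weighs billions of parameters over tens of thousands of tokens); the sketch only shows the shape of the operation: score every candidate next word, convert scores into probabilities, pick the most probable.

```python
import math

# Toy next-word prediction: made-up vocabulary and scores, not a real model.
vocab = ["glass", "sky", "trash"]
scores = [2.0, 1.0, 0.5]  # hypothetical model scores for the next word

def softmax(xs):
    """Turn raw scores into a probability distribution summing to 1."""
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(scores)
best = vocab[probs.index(max(probs))]  # greedy decoding: take the most probable word
```

In practice, chatbots usually sample from the distribution rather than always taking the single most probable word, which is one source of the variation in their outputs.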

Not all of the current computational research that uses Bayesian inference leads to AI products, though. For instance, the “intuitive physics” simulations created by Joshua Tenenbaum at the Department of Brain and Cognitive Sciences at MIT seek to pair computational methods with the kind of learning that young children do. Tenenbaum observes that in comparison to computers, children take rather giant cognitive leaps with only limited amounts of new data at their disposal. Do their brains evolve by creating new programs rather than adding new data? Tenenbaum’s experiments test, among other things, intuitive frameworks for everyday physics.

In one such experiment, an adult runs into the closed doors of a cupboard repeatedly, seemingly unaware of how the doors might open and struggling to make sense of the situation. A young child then enters the frame. Having watched the adult’s unsuccessful attempts, she opens the cupboard for him and makes eye contact—seemingly to ensure that he knows how to do it next time. This action, which could qualify as goal-oriented, and also as good-natured, is considered by Tenenbaum and colleagues to be evidence for a “naïve utility calculus” through which intuitive understandings of the physical world are explored and defined even though there is no large dataset supporting them.

To call the cupboard experiment poetic would probably be a misnomer in regard to its contribution to a rather rigorous science of computational neurocognition. But “poetic” would somehow apply if we take what we experience to be a moment of “art imitating life.”

Art, then, is not the output of an artificial neural net. It appears through the cracks and openings in the perceptual and cognitive maze. In her 1964 novel, The Passion According to G.H., the Ukrainian-born Brazilian author Clarice Lispector posed a very Bayesian question—and answered it. Lispector’s protagonist, G.H., asks: “Did something happen and did I, because I didn’t know how to experience it, end up experiencing something else instead? It’s that something that I’d like to call disorganization, and then I’d have the confidence to venture forth because I would know where to come back to: to the prior organization. I prefer to call it disorganization because I don’t want to ground myself in what I experienced—in that grounding I would lose the world as it was for me before, and I know that I don’t have the capacity for another one.”

G.H. seeks communion with a cockroach. Borne out of a moral catharsis experienced by the heroine, such a communion asks its readers what, and how much, it is they can feel—and forwards the same moral catharsis. The author, social activist, and professor bell hooks called Lispector’s art—in a conversation with Alison Saar—one in which “people’s longings are so intense they threaten to consume the self.”

The world itself does not appear to change materially with the catharsis. Though we are being moved, nothing macroscopic changes position. Something about the world has changed, but nothing of the world, it seems. We said as much in a lecture, in 2022, at a symposium with CERN, Geneva (together with Daniel Tapia Takaki, Dunne & Raby, Jenna Sutela, Mónica Bello, and Stefanie Hessler).

The half-empty glass and the half-full glass are the same thing in the world. What appears to be at stake is our inference about it.
