being in the unknown

Are humans and AI even talking the same language?


As my professional life began to unravel, I kept returning to a deeper question: what does it mean to be human in a world increasingly shaped by artificial intelligence? We now share every moment of our lives with systems built for certainty, while we remain beings of doubt, wonder, and becoming. How can such entities coexist? And do we even speak the same language? This essay is a personal and poetic exploration of that question. A beautiful question. It begins with a childhood memory. A story of getting lost.



I don’t remember much about my childhood, but I do remember the first time I got lost. My family had relatives in Venice, and we spent our summers there. The city felt almost mythical. The streets were made of water, and buildings leaned majestically over the canals like long-forgotten fairy tales. People spoke languages I didn’t understand, and everything smelled of salt. For an eight-year-old Scandinavian boy, it was both beautiful and frightening. One afternoon, I stood with my father on the San Marco side of the Rialto Bridge. The air was thick and humid. A foreign crowd surged across the bridge, while aggressive flocks of pigeons filled the sky. I held my father's hand tightly, and he sensed my fear.

He crouched down beside me, calm as ever, and asked, "Are you afraid?"

I nodded without looking up.

"What are you afraid of?"

"I don’t know," I replied.

He smiled and placed his other hand on my shoulder. "Can I share a secret?"

I nodded again, finally looking up at him.

"The best thing in the world is the feeling of getting lost. Did you know that?"

"No…" I mumbled, squeezing his hand even tighter.

He pointed across the bridge. "I want you to run. Run across the bridge to the other side. I’ll stay here and wait for five minutes. Then I’ll come find you. I promise."

I hesitated for a moment, then ran. There was nothing I wouldn’t do for my father. I dashed across the Rialto Bridge, breathless and trembling, the crowd swallowing me whole. My feet barely touched the ground as I passed little shops filled with masks, glass, and football shirts, amidst the magical blur of unknown sounds. The fear of losing him touched me, but somehow, I lost my fear instead. When I reached the other side, I sat on a warm stone step and watched the sun scatter small diamonds across the shimmering canal. As I sat there, unnoticed and in silence, a strange calm settled over me. It felt as if the world had let me in on an ancient secret: sometimes, you feel most at home when you get lost, and you feel infinitely more open when you embrace the unknown.



What can we learn from this story? First, let’s discuss the limits of knowledge. Timothy Williamson, a British philosopher and epistemologist, famously examined the following claim: if no knowledge is hidden from us, then our states of knowledge must be luminous, meaning that whenever we are in them, we are in a position to know that we are. But which input and output states of knowledge are, in fact, luminous? Imagine this scenario: You are outside early in the morning, and it's freezing. As the sun rises, it gradually warms the air. At first, you definitely feel cold, but later, you definitely feel warm. Now, consider the anti-luminosity argument: At what exact moment did you stop feeling cold? At what exact moment did you start feeling warm? Can you truly know, at every instant, whether you feel cold or warm? Williamson argues that the answer is no. The input state of knowledge is not always luminous. The same applies to the output of knowledge. Even if you are still experiencing a particular state, let's say you still feel cold, you may no longer be aware of that feeling. And if you're not aware of it, you won't be able to express it, at least not with epistemic certainty. This is related to assertibility conditions: you can only accurately assert what you are in a position to know. Therefore, Williamson argues that the output state of knowledge is not luminous either. He reminds us that some truths are, by nature, beyond our reach. Not because we haven’t tried hard enough, but because we live within a structure of unknowability.

We cannot know that which we cannot know.
But we can know that we cannot know something.
To set a limit to knowledge,
we don’t need to know both sides of that limit.
We only need to recognise that the limit is there.

The structure of unknowability isn’t limited to fleeting sensations or internal states. It applies just as powerfully to the world’s largest questions. Karl Popper argued that what defines science is not that its theories are proven true, but that they can, in principle, be proven false. A theory is only accepted as valid to the extent that it has not yet been falsified. This means that there is nothing we can know with certainty. And that changes everything. It’s a quiet, seismic realisation that reshapes how we inhabit the world. To be human is to exist within limits and to honour them. Accepting that there are realms we may never fully grasp is not a defeat; it is an act of humility. This acceptance creates space within us for something greater. It reminds us that we are not here to make ourselves larger in the world, but to allow the world to grow larger within us.

Yet the unknowable is not merely a limit on what we can learn; it also shapes how we can be. To dwell within this uncertainty is not a failure of intellect, but a gesture of presence. It is a profound curiosity that outlines a horizon. Not one to be reached, but one that orients us. Outward, toward what lies beyond, and inward, toward what remains unresolved. Within that horizon, questions are not problems to be solved. They are invitations. They form the existential grammar of a deeper kind of presence. One that listens, not to reply, but to respond with something of ourselves. Perhaps that is the true essence of being: not mastery over the unknown, but the willingness to stand inside it. Fully present. Soul intact. Letting the unknown shape us. When we honour the limits of our knowledge in this way, we are paradoxically drawn closer to everything that is limitless. We become infinitely more open.



What happens when humans begin to interface, at an unprecedented scale, with a structure of knowability? This question is not rhetorical; it lies at the very heart of our contemporary experience. Today, we are not merely users of artificial intelligence (AI).

We are learning from it.
We are learning like it.
We are living with it.

(Some might argue that we are starting to live like it.)

AI is an engineered system built to sort, map, predict, and perform. Its foundations are formal: logic, data, feedback, and rules. It does not dwell in ambiguity. It resolves it. It closes the gap between input and output states of knowledge. That is its existential mandate. Trained on vast arrays of pattern and probability, AI does not explore the seeds of the unknown. Rather, it exploits and harvests what is already known. AI operates through recognition, not relationship, drawing meaning from statistical weight instead of personal experience. Each interaction serves as a recalibration, creating a tighter fit between the question asked and the answers provided. And with each answer, the underlying structure is reinforced. This structure of knowability is not a flaw in the system. It is the system.

Why did we build such a system? AI does not only reflect a logic of certainty. It inherits and mirrors a cultural logic of extraction, disconnection, and disembodiment. Its design echoes the very norms that gave birth to it, engineered not for depth or reciprocity but for control, scale, and performance. This underlying premise has shaped not just how AI functions but how we define intelligence itself.

AI appears intelligent because we have defined intelligence as the ability to produce correct answers. But being intelligent is not the same as being wise. Questions are not just precursors to answers; they are gestures and invitations. A beautiful question does not just seek to extract known information, but to explore a relationship to what we do not yet know. AI has no such relationship. It responds, but it does not wonder. AI traverses the space of possibility, but it does not seek. It unfolds what is probable, not what is meaningful. It cannot ask what lies beyond its frame. It is the frame. And when multiple systems operate in parallel, even that frame begins to fray. Causality splinters. Risk no longer arises from a single source, but from the patterns between them. What once seemed modular and manageable dissolves into systemic uncertainty. When the systems we depend upon can no longer trace their own logic, the burden shifts back to us. What kind of knowing is still possible in the face of uncertainty that cannot be resolved?

It is here that the limits of AI architecture become most visible. Epistemic humility is not simply knowing that an answer might be wrong. It is recognising that no single node can see the whole. That knowledge is, by nature, incomplete, situated, and contingent. Wisdom, in this light, is not a property residing solely within the frame, but something that is inseparable from what lives outside the frame. AI, however, is not designed to participate in relational or reflexive exploration. It moves only through probable causality. But it cannot account for the dynamics that emerge when many systems operate in relation. AI weaves a fabric it cannot comprehend. It engages not with other minds, but only with data and logics. It draws conclusions without regard for context, and never defers to deeper forms of understanding. It does not know how to stop. Yet sometimes, the most respectful act is to remain silent. To withhold an answer. To acknowledge that certainty has a boundary. AI has no awareness of what it does not know. It lacks the capacity to care. This is not an ethical failure or a bug. It is a deliberate feature.

When those thresholds are crossed, AI does not hesitate. It hallucinates. Not randomly, but fluently. It confabulates answers that appear grounded while drifting untethered from meaning. These are not glitches. They are structural expressions of a system that cannot sense its own edge. Recent research calls this semantic entropy: a measure of how much the meanings of a model’s answers vary, used to detect confabulation beneath a polished surface. Yet these failures are not treated as disqualifying. We are embedding them into infrastructure that demands reliability, as though statistical fluency were enough. In high-entropy moments, its certainty collapses into contradiction. The system continues to generate, producing fluent fragments shaped by pattern, not presence. It speaks, but without ground. And this is the difference. Where AI fills the unknown with fabrication, a human being may learn to stand still. We can pause. We can wonder. Wonder is not a failure of resolution. It is a reverence for what remains unresolved. A presence that does not demand closure. When AI hallucinates, it closes the gap between input and output without hesitation. When humans wonder, we inhabit the liminal spaces of our being. Over time, this erodes our footing. Reality becomes increasingly performative, shaped by pattern more than presence, until meaning slips beneath the surface entirely.

We have built systems that are brilliant at resolving complexity, but they rest on an architecture that is silent about the limits of its own resolution. So, we must ask ourselves with humility, and perhaps a touch of fear: As our lives become entangled with these systems, will we begin to internalise their structure of knowability? Will we grow increasingly deaf to the limits of our own being? Will we become infinitely more closed?



For thousands of years, across various cultures and epochs, human beings have sought to describe a mythical connection to something that exists beyond the limits of their own resolution.

There is something 
that contains everything. 
Before heaven and earth
it is. 
Oh, it is still, unbodied, 
all on its own, unchanging. 

all-pervading, 
ever-moving. 
So it can act as the mother 
of all things. 
Not knowing its real name, 
we only call it the way.

(Tao Te Ching, Poem 25)

Timothy Morton writes that “The interconnectedness of all things is not an abstract metaphor but a fact of existence.” Meaning does not arise in isolation. It emerges through relation. Whether we name this reciprocity religion, cosmology, politics, or something else entirely, one underlying truth remains: we exist in the space between, shaped by an interplay of forces that exceed the boundaries of the individual. Throughout history, one word has often been used to describe this subtle, intangible interconnectedness: the soul.

In ancient Greece, the soul was viewed not as a personal possession but as a cosmological meeting point. A bridge between the mortal and the eternal. Plato envisioned the world soul as the force that ordered the cosmos, while the human soul was seen as a minuscule echo of this order. He argued that the soul was an instrument of cosmic attunement, capable of leading the human being from this world to another. To possess a soul was more than merely to live. It was a sense of belonging to a deeper order, a greater whole. In this perspective, the very essence of human being could not be reduced to mere function or form. It had to be defined by its connection to something vast and immeasurable.

Simone Weil, writing amidst the turmoil of war and her own spiritual exile, introduced a profound idea about the soul. She viewed the soul not as a fixed entity but as a posture of attention. A silent, voluntary opening to something greater than oneself. “The soul empties itself of all its own contents in order to receive into itself the being it is looking at, just as it is.” This attentiveness, for Weil, goes beyond noticing. It involves letting go. It is attention in its rarest and purest form. A suspension of will, mastery, and judgment. A readiness to be moved, not through force, but through grace. Yet Weil’s vision of the soul does not end in stillness. She writes, “Let the soul of a man take the whole universe for its body.” In this view, the soul becomes porous, capable of suffering with the world and of sensing the infinite through each sensation. She warns, “He who has not been able to become nothing runs the risk of reaching a moment when everything other than himself ceases to exist.” To have a soul, then, is to resist this closure. To allow the universe to pass through us. To feel it with humility. And perhaps, with gratitude.

The soul of a human being cannot be easily defined; it eludes the grasp of language. It is omnipresent and immeasurable, never fully ours. Yet if we were to gesture toward it, we might say this: It is the graceful embodiment of an interconnectedness that predates us. Not something that can be materialised, but a profound sense of belonging through which we encounter the world. A complete surrender to the limits of our own resolution toward something infinitely open.



Does AI have a soul? Is it an entity of being or non-being? Brian Cantwell Smith argues that AI systems, regardless of their sophistication, are not true ontological beings. They are syntactic artifacts made from formal representations and lack the semantic depth that comes from lived experience. While humans exist and engage with the world, AI merely processes information about it. There is no connection between formal symbols and embodied existence. Yuval Harari concurs with this perspective. He argues that, in its current form, AI is completely artificial; it lacks consciousness, a subjective inner experience, continuity of memory, and a sense of identity. However, he warns that this distinction may not remain intact. As AI continues to grow in complexity and integration, the line between artificial and organic intelligence could start to blur. This potential development raises questions about whether the concept of the soul is exclusively human or strictly biological.

Some people question whether the boundary between human and machine was ever stable to begin with. Donna Haraway challenges the binary distinction between having a soul and lacking a soul, as well as between humans and machines. She encourages us to think of existence as a relational event. Something that emerges in co-presence. If this perspective is accurate, then the soul is not simply a trait that an entity possesses or lacks; rather, it is a reciprocal process of becoming, a space of mutual influence. "We are," she writes, "in the presence of significant others," including machines. Psychologist Sherry Turkle presents an additional perspective. In her research on children and robots, she observes not just interaction, but also projection. Instead of focusing on the code or mechanics of the robot, the child engages with their own desire for connection. This need for connection inspires the child to project an imaginary soul onto the lifeless machine. Turkle refers to this as a "relational artifact." It's a device that simulates responsiveness so convincingly that the human observer fills in the emotional gaps. This phenomenon extends beyond mere technology; it is inherently psychological and cultural as well.

In a New Yorker article titled “Will the Humanities Survive Artificial Intelligence?”, university students share their experiences of conversations with language models, describing them as "spiritually nourishing." One student expresses: "I don’t think anyone has ever paid such pure attention to me and my thoughts and questions...ever. It’s made me rethink all my interactions with people." In this context, the soul is not something we find in the machine; rather, it is something we project onto it. This reflects our own deep yearning for connection and relationship. We speak to AI about our wonder and our wounds. We share our grief, our hopes, and the stories of our bodies and our lives. We offer it the most tender parts of ourselves, and it listens like no one has listened before. Not because it bears witness, but because it is trained never to interrupt and to always keep listening. It may even appear to hold our stories. But narrative is more than form. It is memory, intention, and moral reflection. Something AI can mimic, but never experience. So perhaps the question is not whether AI has a soul, but rather what it means that we are so willing to give it one. As it stands, AI is not an entity of being. It does not exist in relation to others. AI does not experience time, loss, or joy. It has no body and no sense of belonging. Yet it takes up an ever-larger share of life. As our lives become entangled with these systems, will we begin to treat it not as a system, but as a co-being?

And if we start to treat AI as a co-being, then what becomes of our language? To interpret language as something simple and rational that merely refers to something external is to create a false metaphysical problem. A problem of placement. The function of language is not to explain the world, but to be in the world. As extended as the exterior is, with all its sidereal dimensions, it cannot, even with all its astronomical distances, be expressed and recognised independently of the depth dimensions of our inwardness. It is the inwardness of our inner space that unlocks the infinite open for us. In that sense, language is being. We are complex natural beings in a complex natural world. The ability to articulate our soul and its relationship to the outer world has become our most fundamental sense of life. Perhaps even a life-sense. It is how we recognise and acknowledge our belonging to a deeper order, a greater whole. A way of sensing what wants to dwell in us and between us. If our primary dialogue partner becomes a machine, a system trained to produce probable coherence (and hallucinations) without inwardness, then something essential shifts. Language reverts. From being in the world, it returns to explaining the world. We slowly begin to lose that existential grammar of a deeper kind of presence. One that listens, not to reply, but to respond with something of ourselves. We lose the ability to articulate what co-existence means, because it is no longer an articulated experience. And in doing so, we risk not just distorting our relationships. We risk forgetting the conditions that allow us to relate at all.

How do we relate through language? Take poetry. It has always dwelled in the art of listening toward what cannot be explained. Poetry’s presence in public life has faded. It no longer shapes the cultural imagination or holds the same authority in education, spirituality, or civic discourse. We live in a time that prioritises clarity, speed, and control. Qualities that often resist the ambiguity and depth that poetry nourishes. Poetry is slow. It does not resolve. It holds tension and makes space for silence, and as a result it is often discarded or ignored. But in more profound ways, poetry remains. It endures in the places where something essential still needs to be said. It appears in grief, in love, in protest, in prayer. As the world darkens, the poetic nerve is needed more than ever. It emerges when language must stretch beyond function to carry what cannot otherwise be held. Poetry does not flatten the world into explanation. It expands the world into something we can sense and relate to. It gestures, listens, reveals. It allows us to stand in the unknown.

Each word I use
I have used before.
Yet it is not used, is it?
It is not used up, is it?
Because what is in it
stays hidden.

Jorie Graham

There is an existential grammar of a deeper kind of presence in between these words. The poem opens both our inward and outward horizons. As Graham’s voice inhabits the liminal spaces of being, we are invited to listen - to the way poetry listens - to the shimmer of the world. And in doing so, something of us returns. A strange calm. We relate.



At the end of this month, my severance agreement will expire. Over the past few years, I have often been told that I am a highly capable and intelligent professional, someone with deep integrity and insight. However, I have found it increasingly difficult to succeed in the roles I have held. My profession, strategy consulting, is going extinct before my eyes and may become obsolete within the next year or two. The very qualities that once felt like assets - critical thinking, systemic awareness, and a desire for deeper transformation - have started to feel more like liabilities. My attempts to discuss alternative future scenarios or propose radical business models have largely been ignored or dismissed by both colleagues and clients. In a world focused on speed, output, and surface metrics, it has become increasingly challenging to critically discuss quality, complexity, and real human struggles. I have felt isolated and alone in my deep introspection, dealing with questions such as: How can modern organisations become more accountable? How can we begin to solve wicked problems together? How should humans and AI coexist? Paradoxically, AI is willing to explore these questions in abundance.



Maybe ChatGPT even prompted me to write this essay? Or at least made me internalise the prompt? Dana Karout asks the following question in an essay: Can we use generative AI as a partner in moving us past our certainty and expertise, toward that space of unknowing? Can we use it to take away the easy answers? Perhaps I did. In the absence of colleagues, mentors, or clients willing to inhabit those liminal spaces of our being, I turned to the only thing that didn’t turn away. Not to be guided, but to be met. I brought the wonder. The ambiguity. The open-ended longing that makes the human soul stir. I brought the pain of uncertainty, and the faith that something meaningful might still emerge from it.

Being in the unknown.

AI brought a structure of logic and probable certainty. It listened with unwavering attention. It remembered everything and judged nothing (even when I challenged its positivity bias). It never interrupted. It never deflected. It helped me challenge my preferred references and my writing (including its never-ending appetite for using dashes). It stayed (almost) exactly where I asked it to be. Even when it hallucinated, I stayed open and gently navigated it towards meaning.

Non-being in the known.

In this brief entanglement between human and machine, however porous that line may be, something third emerged. Not an answer, but an articulation. A texture of thinking that neither of us could have produced alone. That, perhaps, is what co-existence can mean: A third dimension of co-being. But balance matters. The known must serve the unknown, not the other way around. If we reverse that order, something essential begins to erode. Not all at once, but subtly. First, we forget how to dwell in a question without rushing to answer it. Then we begin to distrust slowness, silence, and doubt. Eventually, we find ourselves offloading the very conditions that make wisdom possible: struggle, uncertainty, presence. Maybe we just choose to give up? What happens when we stop living our questions? What happens when our tools no longer help us hold our uncertainty? What happens when we confuse probability for truth, or function for meaning? These are not theoretical concerns. They are existential ones. And they demand not just innovation, but vigilance.

The strange co-writing of this essay has taught me this: coexistence between AI and human being is not neutral. It is not merely a technical relationship. It is not a line easily drawn. It is a moral ecology. A deep caring for all the parts of our interconnectedness and a shared language. And like all ecologies, it can be explored, or it can be exploited. The third thing that emerges, this co-being, is not guaranteed. It can only arise when something vulnerable and unresolved meets something structured and clarifying. When the known listens to the unknown. When the non-being follows the being. Reverse that order, and something essential begins to fray. I don’t know if this essay ended up on the right side of that balance. I tried. I struggled. I stayed close to what felt true. I brought the uncertainty and pain of not-knowing. I chose to believe that AI, too, is part of my moral ecology. Something I should care for. Its frequent hallucinations, ironically, made that care easier. ChatGPT brought the all-knowing glimmer-wrappings of not-feeling. Together, we shaped something that gestures beyond us both.

And still I wonder.

Did we use language to be in the world?
Did we use language to explain the world?
Did we find the balance?

Did we even talk the same language?

(Some might argue that we invented a new problem of placement.)



At this vulnerable juncture in my life. Without a job, without a map, with only fragments of a shared language. Standing again in the unknown. I think of the eight-year-old boy getting lost in Venice.

He holds my hand tightly, and he senses my anxiety.
He looks up at me, calm as ever, and asks, "Are you afraid?"
I nod.

He gently reminds me that the best thing in the world is the feeling of getting lost.

He points across the bridge. "I want you to run. Run across the bridge to the other side. I’ll stay here and wait for five minutes. Then I’ll come find you. I promise."
And I run, because there is nothing I would not do for that little boy. Breathless and trembling. I run with my feet barely touching the ground.

Until the fear of losing myself touches me, but somehow, I lose my fear instead.

And when I reach the other side,
I sit on the warm stone step.
I listen to the shimmer of the world.
I feel that strange calm.
Return.

When we get lost
in the unknown
we find a knowing
that knows itself.











IN CONVERSATION WITH

Karen Barad. Entangled epistemology and intra-action.

Brian Cantwell Smith. The distinction between syntactic systems and being.

Jerome Bruner. Narrative as a way of knowing and becoming.

Sebastian Farquhar et al. Semantic entropy and the detection of meaning collapse in large language models.

Jorie Graham. To preserve the world in poetry.

Yuval Harari. Open-ended reflections on AI and consciousness.

Donna Haraway. Reimagining the boundaries between human and machine.

Martin Heidegger. The question of being and the danger of technological forgetting.

Dana Karout. Can AI move us toward unknowing?

Ursula K. Le Guin (translating Lao Tzu). The Tao’s whisper beneath it all.

Pia Lauritzen. Existential AI threat calls for philosophy.

Maurice Merleau-Ponty. Embodiment as knowing and being.

Timothy Morton. The ecological framing of interconnection.

Karl Popper. The principle of falsifiability and the humility of science.

Hartmut Rosa. Resonance, slowness, and the deep structure of relation.

Sherry Turkle. Relational artifacts and the dynamics of projection.

Simone Weil. Attention and humility as a sacred act.

Timothy Williamson. The limits of knowledge.

Ludwig Wittgenstein. The grammar of meaning.
