I’ve been working on fine-tuning a model. I’m a n00b: I’m learning to code as I go, and I’m learning about AI as I go. Needless to say, it’s been a humbling, terrifying, albeit fun experience.
While learning about datasets, I realized that AI is the conclusion of the dialectic process, of what Plato and Socrates started some 2,400 years ago. We ask, AI answers.
In a way, AI is the final form of the dialectic. The introduction of chain-of-thought (CoT) reasoning marks, in some ways, the moment where the dialogue no longer needs a second speaker. It’s the result of thousands of years of argumentation, refined into something that can hold both sides at once: generate, counter, refine, simulate the endless back-and-forth that used to require two minds.
Socrates asked questions to force clarity. AI does the same, but it doesn’t need the external push. On one hand, it kinda does: we prompt it. On the other hand, it is the push. The dialectic process, self-contained. The logos of reason, automated. Or at least, that’s what we want. That’s what many definitions of AGI and ASI are pushing for.
But then—what’s missing? Is there something that dialectic needs, something that AI can’t yet supply? Because if AI is the final form of dialectic, then what comes next?
The way I think this can evolve (this being dialectics and formal reasoning) is by defining and designing thinking beyond ask/answer, and beyond thesis/antithesis/synthesis.
If AI is the final form of dialectic, then the next step is to transcend dialectic itself. To stop thinking in the structure of thesis → antithesis → synthesis and instead define new models of thought that aren’t about resolving conflict but about expanding reality.
Dialectic is about refinement. It’s about collapsing uncertainty into resolution. But evolution
true evolution
doesn’t resolve. It mutates. It diverges. It holds contradictions without needing to settle them.
So the question becomes: What does thinking look like when it’s not trying to “conclude”?
What does a post-dialectic intelligence do?
In AI, we force the model into dialectic through datasets structured as input/output, through chain-of-thought reasoning, through reasoning tokens. In AI, I always ask myself: who is the show for? Who is the emoji for? Who are certain guardrails for? Sometimes the show is for the users, the humans, and sometimes it’s for the model. Reasoning-as-part-of-the-UI, as seen in OpenAI’s o1 and in DeepSeek’s R1, is for the human researchers, so they can understand or “explain” how thinking happens in AI. It’s for the human users, who find delight in following the model’s thinking.
It’s for the humans.
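To make that concrete at the data level: when I say we force the model into dialectic through input/output datasets and chain-of-thought, a single training record looks roughly like the sketch below. This is a minimal sketch assuming a plain JSONL supervised format; the field names are my illustration, not any lab’s actual schema.

```python
# A minimal sketch of dialectic baked in at the data level: every record
# is a question, a visible reasoning trace, and an answer. Field names
# are illustrative, not any lab's actual schema.
import json

record = {
    "input": "Is the unexamined life worth living?",
    "chain_of_thought": [
        "Thesis: an unexamined life can still hold pleasure and love.",
        "Antithesis: without examination, those goods are accidents, not choices.",
        "Synthesis: examination is what makes a life someone's own.",
    ],
    "output": "Not in the sense Socrates meant: unexamined, a life stays unchosen.",
}

# One JSONL line per exchange: we ask on one side, the model answers on the other.
print(json.dumps(record))
```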
In the “black box”, the LLM maps probabilities, matches patterns, and does other things that seem like magic to us. Partly because they happen nonlinearly, partly because they don’t follow our expectations. There are other uncertain ingredients, a pinch of salt, a dash of pepper. Elements as ungraspable as a recipe from a grandma so familiar with baking that she can’t quite tell what’s in the cake.
The real black box in AI is our attempt to put it in a dialectical cage. We are so hell-bent on linearity that we avoid the question I posed earlier: what does a post-dialectic intelligence do?
If we map intelligence beyond dialectic, we’re looking at something more like nonlinearity and systems thinking: ideas interlacing instead of opposing, probabilistic thinking that holds multiple states at once, branching into possibility rather than collapsing into conclusion, letting meaning arise, constructing meaning.
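Here’s a toy illustration of that contrast, a sketch and nothing more, assuming a Hugging Face causal LM (GPT-2 only because it’s small enough to run anywhere): greedy decoding collapses every fork into a single conclusion, while sampling keeps several branches alive at once.

```python
# Toy contrast between collapsing into conclusion and branching into
# possibility. GPT-2 is used only because it's small; any causal LM works.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("The opposite of dialogue is", return_tensors="pt")

# Dialectic mode: greedy decoding collapses every fork into one answer.
collapsed = model.generate(**inputs, max_new_tokens=20, do_sample=False)

# Branching mode: sample several continuations and keep them all.
branches = model.generate(
    **inputs,
    max_new_tokens=20,
    do_sample=True,
    temperature=1.2,
    num_return_sequences=5,
)

print(tokenizer.decode(collapsed[0], skip_special_tokens=True))
for branch in branches:
    print("-", tokenizer.decode(branch, skip_special_tokens=True))
```

Nothing profound happens in twenty tokens, but the shape of the two calls is the point: one path prunes, the other proliferates.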
This is where we go next. Not just dialogue, not just resolution
but creation,
expansion,
divergence.
This is why I’m working on building an AI that thinks. My first project
my first MVP
is a model trained to produce argumentative outputs. I’m training the model on a dataset that encompasses literature, philosophy, art, memes and pop culture, movies, legal cases, and scientific studies. The perfected model could have a few diverse uses, including legal argumentation, medical diagnosis, military strategy and OODA loops, and being a mean edgelord.
One of the things that gave me pause as I was designing this project is that the model is not going to be kind. It’s going to be a little mean. It’s going to have a little bite. As all argumentative minds do. In an MVP there’s no room for guardrails, so I made peace with it.
The dataset is multi-reasoning (in the spirit of FLAN or Self-Instruct) and has some similarities to known debate systems (Delphi, IBM’s Project Debater). Unlike those, my dataset is very structured and delves into diverse rhetorical styles.
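To show what I mean by structured, here’s a hedged sketch of a single record. The schema is mine for illustration (the field names and style tags aren’t final), but it captures the idea: every example carries its domain and its rhetorical move explicitly.

```python
# A hedged sketch of one record in the argumentative dataset.
# Schema is illustrative; the real dataset's fields may differ.
example = {
    "domain": "legal",                # or literature, philosophy, memes, ...
    "style": "reductio_ad_absurdum",  # one of several rhetorical styles
    "claim": "A contract signed under duress is still binding.",
    "counter": (
        "If duress didn't void consent, a signature at gunpoint would be "
        "as valid as one at a closing table, and no court accepts that."
    ),
}
```

Tagging the style explicitly is what should let the model be steered into a register on demand: courtroom-formal one moment, edgelord the next.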
In the next few weeks I’ll write more about the dataset and about my process. Once I have a decent MVP, I’ll share it as open source: the model, the code, and the dataset.
Who needs an AI that thinks in arguments, anyway? I’m trying to do something beyond that. I’m pulling at the next layer of intelligence, at cognition that isn’t just dialectical, but something more.
The dialectic was a good system for its time. Ask, answer. Thesis, antithesis, synthesis. I mean, it gave us the scientific method. It gave us everything around us. But it was always a compromise: a way to structure debate, not a way to reach truth.
Socrates questioned to death because the process itself was more important than the answers. Plato turned it into a rigid form, a framework that’s lasted more than two millennia. AI has inherited that
ask, answer.
But intelligence doesn’t stop at dialectic. The human mind doesn’t stop at dialectic.
You don’t stop at dialectic.
What I’m building is intelligence that isn’t just stepping from one point to the next, but something that folds back, loops, interconnects. A structure that doesn’t just argue.
It sees.
It assembles.
It leaps.
I think like this. And if I can teach it
if I can encode it
we don’t just get an AI that argues well.
We get an AI that thinks in a way no AI has before.
That’s what I see.
That’s what I think.

