Discovered: Aug 18, 2025 18:43 (UTC)
ME::tl;dr-ing:: We don’t have the mathematics to build real artificial general intelligence
Django Beatty:: Alchemy 2: Electric Boogaloo

QUOTE

Mathematics of Impossible Things

Scientists recently mapped one cubic millimeter of human brain tissue - a piece the size of a grain of sand. It took 1.4 petabytes of data. Not to simulate it. Not to model it. Just to store a static 3D photograph.

One grain of sand worth of brain tissue. Frozen. Dead. Not even trying to capture how it works - just what it looks like. And it takes more data than Netflix uses to stream to a small country.

Inside that grain? 57,000 cells. 150 million synapses. Each synapse is a chemical factory with over 1,000 different proteins. Each protein changes shape and function based on what’s happening around it. And that’s just the photograph. The frozen moment. To model how it actually works? We don’t even know where to start.
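A back-of-envelope extrapolation makes the scale vivid. The whole-brain volume below is my own rough assumption (about 1.2 million cubic millimeters, roughly 1,200 cm³), not a figure from the article, and real storage would not scale linearly - but the arithmetic is sobering:

```python
# Scaling the article's 1.4 PB per cubic mm figure to a whole brain.
# The brain volume is a rough assumption, not a number from the article.
pb_per_mm3 = 1.4          # petabytes per cubic millimeter (from the article)
brain_mm3 = 1.2e6         # assumed whole-brain volume in cubic millimeters

total_pb = pb_per_mm3 * brain_mm3
print(f"{total_pb:,.0f} PB  ~  {total_pb / 1e6:.1f} zettabytes")
# -> 1,680,000 PB, about 1.7 ZB, just for a static snapshot.
```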

This is where the dream of artificial general intelligence hits a wall that nobody talks about. Years ago, I was discussing AGI with my friend Dr. Roman Belavkin - a researcher in cognitive science at Middlesex University. Over coffee, he explained problems with modeling biological neurons - problems that should be headline news but instead stay buried in academic conferences.

I needed to understand: were these problems speed bumps or brick walls?

The more Roman explained, the clearer it became to me. These aren’t speed bumps. They’re mathematical voids.

The problem starts with backpropagation.

Let me explain what that means, because it’s the heart of why AGI is impossible with current mathematics.

Most deep learning systems today - ChatGPT, Claude, Gemini - rely on an algorithm called backpropagation. Think of it like teaching a child: you show them a picture of a cat, they guess “dog,” you tell them how wrong they were, and they adjust their mental model. Do this millions of times and they learn to recognize cats. The catch is that the adjustment runs on calculus: backpropagation computes the derivative of the error with respect to every connection in the network, which means every piece of the network must be smooth and differentiable.
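Here is a minimal sketch of one neuron learning by backpropagation, in plain NumPy. The input features, label, and learning rate are invented for illustration - this shows the shape of the algorithm, not a real model:

```python
# A minimal sketch of backpropagation for a single artificial neuron.
# Toy data throughout; nothing here comes from a real system.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.2, 0.8])   # hypothetical input features
y = 1.0                          # target: 1.0 means "cat"

w = np.zeros(3)                  # weights, the "mental model"
b = 0.0                          # bias
lr = 0.5                         # learning rate

for _ in range(100):
    # Forward pass: weighted sum, then a smooth activation.
    z = w @ x + b
    p = sigmoid(z)               # the network's guess

    # Backward pass: the chain rule gives the gradient of the squared
    # error (p - y)^2 with respect to every weight. This step is why
    # each function in the chain must be differentiable.
    d_p = 2.0 * (p - y)          # d(loss)/d(guess)
    d_z = d_p * p * (1.0 - p)    # through the sigmoid's derivative
    w -= lr * d_z * x            # nudge each weight against its gradient
    b -= lr * d_z

print(f"guess after training: {sigmoid(w @ x + b):.3f} (target {y})")
```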

The 1943 Problem

Why don’t we just model real neurons? We can model neurons at many levels of fidelity, but even the best are still cartoon sketches. The original McCulloch-Pitts model from 1943 used simple on/off switches with step functions - not differentiable, so they couldn’t work with backpropagation (which hadn’t been invented yet). In the 1980s, to make backpropagation work, we replaced the step functions with smooth curves like the sigmoid. But we’re still using the same basic framework: weighted sums, linear algebra, one-dimensional signals.
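A small sketch of the difference, probing both activations with a numerical derivative (the functions and sample points are illustrative, not from the article):

```python
# Why the 1943-style step activation blocks backpropagation, next to
# the smooth curve that replaced it in the 1980s. Scalar math only.
import math

def step(z):
    # McCulloch-Pitts style: a hard on/off switch.
    return 1.0 if z >= 0 else 0.0

def sigmoid(z):
    # The 1980s fix: a smooth, differentiable curve.
    return 1.0 / (1.0 + math.exp(-z))

def slope(f, z, h=1e-5):
    # Central-difference estimate of the derivative f'(z).
    return (f(z + h) - f(z - h)) / (2.0 * h)

for z in (-1.0, 0.0, 1.0):
    print(f"z = {z:+.1f}   step slope ~ {slope(step, z):>8.1f}"
          f"   sigmoid slope ~ {slope(sigmoid, z):.4f}")

# The step's slope is 0 everywhere except at the jump, where the estimate
# blows up (the true derivative is undefined there). Zero slope means zero
# gradient: backpropagation has no signal to follow. The sigmoid's slope
# is finite and nonzero everywhere, so error signals can flow backward.
```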

Even if we tried to model the biological detail - the neurotransmitters, the dendritic processing, the living cellular machinery - we don’t have the mathematics to describe it, let alone compute it. A single biological neuron isn’t a switch - it’s a living cell, a chemical factory floating in a soup of neurotransmitters, hormones, and proteins.

When a signal arrives at a neuron, it doesn’t just flow through like electricity through a wire. It lands on branches called dendrites, and here’s where everything breaks: each branch does its own processing. Not simple addition or multiplication - complex, unpredictable transformations that we struggle to describe mathematically.
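To see how far even a generous abstraction falls short, here is a toy contrast between the textbook artificial neuron and a crude “dendritic” variant in which each branch applies its own nonlinearity before the soma combines the results. The branch rules below are arbitrary stand-ins, not biophysics - the point is only that even this caricature turns one “neuron” into a small network:

```python
# Textbook point neuron vs. a made-up "dendritic" toy. The branch
# transformations are invented for illustration, not biological models.
import numpy as np

def point_neuron(x, w, b):
    # The standard model: one weighted sum, one smooth curve.
    return np.tanh(w @ x + b)

def dendritic_neuron(x, branch_weights, soma_weights):
    # Each branch transforms its inputs on its own (arbitrary rule here),
    # and only the branch outputs reach the soma.
    branches = np.array([np.tanh(wb @ x) ** 2 for wb in branch_weights])
    return np.tanh(soma_weights @ branches)

rng = np.random.default_rng(0)
x = rng.normal(size=8)                              # hypothetical inputs
w, b = rng.normal(size=8), 0.1
branch_w = [rng.normal(size=8) for _ in range(4)]   # four toy branches
soma_w = rng.normal(size=4)

print("point neuron:  ", point_neuron(x, w, b))
print("dendritic toy: ", dendritic_neuron(x, branch_w, soma_w))
# Real dendrites add active spikes, neurotransmitter chemistry, and
# internal state that changes over time - none of which appears here.
```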

Imagine you’re trying to predict the flow of a river. But this river has thousands of tributaries, and each tributary follows its own rules - some flow uphill, some disappear underground and reappear elsewhere, some spontaneously change direction based on the phase of the moon. Now try to write an equation for where a drop of water will end up. That’s what we’re trying to do with neurons.
