Discussion about this post

Chris Despopoulos

Neurology demonstrates how dependent memory (and ideation in general, BTW) is on the emotional centers. No emotion, no memory - efficient filtering.

So what's emotion? Ultimately it rests on pleasure/pain and fight/flight. It discerns what's good or bad for the organism and directs it toward success in a thermodynamic field. Or something like that.

My point is, memory is a fundamental part of mind, in service to the organism's success. And success is ultimately how you FEEL - in your body. No body, no mind. LLMs are pretty far from that.

As a coda, human memory is notoriously weak on the details. We remember the feeling mostly. So… feeling to filter and feeling to recall. Not sure what your plan is there.

Alex Tolley

Like our non-episodic memories, LLMs compress the information. Training with added information updates the network, just as our memories do. We also have relatively poor episodic memory, and AFAIK episodes are not encoded in LLMs at all, though they could be.

Now suppose we introduce a feedback loop in the LLM: it makes predictions that are tested against external inputs, and its outputs are adjusted based on the size of the prediction error.

An LLM should be receiving constant inputs, updating its memory, and responding to those inputs, whether environmental or conversational. A minimal sketch of what I mean follows below.
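
Here is one way to picture that loop, as a toy sketch rather than anything from the post: a small linear "memory" W predicts the next input vector from the current one, the prediction is compared against the actual external input, and W is nudged in proportion to the prediction error. The input stream, the model, and the learning rate are all illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

dim = 16                      # size of each input vector (illustrative)
W = np.zeros((dim, dim))      # stand-in for the model's adjustable "memory"
lr = 0.05                     # update speed; per the comment, this must be fast

def predict(x):
    """Predict the next input vector from the current one."""
    return W @ x

def external_inputs(steps):
    """Stand-in for a constant stream of environmental/conversational input."""
    A, _ = np.linalg.qr(rng.normal(size=(dim, dim)))  # hidden, norm-preserving dynamics
    x = rng.normal(size=dim)
    for _ in range(steps):
        yield x
        x = A @ x

prev = None
err_size = None
for x in external_inputs(500):
    if prev is not None:
        err = x - predict(prev)           # prediction error against the external input
        W += lr * np.outer(err, prev)     # adjust memory in proportion to the error
        err_size = float(np.abs(err).mean())
    prev = x

print("mean absolute prediction error at the end:", err_size)
```

The point of the sketch is only the shape of the loop: predict, compare against the world, update, repeat on every input.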

When thinking, specifically reasoning rather than recalling, is this feedback a way to get to a consistent answer? [I have in mind the kind of convergence we see in sparse distributed memory.]
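
For the bracketed aside, here is a rough sketch of that convergence in the style of Kanerva's sparse distributed memory. The sizes (256-bit words, 2000 hard locations, activation radius 115, 40 corrupted bits) are illustrative choices, not anything from the post; the point is just that feeding each read back in as the next address pulls a noisy cue toward a stored pattern.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 256                 # width of addresses and data, in bits
num_locations = 2000    # hard storage locations
radius = 115            # Hamming radius that activates a location (~6% of locations)

addresses = rng.integers(0, 2, size=(num_locations, n))  # fixed random hard-location addresses
counters = np.zeros((num_locations, n), dtype=int)        # the memory contents

def activated(addr):
    """Boolean mask of hard locations within the Hamming radius of addr."""
    return np.count_nonzero(addresses != addr, axis=1) <= radius

def write(addr, data):
    counters[activated(addr)] += 2 * data - 1   # +1 for 1-bits, -1 for 0-bits

def read(addr):
    sums = counters[activated(addr)].sum(axis=0)
    return (sums > 0).astype(int)                # bit-wise majority vote

# store several patterns autoassociatively (address == data)
patterns = rng.integers(0, 2, size=(10, n))
for p in patterns:
    write(p, p)

# recall the first pattern from a corrupted cue, iterating the read
target = patterns[0]
cue = target.copy()
cue[rng.choice(n, size=40, replace=False)] ^= 1   # flip 40 of the 256 bits

x = cue
for step in range(4):
    x = read(x)
    print(f"iteration {step + 1}: {np.count_nonzero(x != target)} bits differ from the stored pattern")
```

Start inside the critical distance and the differing-bit count typically falls to zero within a couple of iterations; start too far away and the reads wander instead, which is the convergence/divergence behaviour Kanerva describes.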

This will only be possible when the LLM can handle inputs as fast as human brains, which implies any memory formation must be rapid.
