David Rodenas PhD
2 min read · Feb 11, 2025


Hi Doug, I've been pondering your comment for some days now. Sometimes I think you are right about this. But then things happen, and I start to doubt.

When I was teaching at university, I noticed that many students were essentially sophisticated pattern matchers: they recognized problem types and applied memorized solutions, and failed when they couldn't. In fact, they were so similar that responses from ChatGPT 3.5 (newer versions are better) often felt exactly like responses from students who did not know how to answer a question. That was a point that Carlos Fenollosa and I discussed recently.

The line between pattern matching and understanding got surprisingly blurry for me.

And nowadays, it's difficult for me to dismiss LLMs as mere pattern-matching machines. They build models of reality, just as we do, from all the inputs they have received, and they respond using those models. We might argue that this is a kind of intelligence, right?

The one thing that still makes me think LLMs aren't intelligent is that they cannot learn on their own; as you mention, they learn only from what humans have created. That held until reasoning models like r1, o1, and o3 appeared. Now they generate their own data to learn from, evolving beyond the available human material. Yet you still need a human to build the training process for them.

So, while in the case of chess I am 100% sure it isn't intelligence (it's just an algorithm, like minimax), in this case I am not sure. Sometimes I think we are attributing emergent qualities to the model that are not there, that are just our interpretation; other times I believe that something wonderful is happening.
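To make that chess point concrete, here is a minimal minimax sketch in Python. The game interface it assumes (legal_moves, apply, is_terminal, score) is hypothetical, just enough to show the shape of the algorithm: exhaustive, deterministic search over future positions, with no model of the world beyond a scoring function.

```python
def minimax(state, depth, maximizing):
    """Best achievable score for the player to move, assuming perfect play.

    `state` is a hypothetical game object exposing is_terminal(), score(),
    legal_moves(), and apply(move) -> new state.
    """
    if depth == 0 or state.is_terminal():
        return state.score()  # static evaluation of the position
    if maximizing:
        # Choose the move that maximizes our guaranteed outcome...
        return max(minimax(state.apply(m), depth - 1, False)
                   for m in state.legal_moves())
    # ...while assuming the opponent plays to minimize it.
    return min(minimax(state.apply(m), depth - 1, True)
               for m in state.legal_moves())
```

Nothing in that loop resembles understanding; it simply enumerates possibilities and scores them, which is why I find it so much easier to classify than an LLM.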

And, somehow, I have wanted to believe that all of humanity's knowledge has come "alive" to create something wonderful.
