10 Comments
Jeremy Singer

Back in the late 1950s, when my Uncle Bob told me about computers (I was 7 years old), I got the impression that computers could do anything, if you could instruct them properly.

I had the idea of making my own primitive computer by taking a notepad and putting answers to questions on many pages, so that you could have some magic way of looking up the answer just by addressing it with the right question. The first small version of a Large Language Model was born.
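In Python terms, that notepad is just a lookup table. A toy sketch of the idea (the entries here are invented):

```python
# A toy "notepad computer": answers stored against questions,
# retrieved by exact-match lookup. Entries are invented examples.
notepad = {
    "what is 2 + 2?": "4",
    "who invented the telephone?": "Alexander Graham Bell",
}

def ask(question: str) -> str:
    # Return the stored answer, or admit the page is missing.
    return notepad.get(question.lower(), "I don't have a page for that.")

print(ask("What is 2 + 2?"))               # -> 4
print(ask("What is the meaning of life?")) # -> I don't have a page for that.
```

The obvious limitation, then as now: the lookup only works if someone already wrote the right answer on the right page.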

I no longer have that notepad.

Later, I read a book that mentioned how physicists had figured out a heuristic to make their equations, which had inconvenient infinities, work: they cut off the series after a certain number of terms. They called it “cut-off physics”.

A thing that Turing figured out was that Gödel incompleteness manifests in Turing machines as the halting problem: you can’t make a general-purpose program that can look at any other program and prove whether it will eventually halt (complete).
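The usual diagonal argument is short enough to sketch in Python; the `halts` function below is a hypothetical stand-in for the impossible decider, not something anyone can actually implement:

```python
def halts(program, arg) -> bool:
    """Hypothetical total decider: True iff program(arg) eventually halts.
    Turing showed no such function can exist; this stub marks the assumption."""
    raise NotImplementedError

def paradox(p):
    # Do the opposite of whatever halts() predicts about p run on itself.
    if halts(p, p):
        while True:  # halts() said "halts", so loop forever
            pass
    # halts() said "loops forever", so halt immediately.

# paradox(paradox) contradicts any answer: whichever value
# halts(paradox, paradox) returns, paradox does the opposite.
# Hence no correct, total halts() can be written.
```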

I think we are trying to burn up all the fossil fuels to see whether AI will become generally and usefully intelligent if we will just burn enough fuel, and spend enough of our economic output towards that end.

Humans have a wonderful limitation, which is an actual secret to our intellectual abilities: we get frustrated when we can’t solve a particular obstacle in a particular way. We eventually stop, or die, or choose to do something different when a particular approach has used up too much of our time or resources.

I am confident that we will eventually switch off the power to these approaches that are not going to be the miracle we require. The bubble will burst, until the next one.

Rainbow Roxy

Wow, the part about Sutton misrepresenting human smarts really resonated. Your analysis of his naive take on simplicity and overemphasis on brute force is spot on. It’s so important to remember the human element in AI progress. A truly insightful piece.

Straker13

Content is only a function of the accumulation of experience (reinforcing neurochemical pathways) via (trusted) sensors, with post-processing in neural networks optimised to utilise differencing and relational processes, with context.

Network processing is tuned/ flavoured by emotional states, which can dramatically impact focus and speed of processing and response.

I am not aware that minds utilise statistical prediction in any way similar to LLMs. While we can ‘predict’, it is decidedly ‘fuzzy’ compared to an LLM.
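For concreteness, the statistical prediction an LLM performs is, at its crudest, just counting what tends to come next. A toy bigram sketch in Python (invented corpus; real models are vastly larger, but the principle is the same):

```python
from collections import Counter, defaultdict

# Toy "statistical prediction": a bigram model that proposes the
# most frequent next word observed in a (tiny, invented) corpus.
corpus = "the cat sat on the mat the cat ran".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word: str) -> str:
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("the"))  # -> "cat" (seen twice, vs "mat" once)
```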

Our efficiency is based, in my view, on refined approximations, iteratively ‘forgetting’/‘ignoring’ extraneous data. Cerebellar neuronal networks contribute to that efficiency: almost 70 billion neurones with up to 100,000 dendritic connections per cell, in massively parallel arrays. The cerebrum (the part we think of as the main event of our brain) has fewer than 20 billion neurones, divided into specialist zones.

20 watts to run this, as you noted, to unravel the mysteries of the Universe, appreciate our material world, create, invent, love, care, empathise with other minds. Disappointing that we so often don’t.

S Auer

I have been tinkering with different AI models. Some have been helpful: faster research on a topic. That said, AI should be more than ‘Google on steroids’.

The more I learn about AI, LLMs, and the code behind them, the less I can shake the feeling that AI is the 21st century’s alchemy.

If we throw enough time, thought and energy at this, this base metal or whatever will be transformed into gold.

Obviously, medieval alchemy did not transform things into gold. But it did help create the discipline of chemistry. And it did create amazing things like porcelain.

Perhaps the focus on AI will help create some 21st century porcelain.

Dr Jo

I think you’ll find the Chinese invented porcelain :)

S Auer

Shhhhh…. Just don’t tell Meissen!

Bill Johnston

To your point, Gary Marcus has been banging this drum for at least the past six years, and was pleased to finally receive a nice note from Sutton acknowledging that ‘The Bitter Lesson’ wasn’t actually correct. Marcus continues to help the benighted world of AI come to terms with its error, most recently in this post, where he lists a number of other AI specialists who are lining up to agree with him… and, by extension, you! ;)

https://garymarcus.substack.com/p/the-last-few-months-have-been-devastating?publication_id=888615&post_id=176484932&isFreemail=true&r=5gaw4y&triedRedirect=true

Norman Friend

I get your points about the limitations of current-generation AI — and who knows if AI will ever realise the hype of Altman, Zuckerberg and Sutton? (I had not heard the story of Kasparov’s defeat by Deep Blue!)

But I wonder: you mention that you are a programmer of sorts; I too hack bits of code to solve problems that confront me. Recently I have been getting to grips with Python (having used Fortran, BASIC, Pascal, C, MC68000 assembler, Perl, R, and several others over the years). They all have their strengths, but I now think Python is the ultimate Swiss Army knife of programming tools.

The major feature of Python is its versatility, but with versatility comes complexity. I have learned that Google really helps with using Python. But what has really blown my mind is what happens when I talk to ChatGPT to solve a programming problem: it can generate a complex 500-line demo program, meeting all my specifications, within a few minutes! Cut and paste it into the PyCharm IDE and it runs right “out of the box”. It would take me weeks or even months to write something as functional and reliable, if I had the time and perseverance.

I rather think AI has suddenly made coding a very poor career choice for young folks today.

My guess is that “Computer Science” as a discipline is changing almost overnight: it will become the study of AI: how to manage AI, improve it, work with it. It will become the arena where human intelligence really “integrates” with the artificial. The potential benefits for humanity are enormous — but as we already know, the downside risks of AI are also huge.

Dr Jo

I agree with a lot of your contentions, and Python is likely the least bad language around; I too tinker (my buzz is creating new, non-trivial languages from scratch). There are three catches to getting your LLM to do your work. The first is that you need to know pretty much precisely what you want, and it should be similar to what others have done before; the second is that when your LLM cocks something up, it will convincingly explain the reason incorrectly; the third is that you shouldn’t expect any innovative use of causal thinking or counterfactuals, because LLMs are fundamentally incapable of this.

A recent study found that competent, experienced programmers predicted LLMs would save them 30% of their time, and reported afterwards that this had been the case; measurement, however, showed a 29% increase in time spent.

There’s no doubt that—given a few decades or even sooner—machines will best us at pretty much everything, but this day is not that day.

Norman Friend

All good points. I am an amateur programmer, but I know more than enough to be dangerous. My recent experience was exactly what you describe: trying to write an efficient solution to a problem others must already have solved (but which seemed obscure enough that the usual go-tos weren’t helping). I was close but the details were eluding me and I got curious to see what AI would tell me.

The amazing part was that it could spit out example code, clearly written but using constructs that were more compact and “Pythonic” than what I would have done. It provided step-by-step explanations, and “usage examples” on request. And that code runs and generates output similar to what I needed, and certainly with many fewer errors. Yes, I raised a couple of points to “correct” what ChatGPT had done: it was so polite and obliging, and adjusted the solution accordingly. It is really seductive! 🙄
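For a flavour of what that compaction looks like, here is a made-up before-and-after (not the actual code in question):

```python
# Verbose, language-agnostic style: build a list with an explicit loop.
squares = []
for n in range(10):
    if n % 2 == 0:
        squares.append(n * n)

# The compact, "Pythonic" equivalent a model tends to produce:
squares = [n * n for n in range(10) if n % 2 == 0]

print(squares)  # -> [0, 4, 16, 36, 64]
```

Both versions do the same thing; the comprehension just says it in one line.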

My point, though, is that when used by a professional, LLMs have the potential to make the “learning curve” for coding really steep (as in very rapid progress!). CompSci will teach future programmers to focus on the algorithms and let their AIs do the grunt work.