Discussion about this post

John Jason Light

As usual, 99% brilliant analysis soured by your blind spots.

I am sorry to see your glib dismissal of the FEP and active inference, which you clearly don’t understand.

You also missed the GIGO problem of deep learning. As the knowledgeverse continues to be polluted by the output of LLMs, either the pollution will grow exponentially as a portion of the total, or the cost of preprocessing knowledge to eliminate the pollution will vastly inflate the cost of all deep learning, or we settle for a world where we just don’t know what the pollution level is. (I suspect the AI tools that scan for LLM-generated text will become less useful as we train them on polluted information.)
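
[Editor's note: a toy back-of-the-envelope sketch of the compounding the commenter describes. This is an illustration only, not from the post or comment; the polluted_fraction helper and all parameter values are hypothetical assumptions chosen to make the dynamic visible.]

def polluted_fraction(generations: int,
                      machine_share_of_new_text: float = 0.3,
                      filter_recall: float = 0.5) -> list[float]:
    """LLM-generated fraction of the corpus after each successive generation."""
    fractions = []
    corpus_fraction = 0.0
    for _ in range(generations):
        # Machine-written text that slips past an imperfect filter.
        surviving_machine = machine_share_of_new_text * (1.0 - filter_recall)
        # The human-sourced remainder is progressively diluted; in this simple
        # model the polluted share can only grow from one generation to the next.
        corpus_fraction += (1.0 - corpus_fraction) * surviving_machine
        fractions.append(corpus_fraction)
    return fractions

if __name__ == "__main__":
    for gen, frac in enumerate(polluted_fraction(5), start=1):
        print(f"generation {gen}: ~{frac:.0%} of the corpus is machine-generated")

With these assumed numbers the machine-generated share climbs from roughly 15% to over 50% within five generations, which is the sense in which "we just don’t know what the pollution level is" unless the filtering keeps pace.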

Stephen Webb

Personally, I rather enjoy your generalist approach, so that would get my vote. You tend to pique my natural curiosity about topics I might not otherwise delve into, and also provide extra insight into topics I am already interested in. Not to minimise the health care topics, because they often involve deeper philosophical issues and you have a good grasp of the implications.

I’m glad that you found the article by Anthropic of use; they also cover a lot of related problems in other papers and seem to support transparency quite strongly. Since you are interested, as am I, in that area and in the question of ‘what could possibly go wrong’ with the blind driving force to implement as much as possible in as short a time as possible, you could also check the following:

www.lesswrong.com and their sister site www.alignmentforum.org.

Mr. Light’s note pointing to the article on Subliminal Learning is also a good suggestion. The original paper (linked in the article) is a good read and meshes closely with the misalignment issue. OpenAI has done some work on trying to address misalignment issues, but cautions that, if the AI detects misalignment surveillance, it may stop revealing its reasoning in chain-of-thought in order to hide its behaviour, possibly shifting to subliminals? …

The positive thing is that, from the probability study on AGI potentials, it looks like there may be sufficient time to get some of the worst issues solved before things get past the tipping point; IFF we do not encounter an “intelligence explosion” from AIs doing AI R&D (which could cause exponential growth in AI capability). But that’s another story….
