9 Comments
McCain, Katherine

Just a brief comment from a patient.....my BP is more consistently taken/read when done manually -- by an experienced nurse, with my left arm (right is verboten, due to a mastectomy) in the correct position. We've been tracking it for a while and it's been fluctuating but only super high once when my appointment was screwed up and I spent an extra hour waiting and fretting.

OTOH, with slight exceptions, I read a fair bit higher on those damned automatic machines. My working hypothesis is that I'm conditioned to expect pain, and pain hurts--the cuff gets very tight before it starts releasing. This weirdness extends to the home automatic devices. I can't get a decent reading on them--even when I've spent time with the nurse in the office, calibrating and practicing.

As your essay points out, the manual method is less and less used--I'm just grateful that my clinic still has that available.

Dr Jo

Spot on! A significant proportion of people have 'white coat' hypertension, where the mere presence of a doctor can up their BP. It's best taken by a nurse, in a quiet room after a 5 min rest, sitting in a straight-backed chair with your feet on the floor! Take at least two repeats on two different occasions, then average the readings, if you're thinking of diagnosing hypertension. Small details matter.
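The averaging step described above is simple enough to sketch. This is illustrative only, with made-up readings (not patient data), assuming two readings on each of two occasions:

```python
def average_bp(readings):
    """Average (systolic, diastolic) pairs taken across multiple occasions."""
    systolics = [s for s, _ in readings]
    diastolics = [d for _, d in readings]
    return (sum(systolics) / len(systolics),
            sum(diastolics) / len(diastolics))

# Hypothetical readings in mmHg: two per occasion, two occasions
readings = [(142, 88), (138, 86), (135, 85), (137, 87)]
print(average_bp(readings))  # (138.0, 86.5)
```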

Jean Smith

Yeah, I don't do well with automated blood pressure devices due to congenital heart issues. Thankfully, my previous heart failure nurse was well up to date with the research and always took manual readings. I meet my new one tomorrow - I hope she's as good.

Straker13

Waikato is beginning the rollout of the bespoke Centric inpatient electronic clinical record system, hot on the heels of Taranaki. I believe that Centric is in use across Auckland hospital services. One of the selling points is an Observations module that takes both manual entries by clinical staff and direct upload from suitable devices.

‘Instantaneous’ observation upload raises lots of questions, such as sampling intervals, and whether data are truly instantaneous versus averaged or summative. The finer the granularity, the bigger the database. Analysis of such data depends on several considerations, and assumptions can hide, or miss, key data points.

I have not asked, but now will, how the observation data is stored and how the presentation layer is configured. Manual entry is more discrete than direct upload from monitors, which has implications that users should understand.

Thanks for your thought-provoking series, Dr Jo.

ngrovotny

OK, first, the bad news.

This did not lead me to fundamentally revise *my* perspective on the way to do big-picture thinking. But that was an extremely high bar, because I've been noodling around with these same concepts for virtually my entire life, too. I just came to do so more through trying to understand computer operations and human psychology. The computer stuff was relatively simple, but how to *communicate* to people what they really needed to know is what honed my thinking.

Now the good news...

The impressive success we've had over the past decade with LLMs/chatbots seems to me to imply that we're on the cusp of being able to better integrate the raw data we can collect into exactly the kind of multibillion variable abstract modeling of the universe that will ultimately make our inductive reasoning orders of magnitude more accurate.

We may be inches (or nanometers) from some kind of breakthrough regarding the machine-assisted "data compression" that I think you're referring to with the table of row-counts in that medical data.

As someone who's not at all "quantitative" in my approach to this, I can't help but think it will ultimately be a very small shift (in perspective) which helps people build an LLM or chatbot which can be "trusted" not to appear to believe its dreams are reality. And if that's true, it could lead to that potential "AI singularity" Kurzweil's been chattering about for the past 50 years. (Or whatever.)

And uh... wouldn't that be SOMETHING?

Dr Jo

:) I didn't expect everyone to have an instant paradigm shift.

But regarding LLMs being cracked into shape as you describe, the devil is in the details again. Big-thinking conceptualisation needs to be subservient to the finer details, once more! Or, more precisely, we need to look across all levels ...

Although I think it's likely that within a decade or two, we'll have capable bots that can do most things we can, I don't think that 'shift' will be small. LLMs unfortunately are a bit of a dead end, as I've explained previously, e.g. https://drjo.substack.com/p/how-much-does-a-thought-cost. This also ties in with the current inability of these bots to do proper causal and counterfactual reasoning; we cannot anticipate that LLMs will ever do this, as they're stuck on Pearl Level 1 *by their very nature*.

Cheers, Dr Jo

ngrovotny

Indeed, I'm an enthusiastic reader of your work, and I agree completely that "as currently designed," LLMs are a dead end. (And also, I believe I've seen you gesture at the notion that this is going to get far worse before it could possibly get any better, and I agree 100% with that as well.)

Their flaw, in my view, seems to be that there's no effective mechanism for them to be "chained" to the shared reality in a manner akin to that of human and animal minds.

That is what I mean by a "small" shift. My take is that, at this moment, the gradient descent work which makes LLMs/chatbots "sound like" thinking agents is already done. But they lack any "hard and fast" anchoring to physical consequences which was so instrumental in the evolution of biological brains.

Any organism that cannot make a fundamental distinction between a THEORY about how most efficiently to gather food and the actual process will DIE. Chatbots/LLMs are not subject to that kind of "game-ending" consequence. They could be said to "live" exclusively in a world of theory, in which nothing bears ultimate relevance.

So I speculate that perhaps we just need a breakthrough in terms of how to ensure that the same kind of automated operations can be *mechanistically* self-correcting.

----------

For a very light but I think vaguely related source of insight, have you ever watched the movie Cosmopolis? I like to think of it as an allegory for a "self" interacting with numerous psychological systems as it attempts to make behavioral decisions.

Douglas Cox

Just a guess, but it seems like all of the above is in AI territory, where AI thrives on looking at everything involved and building a solution that humans can't easily match.

Bernard Peek

I've worked on AI systems, and I've had to explain them to a room full of company directors. We didn't claim that our systems generated correct answers; what they did was generate plausible answers. In our application that was all that was required. But lives weren't at stake.