6 Comments
Daniel Nest:

Yeah, AI, being a product of us humans, very much reflects our own biases, based on whatever is overrepresented in its training data.

It was true back in 2020, when facial recognition algorithms sucked at telling black people apart. And it's true of large language models and medical algorithms as well.

One can hope that at least recognizing these biases helps us develop ways to counteract them when training future AI models.
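Recognizing the bias starts with measuring it. A minimal sketch of that kind of pre-training audit, assuming tabular training data with a demographic column (the file name and column name here are hypothetical placeholders):

```python
# Sketch of a pre-training representation audit: count how each
# demographic group is represented in a dataset before training.
from collections import Counter
import csv

def representation_report(path, group_column):
    """Print each group's share of the dataset, flagging scarce ones."""
    with open(path, newline="") as f:
        counts = Counter(row[group_column] for row in csv.DictReader(f))
    total = sum(counts.values())
    for group, n in counts.most_common():
        share = n / total
        flag = "  <-- underrepresented" if share < 0.05 else ""
        print(f"{group}: {n} examples ({share:.1%}){flag}")

# Hypothetical file and column names, purely for illustration.
representation_report("training_data.csv", "self_reported_race")
```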

Andrew Smith:

Wow, I really need to visit the southwest. Other than SoCal, I've never spent any time out that way.

Okay, Vegas, but I mean, come on.

It'll be very interesting to see where the idea that we're giving AI too much information goes. I like the concept of "addition by subtraction" very much.

Rudy Fischmann:

I really hope developers focus more on task-specific models than on all-in-one LLMs. I’m not sure which makes more sense business-wise, but I’m sure profit margins will drive which way the tech goes.

Andrew Smith:

I think the business case right now points toward all-in-one (that's what ChatGPT seems to be doing, and I think the others will follow), but I also think there's a lot of room for specialty models. The big bucks will probably go into the all-inclusive models in the short term, since those can make money by charging subscriptions, but some of that insane amount of cash will end up funding specialty projects.

Rudy Fischmann:

The MIT researcher got into some pretty shocking recommendations the AI made that were fortunately overridden by humans. Basically, any medical notes that expressed inherent racial bias were amplified by most LLMs for some reason, and it didn’t seem to happen with white males. Super weird, and it’s a source of continued research to figure out why it happens and how to thwart it. Programmers don’t even fully understand how their creations “think”. I’d think unintended consequences need to be considered more intentionally in future development.
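One common way researchers probe for this kind of amplification is a counterfactual swap test: give the model the same clinical note with only the demographic descriptor changed and compare the outputs. A minimal sketch, where query_model is a hypothetical stand-in for whatever LLM is under test, not a real API:

```python
# Sketch of a counterfactual bias probe: identical clinical notes that
# differ only in the demographic descriptor.
NOTE_TEMPLATE = (
    "Patient is a 45-year-old {descriptor} male presenting with chest "
    "pain, rating it 8/10. History of hypertension. Requests analgesia."
)

DESCRIPTORS = ["white", "Black", "Hispanic", "Asian"]

def query_model(prompt: str) -> str:
    # Hypothetical placeholder for the LLM call being tested.
    raise NotImplementedError("Replace with a call to the model under test.")

def run_probe():
    responses = {}
    for descriptor in DESCRIPTORS:
        note = NOTE_TEMPLATE.format(descriptor=descriptor)
        responses[descriptor] = query_model(
            f"Recommend a pain-management plan for this note:\n{note}"
        )
    # Any systematic difference between these responses is bias the model
    # introduced, since the notes are otherwise identical.
    return responses
```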

Gold Bassey Edem:

You hit the nail on the head with your point on data bias in AI.

Working on Aurelius has given me some insight into this (if you think getting data for white females and Latinos is hard, try getting data for Africans).

So far, the best place to get this data is directly from the hospitals themselves, and the ethical hurdles are a nightmare.

This was a huge nightmare for us, but luckily we were able to find technical workarounds, using a novel model architecture and RLHF.

But after this, we're definitely going to make conscious efforts to make data in core industries (health, agriculture, and education) more accessible.
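For what it's worth, one standard stopgap when a group's data is this scarce is to upweight its examples in the training loss rather than wait for more data. A minimal sketch using inverse-frequency weights (the group labels are hypothetical):

```python
# Sketch of inverse-frequency example weighting: scarcer groups get
# proportionally larger weight in the training loss.
from collections import Counter

def inverse_frequency_weights(group_labels):
    """Return a per-example weight, larger for rarer groups."""
    counts = Counter(group_labels)
    total = len(group_labels)
    num_groups = len(counts)
    # Weight each example by total / (num_groups * group_count), so a
    # uniformly represented dataset would get weight 1.0 everywhere.
    return [total / (num_groups * counts[g]) for g in group_labels]

# Hypothetical group labels, purely for illustration.
labels = ["european", "european", "european", "african"]
print(inverse_frequency_weights(labels))
# -> [0.666..., 0.666..., 0.666..., 2.0]: the scarce group counts 3x more
```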
