
David Landy

@david-landy

72
Followers
82
Following
16
Posts
11.10.2024
Joined

Latest posts by David Landy @david-landy

This. So many people get upset about this, but getting mad at ChatGPT for "lying" to you is quite •literally• like getting mad at a magic 8 ball for giving an answer to your question that looked plausible but turned out to be wrong.

19.06.2025 14:30 👍 1 🔁 0 💬 0 📌 0
Quirks of cognition explain why we dramatically overestimate the size of minority groups | PNAS Americans dramatically overestimate the size of African American, Latino, Muslim, Asian, Jewish, immigrant, and LGBTQ populations, leading to conce...

Cool paper about why we overestimate the size of minority groups!

From @brianguay.bsky.social, @tylermarghetis.bsky.social, Cara Wong, and @david-landy.bsky.social

www.pnas.org/doi/10.1073/...

02.04.2025 17:57 👍 8 🔁 5 💬 0 📌 0

I've always liked "a comathematician is a device for turning cotheorems into ffee" as an exceptionally opaque joke. You have to know BOTH category theory AND the source joke it's referencing, AND if you do, it's hilarious.

28.02.2025 14:06 👍 14 🔁 4 💬 0 📌 1

I'm a scientist directly because of this program--it got me my first actual skills and training, which absolutely got me my next job (thanks, IRAF), which got me to grad school...it was also an amazing research experience I could not have had otherwise. I'm gutted to see this.

28.02.2025 13:58 👍 5 🔁 0 💬 0 📌 0
Why prayer is a problem-solving practice that works | Aeon Essays Praying is a cognitive practice full of problem-solving resources. You can learn from it even if you don’t want to do it

I wrote an article on some of my dissertation research on the problem-solving power of prayer for @aeon.co and today it is published!

aeon.co/essays/why-p...

I'm proud of the article and it was fun to write for a non-academic outlet! 🤩

09.01.2025 16:05 👍 6 🔁 3 💬 1 📌 0

I think I disagree: given what AI systems are now, we are very far from accountability. But part of the whole point of science is to build theories which rely as little as possible on trust, but instead can be externally verified. The challenge is that that's hard to do with current AI systems.

27.12.2024 19:15 👍 4 🔁 1 💬 1 📌 0

I think the unfortunate fact that this seems to happen a lot is the valuable point I see you making, and, so stated, it seems right.

27.12.2024 19:09 👍 1 🔁 0 💬 0 📌 0

So I can use my own neural systems--but the validity of an interpretation can't hang on the mysteries. I can use integers--but not the truth of Goldbach's conjecture! I can use an AI system (of whatever sort), but not if my inferences hang on properties of it that I don't understand.

27.12.2024 19:09 👍 1 🔁 0 💬 1 📌 0

The distinction is between things involved in a research pipeline or pathway of developing an interpretation--which can and do involve all sorts of mysterious processes--and the chain of inferential tools which sustain and explicate an interpretation. Those should not involve mysterious properties.

27.12.2024 19:09 👍 1 🔁 0 💬 1 📌 0

I'm doing neither: I also point to the integers. They are pretty crucial to my pipelines, but I don't fully understand their properties (primes: complicated). None of the systems in my pipeline are ones I •fully• understand, which is my point here.

27.12.2024 19:09 👍 1 🔁 0 💬 1 📌 0

I think there is a valid concern here, but it's obscured. I don't •fully• understand my own mind, or those of my collaborators, but I use both for interpretation. I don't •fully• grok the integers. The problem arises when a conclusion hangs on the tool, not when it is used anywhere in a pipeline.

27.12.2024 02:41 👍 2 🔁 0 💬 1 📌 0

I mean, they can SAY anything. But they have competing interests, or at least the appearance of them, since the company's likelihood of funding future research may (at least) seem to depend on the results.

17.12.2024 05:41 👍 2 🔁 0 💬 0 📌 0

I know I'm a little late to the splash this YouGov piece made on BlueSky, but I want to talk about it!

The incoming 🔥hot take🔥 is: these errors are not a big deal!!!

03.12.2024 22:28 👍 38 🔁 9 💬 3 📌 2

We also did this earlier work, which I think was the prototype for the cross-question calibration approach: escholarship.org/content/qt0g..., but @brianguay.bsky.social would know more about any direct connection.

25.11.2024 19:24 👍 1 🔁 0 💬 0 📌 0
Psychonomic Bulletin & Review Psychonomic Bulletin & Review provides coverage spanning a broad spectrum of topics in cognitive psychology, including but not limited to action, perception, ...

Oh sure sorry! That image is from, I believe, a YouGov article which connects to (and links to) some of our work in this space. Lots of connected projects, but the original citation is this one: link.springer.com/journal/1342...

25.11.2024 19:19 👍 0 🔁 0 💬 0 📌 0

In other work we consistently find little relation to experienced fear or exposure. Other folks have found that correcting misperceptions doesn't change beliefs. So I think this phenomenon is largely unrelated to rampant transphobia.

25.11.2024 13:57 👍 1 🔁 0 💬 0 📌 0

As one of the authors of the meta review linked there, I'd respectfully disagree with this interpretation. People give systematically weird answers about proportions, yes. But that doesn't seem to have much to do with their experience of threat, fear, or even beliefs. It's about numbers being hard.

25.11.2024 13:54 👍 2 🔁 0 💬 2 📌 0

This is exactly the mix of feelings I think we have all felt doing this work! It's not exactly groundbreaking, but it seemed important.

25.11.2024 13:25 👍 3 🔁 0 💬 0 📌 0

This.

25.11.2024 13:23 👍 0 🔁 0 💬 0 📌 0

Trust me, I know that many of the students don’t do the reading and are finding other ways to skate by, but there are still so many that are so brilliant and so thoughtful, and so eager to learn, and it doesn’t seem that much different from when I was an undergrad TBH

22.10.2024 22:59 👍 16 🔁 3 💬 0 📌 0