Submit your work on causality and networks today!
It's **8** letters at Purdue!!!
Here's a full draft of the upcoming second edition of my "Data Visualization: A Practical Introduction": socviz.co
Super excited to see my first R package `imaginarycss` on CRAN (cran.r-project.org/web/packages...) #R #Network #SNA
It's actually FEWER Misérables
I'm begging my fellow politicians, Illinoisans, and Americans to realize that right now, we aren't fighting over policy or political party.
We're fighting over whether we're going to be a country rooted in empathy and kindness, or one rooted in cruelty and rage.
Stop sharing the "our AI agent made up data" reddit story. It is as fake as the AI you think you are critiquing & the people sharing it have done zero due diligence
Cool/important job alert:
Principal Research Scientist at the Wikimedia Foundation's Research team for the area of knowledge integrity
Job posting: job-boards.greenhouse.io/wikimedia/jo...
Can't driving cars be like that?
Yes, that's it.
AI is very good at some things, like protein folding or (I would argue) summarizing texts or producing arguments (e.g., www.science.org/doi/10.1126/...).
It does not require that we believe that it can think (or think like humans) for these outputs to be valuable.
This is a really nice paper! Thanks for sharing
But I don't see the categorical difference between text summarization and driving?
For example, LLMs are very good at summarizing texts. We don't have to accept that they are thinking like humans to accept that this performance is adequate for many uses?
The OP sounded to me like an argument that driving was impossible to succeed at based on first principles.
Correct me if I'm wrong, but the key argument is that NNs can't be shown to be human-like and indeed there are theoretical reasons to think that they are not.
I am very sympathetic to these arguments but they seem distinct from whether an AI can competently complete a given task acceptably.
But are there cases where we can get close enough? And are there ways to identify those cases?
Isn't Waymo better than people along most metrics we might care about?
I don't see a reason that driving a car wouldn't fit into this category.
What am I missing?
If the argument is that AI can never truly behave like humans until and unless it has a theory of mind, then that is a different argument (and one that is close to tautological).
I would argue that one can do many things that are useful to humans without necessarily behaving like humans.
For example, I may be missing something, but it seems like it's arguing that ML will never be able to succeed in any domain that interfaces with humans / the real world.
But that's clearly not true. For many uses (programming, summarization, argument creation) AI is very good.
That's a very formal argument that I'll admit I don't understand. I guess I'm asking for a dumbed down version or an example of what it means in practice.
My understanding is that self driving tech uses neural nets but not LLMs. Is that wrong?
Isn't there an argument that they will never be perfect without a theory of mind, but they can be good enough?
After all, humans have a theory of mind and still often crash their vehicles!
I don't think I understand the argument. Is it that people are too unpredictable without a theory of mind?
A large enough training set and complex enough ML model should be able to predict what others will do pretty well? Is the arg that training sets can never be large enough to cover all cases?
Do ordinary Republicans and Democrats really avoid each other in everyday life? In a new working paper with Delia Baldassarri, we present descriptive and experimental evidence to challenge the view that partisanship drives the formation of social relationships.
osf.io/preprints/so...
1/15
Wikipedia puzzle globe logo with two gray gears symbolizing settings or tools. Text reads: Wikipedia's automated helpers. Thousands of approved and regulated bots quietly fix links, revert vandalism, format "Talk" pages, and much more, all empowering the human community behind the world's free encyclopedia.
Bots help keep Wikipedia running. They fix broken links, revert vandalism, and tag pages so volunteers can focus on writing. Each bot is reviewed by the community and must be harmless and accountable.
Meet the bots ➡️ w.wiki/WPH
I'm very worried about AI ruining large anonymous platforms like Reddit.
My hope is that small communities survive because they aren't great targets but I'm not confident
A few weeks back I gave a talk at Stanford about some of our work on generative AI in the online social world.
The video is at www.youtube.com/watch?v=IH-h...
Preprints of the work are at:
arxiv.org/abs/2601.10754
arxiv.org/abs/2601.20100
Have any academic researchers been able to get access to Reddit data through their new "Responsible Builder Policy" and approval process?
www.reddit.com/r/redditdev/...
I gave a talk at Princeton about LLMs in the online social world - the video is now up at www.youtube.com/watch?v=ITjI...
Slides are at jeremydfoote.com/presentation...
Do you have privacy settings turned on so that cookies are deleted? Because I stay signed in to many sites