It is not an extreme position to want to be able to think for yourself, to write by just typing words in, to learn things from other people or to make art with your hands.
Having developed a means of capturing K-12 teachers with its Google-certified educator program, Google sets its sights on higher ed w/ its faculty AI fellowship, where you can "[b]ecome an Ambassador, guiding your peers and the academic community through the evolving landscape of higher education."
The pro-AI people want to talk about data, but never about all the scientific research that shows AI usage produces worse learning outcomes: www.aicaution.ca/why/
The connections between AI and psychosis are growing every day.
We do not need this in BC classrooms.
The risks outweigh the benefits.
Many people have been using our interactive crash map, which plots all crashes leading to serious injuries or fatalities since 2023.
We just did a huge revamp of it, and introduced the ability to filter by location, demographic, mode share, and more. Check it out! visionzerovancouver.ca/crash-map/
He also conflates using general-purpose text-generation tools like ChatGPT with deep learning that can legitimately be used to identify cancer in images. Whoops.
Microsoft and OpenAI's ideal future is one in which people's ability to reason is so disrupted that they must pay a subscription in order to think.
Students who are unable to write, unable to think without the aid of "AI", are subscribers for life.
Microsoft and Google know this, and their aggressive investments in education make it clear.
He posted about using them a few weeks ago
tante.cc/2026/02/20/a...
He isn't recommending students use LLMs to write? But he uses LLMs, surely students should use them too?
Maybe we should keep it away from kids then?
every quantitative measure is actually a stack of qualitative assumptions in a trenchcoat
This technology is fundamentally insecure.
No matter how safe Microsoft claims it is, everything students type into Copilot goes to a server somewhere, and can be leaked.
@vsb39.bsky.social what is your plan for when student personal data gets leaked?
I'm suing Grammarly over its paid AI feature that presented editing suggestions as if they came from me - and many other writers and journalists - without consent.
State law requires consent before someone's name can be used for commercial purposes.
www.wired.com/story/gramma...
This paper on how people "moralize" when talking about AI is a fascinating microcosm of the mindset that seems to be more and more common in pro-AI academia:
there was also a Stanford study that found that a surge in AI slop by lazy people actually created significantly more work for others who had to decipher and correct it
this "I'm being efficient" (but not really) vibes well with the American obsession with artifice and appearing productive
The AI image and the comparisons with anti-vaxx make it clear that the lab is extremely pro-AI. The student who wrote the paper wouldn't have been allowed to publish anything critical of AI.
They quote statistics that young people are more anti-AI than those 55 or older.
They say this is because young people are around negative influences more.
Not because they will live and work in a world affected by AI for their entire lives, whereas retirees won't.
Environmental harm, causing psychosis, deskilling, centralizing power, destroying jobs, creating false information, etc. etc. are not mentioned. Maybe those count as moralizing?
In many places in the paper and the poster they assert that people refuse to use AI even when AI is "better" or that it is "beneficial".
There is no citation of why it is better, or who it is better for. AI is "a priori" just... "Better".
Instead they frame both anti-AI and anti-vaccine as similarly anti-science "woo woo" beliefs that should be derided.
First, in the wonderful AI-slop illustration they choose to associate anti-AI stances to anti-vaccine and anti-GMO beliefs.
They could have picked people who refuse to use Amazon, or vegetarians, who similarly make their own lives harder, but do so for moral reasons.
Google is buying its way into education by paying for training.
Vancouver is susceptible to the same kinds of tactics from other AI corporations.
iste.org/news/iste-as...
He is not a professional. He tries to gain notoriety by revealing that his terrible opinions were actually written by AI. Just another AI zealot.
NEW: Lawmakers in 16 states introduced bills this year to cap how much time students are allowed to use laptops in school, or which apps schools can use
The ed tech industry is pushing back, insisting educational screen time isn't the same as recreational screen use
www.nbcnews.com/news/educati...
When I joined as the head engineer of the Torment Nexus project, it was to work on fascinating technical problems and make the world a better place along the way. I am appalled to discover that the Torment Nexus would be used this way and, now that my options have vested, will be leaving the project
Being a Luddite Is Cool and All, but Have You Seen the Hilarious Tapestries These New Looms Are Making? #ai www.mcsweeneys.net/articles/bei...