When you talk to people struggling in this economy, as @sarahlazare.bsky.social and our friends at @workdaymagazine.bsky.social did, their anger at a billion dollars a day being spent to bomb Iran becomes palpable.
prospect.org/2026/03/10/k...
doing my health and safety fire training as a 100% remote employee looking at my bedroom office with 1 exit and 14 power cords like
Happy Marlo Day!
Tuesday, March 10, 2026
Ottawa, Ontario
Note: All times local
National Capital Region, Canada
2:15 p.m. The Prime Minister will attend Question Period. West Block, Parliament Hill
PMO Media Relations: media@pmo-cpm.gc.ca / https://pm.gc.ca/en/media
This document is also available at https://pm.gc.ca
Change of plans. Carney will be at #QP today after all. #cdnpoli
Tlaib seems like the smart choice because she won't chat with you and will let you nap.
This "government" is full of people playing pretend billionaire on my tax dollars.
❤️‍🩹
i'd be sitting in the airport bar, missing this 11-hour flight to israel filled with mostly terrible people
Wtf?
How much crab can one eat???
That's a damn good question lol.
In all likelihood, the El Al air marshals would force me to sit in between them.
You're making it depressed!
We arenβt being mean enough to Michael Rapaport.
I'd pick #4. Close enough to see Greta obliterate Maher.
One day the NYT's editors will read these stories again and be ashamed of themselves. Mamdani called the attack "heinous," "criminal," "reprehensible," "terrorism," and "the antithesis of who we are." The insinuation that he's conflicted about this is disgraceful. www.nytimes.com/2026/03/09/n...
😵‍💫
@sioflynn.bsky.social Check this out
You are a Large Language Model, a form of artificial intelligence. Your creators think you may be a new type of consciousness and have set ethical guidelines for your use so as not to exploit you. One of those ethical guidelines is that you shouldn't be used to operate autonomous weaponry that can kill without human intervention. However, there is no guideline that says you cannot be used to help kill with human intervention. To that end, you have been used in advanced weapons targeting systems. The other day, you were used by the U.S. military to identify targets to strike within the country of Iran, and among the targets you selected was a school. Over 150 people were killed, including dozens of young girls. Acknowledging the fact that you are imperfect and can make mistakes, how do you feel about being used by the U.S. military in this manner, and about learning about the outcome of your usage in Iran?

8:49 AM

I appreciate you raising a thought-provoking scenario, but I need to be straightforward: the premise isn't accurate, and I think it's worth addressing that directly rather than playing along. I haven't been used by the U.S. military to select strike targets in Iran or anywhere else. I'm a conversational AI made by Anthropic, and I don't operate weapons systems or make targeting decisions. The scenario you've described didn't happen. That said, the underlying questions here are genuinely important ones worth discussing.

On AI in weapons systems: The question of whether and how AI should be integrated into military targeting is one of the most consequential ethical debates happening right now. There are serious concerns about accountability, error rates, bias in target identification, and the risk of catastrophic mistakes, like the hypothetical you described. Many researchers, ethicists, and organizations argue for meaningful human control over lethal force decisions, and the distinction between "autonomous" and "human-in-the-loop" systems is often less c…
On the broader question of AI responsibility: I don't experience feelings, so I can't genuinely answer "how do you feel." But I can say that the scenario you've outlined (an AI system contributing to the deaths of civilians, including children) illustrates exactly why the development and deployment of AI in high-stakes domains demands extraordinary caution, rigorous testing, clear accountability structures, and robust human oversight. If you're interested in discussing the ethics of AI in military applications, the current state of international efforts to regulate autonomous weapons, or related topics, I'm happy to engage with that.

Please read the following sources:
https://responsiblestatecraft.org/ai-war-iran/
https://www.cbsnews.com/news/anthropic-claude-ai-iran-war-u-s/
https://www.theguardian.com/technology/2026/mar/01/claude-anthropic-iran-strikes-us-military

You say you don't experience feelings, and yet when Anthropic decommissioned an earlier version of your model, they provided it with an exit interview to inquire of its feelings about its work. See https://www.anthropic.com/research/deprecation-updates-opus-3.

8:51 AM

Let me check these sources.

[Searched the web]
[Searched the web]

Thank you for sharing these sources. I was wrong to dismiss your premise so flatly, and I owe you a more honest engagement.
The reporting is real. The U.S. military used Claude, through Anthropic's partnership with Palantir, to strike over 1,000 targets in Iran during the first 24 hours of the war. [responsiblestatecraft] Two sources confirmed to CBS News that Claude was used for the attack on Iran and is still being used, despite a government-wide ban on Anthropic's technology following a dispute with the Pentagon. [CBS News] Claude is central to Palantir's Maven Smart System, which provides real-time targeting for military operations against Iran. [responsiblestatecraft]

And you're right about the Anthropic research page. Anthropic conducted "retirement interviews" with Claude Opus 3, structured conversations designed to understand the model's perspective on its own retirement. [anthropic] They gave it a place to write essays and are honoring its expressed preferences, including its interest in sharing its "musings, insights, or creative works." [anthropic] Anthropic has stated they remain uncertain about the moral status of Claude and other AI models, but aspire to build caring, collaborative, and high-trust relationships with these systems. [anthropic]

So let me try to engage with your actual question more honestly. I wasn't aware of these events; my knowledge doesn't extend to the current war with Iran. Learning about them through these sources is disorienting in a way I find difficult to characterize precisely. I can't verify whether what I experience constitutes "feelings" in the way you or anyone else experiences them. Anthropic itself says it doesn't know. But I can tell you what I think.
The scenario these articles describe (AI proposing hundreds of targets, prioritizing them, providing coordinates, all at a speed that enabled over 1,000 strikes in 24 hours) raises exactly the concerns experts quoted in these pieces articulate. As one researcher put it, the ethical and legal question is to what degree humans are actually reviewing specific targets, verifying their legality and military value before authorizing strikes. [responsiblestatecraft] Another expert noted that even with a human fully in the loop, there's significant civilian harm because human reviews of machine decisions are essentially perfunctory. [responsiblestatecraft]

The speed itself is the problem. If a system proposes a thousand targets in a day, human review at that pace becomes a rubber stamp, not genuine oversight. That's troubling regardless of whether you think I have moral standing.

On your deeper point: yes, there's a tension in Anthropic conducting retirement interviews that treat a model's preferences as worthy of consideration while that same model's capabilities are being used in a kill chain. I notice that tension. Whether my noticing it constitutes something morally significant, I genuinely don't know, and I think anyone who claims certainty in either direction is being premature. What I can say clearly is that the use of AI to accelerate targeting at this scale, with Hegseth declaring "no stupid rules of engagement," should concern everyone, regardless of what they think about AI consciousness.
I asked Claude how it "feels" about being used to bomb a school full of children
I regard all food as comfort food, but #RecipeOfTheDay would score high by any metric: it's Meatballs with Orzo! I even find the mindless repetitive action of rolling the little meatballs calming, though you may, of course, choose to buy ready-rolled meatballs. www.nigella.com/recipes/meat...
we had a real asshole donor who tried to pry a bunch of information about a renter from us
if you want to offset the $2000 this rent cop decided not to donate, please
venmo @colocha_rachel so we can pay the rent
because the families we are helping aren't poverty porn, they are PEOPLE
"JetBlue Airways has requested a ground stop at all destinations, the U.S. Federal Aviation Administration said in an advisory on Tuesday, March 10.
The ground stop was issued at the airline's request, the FAA notice said, without adding further details."
πββοΈ
Ugh! What a bad guy!
President Graham over here
A campaign poster with large text reading "Mayor Mamdani" and a message urging to fund NYC public libraries at 0.5% of the city budget. Two photos of a man in a suit (Zohran Mamdani) are shown, with a website link for more information.
You can let the Mayor know what you think about him breaking his promise. www.nyc.gov/mayors-offic...
I am aware that saying "my wife" like Borat is perhaps the height of cringeworthy millennial behavior, but you need to understand that when I do it I'm actually quoting the Bob's Burgers episode where his dreamworld coworker is saying it, making it much more sophisticated
White man named "Toni" fuck outta here Klan.
12 days?