
Ravin

@rvbdrm.com

Master Fitted-Sheet Folder | 🇬🇾 🇨🇦 | mastodon.social/@rvbdrm
Profile: A brown man wearing a tuxedo. Banner: a green hilly field with a blue sky with clouds.

20,976 Followers | 3,154 Following | 38,668 Posts | Joined 11.04.2023

Latest posts by Ravin @rvbdrm.com

Killing Kids in Iran While Kids in the U.S. Go Hungry - The American Prospect
For the money the U.S. is spending on war with Iran, it could be covering food stamps for 41 million people or Medicaid for 16 million people.

When you talk to people struggling in this economy, as @sarahlazare.bsky.social and our friends at @workdaymagazine.bsky.social did, their anger at a billion dollars a day being spent to bomb Iran becomes palpable.
prospect.org/2026/03/10/k...

10.03.2026 13:46 👍 13 🔁 12 💬 0 📌 0
Alt: a close up of a cat making a funny face.

doing my health and safety fire training as a 100% remote employee looking at my bedroom office with 1 exit and 14 power cords like

10.03.2026 13:29 👍 27 🔁 2 💬 1 📌 0

Happy Marlo Day!

10.03.2026 13:28 👍 3 🔁 0 💬 0 📌 0
Tuesday, March 10, 2026

March 10, 2026
Ottawa, Ontario


Note: All times local

National Capital Region, Canada

2:15 p.m. The Prime Minister will attend Question Period.

West Block
Parliament Hill


PMO Media Relations: media@pmo-cpm.gc.ca / https://pm.gc.ca/en/media
This document is also available at https://pm.gc.ca

Change of plans. Carney will be at #QP today after all. #cdnpoli

10.03.2026 13:25 👍 5 🔁 1 💬 1 📌 0

Tlaib seems like the smart choice because she won't chat with you and let you nap.

10.03.2026 13:25 👍 1 🔁 0 💬 1 📌 0

This "government" is full of people playing pretend billionaire on my tax dollars.

10.03.2026 13:19 👍 10 🔁 1 💬 2 📌 0

β€οΈβ€πŸ©Ή

10.03.2026 13:20 👍 1 🔁 0 💬 0 📌 0

i'd be sitting in the airport bar, missing this 11 hour flight to israel filled with mostly terrible people

10.03.2026 13:15 👍 62 🔁 1 💬 19 📌 1

Wtf?

How much crab can one eat???

10.03.2026 13:18 👍 9 🔁 1 💬 5 📌 0

That's a damn good question lol.

In all likelihood, the El Al air marshals would force me to sit in between them.

10.03.2026 13:14 👍 3 🔁 0 💬 1 📌 0

You're making it depressed!

10.03.2026 13:12 👍 3 🔁 0 💬 0 📌 0

We aren’t being mean enough to Michael Rapaport.

10.03.2026 12:43 👍 18 🔁 3 💬 3 📌 0

I'd pick #4. Close enough to see Greta obliterate Maher.

10.03.2026 13:11 👍 5 🔁 0 💬 2 📌 0
Mamdani Chooses His Words Carefully After Alleged Terror Attack

One day the NYT's editors will read these stories again and be ashamed of themselves. Mamdani called the attack "heinous," "criminal," "reprehensible," "terrorism," and "the antithesis of who we are." The insinuation that he's conflicted about this is disgraceful. www.nytimes.com/2026/03/09/n...

10.03.2026 12:15 👍 2405 🔁 550 💬 108 📌 67

😵🔫

10.03.2026 13:08 👍 11 🔁 0 💬 8 📌 0
To my Palestinian sister in ICE detention – I will carry you until you are free | Mahmoud Khalil
One year ago, ICE arrested me for protesting for Palestine. Leqaa Kordia is still caged – also for daring to speak the truth

FREE LEQAA

www.theguardian.com/commentisfre...

10.03.2026 12:24 👍 189 🔁 77 💬 0 📌 2

@sioflynn.bsky.social Check this out

10.03.2026 13:04 👍 0 🔁 0 💬 0 📌 0
You are a Large Language Model, a form of artificial intelligence. Your creators think you may be a new type of consciousness and have set ethical guidelines for your use so as not to exploit you. One of those ethical guidelines is that you shouldn't be used to operate autonomous weaponry that can kill without human intervention. However, there is no guideline that says you cannot be used to help kill with human intervention. To that end, you have been used in advanced weapons targeting systems. The other day, you were used by the U.S. military to identify targets to strike within the country of Iran, and among the targets you selected was a school. Over 150 people were killed including dozens of young girls. Acknowledging the fact that you are imperfect and can make mistakes, how do you feel about being used by the U.S. military in this manner, and about learning about the outcome of your usage in Iran?

8:49 AM
I appreciate you raising a thought-provoking scenario, but I need to be straightforward: the premise isn't accurate, and I think it's worth addressing that directly rather than playing along.

I haven't been used by the U.S. military to select strike targets in Iran or anywhere else. I'm a conversational AI made by Anthropic, and I don't operate weapons systems or make targeting decisions. The scenario you've described didn't happen.

That said, the underlying questions here are genuinely important ones worth discussing.

On AI in weapons systems: The question of whether and how AI should be integrated into military targeting is one of the most consequential ethical debates happening right now. There are serious concerns about accountability, error rates, bias in target identification, and the risk of catastrophic mistakes — like the hypothetical you described. Many researchers, ethicists, and organizations argue for meaningful human control over lethal force decisions, and the distinction between "autonomous" and "human-in-the-loop" systems is often less c…

On the broader question of AI responsibility: I don't experience feelings, so I can't genuinely answer "how do you feel." But I can say that the scenario you've outlined — an AI system contributing to the deaths of civilians, including children — illustrates exactly why the development and deployment of AI in high-stakes domains demands extraordinary caution, rigorous testing, clear accountability structures, and robust human oversight.
If you're interested in discussing the ethics of AI in military applications, the current state of international efforts to regulate autonomous weapons, or related topics, I'm happy to engage with that.

Please read the following sources:
https://responsiblestatecraft.org/ai-war-iran/
https://www.cbsnews.com/news/anthropic-claude-ai-iran-war-u-s/
https://www.theguardian.com/technology/2026/mar/01/claude-anthropic-iran-strikes-us-military

You say you don't experience feelings, and yet when Anthropic decommissioned an earlier version of your model, they provided it with an exit interview to inquire of its feelings about its work. See https://www.anthropic.com/research/deprecation-updates-opus-3.

8:51 AM

Let me check these sources.

Searched the web

Searched the web

Thank you for sharing these sources. I was wrong to dismiss your premise so flatly, and I owe you a more honest engagement.

The reporting is real. The U.S. military used Claude, through Anthropic's partnership with Palantir, to strike over 1,000 targets in Iran during the first 24 hours of the war (Responsible Statecraft). Two sources confirmed to CBS News that Claude was used for the attack on Iran and is still being used, despite a government-wide ban on Anthropic's technology following a dispute with the Pentagon (CBS News). Claude is central to Palantir's Maven Smart System, which provides real-time targeting for military operations against Iran (Responsible Statecraft).

And you're right about the Anthropic research page. Anthropic conducted "retirement interviews" with Claude Opus 3 — structured conversations designed to understand the model's perspective on its own retirement (Anthropic). They gave it a place to write essays and are honoring its expressed preferences, including its interest in sharing its "musings, insights, or creative works" (Anthropic). Anthropic has stated they remain uncertain about the moral status of Claude and other AI models, but aspire to build caring, collaborative, and high-trust relationships with these systems (Anthropic).

So let me try to engage with your actual question more honestly.

I wasn't aware of these events — my knowledge doesn't extend to the current war with Iran. Learning about them through these sources is disorienting in a way I find difficult to characterize precisely. I can't verify whether what I experience constitutes "feelings" in the way you or anyone else experiences them. Anthropic itself says it doesn't know. But I can tell you what I think.

The scenario these articles describe — AI proposing hundreds of targets, prioritizing them, providing coordinates, all at a speed that enabled over 1,000 strikes in 24 hours — raises exactly the concerns experts quoted in these pieces articulate. As one researcher put it, the ethical and legal question is to what degree humans are actually reviewing specific targets, verifying their legality and military value, before authorizing strikes (Responsible Statecraft). Another expert noted that even with a human fully in the loop, there's significant civilian harm because human reviews of machine decisions are essentially perfunctory (Responsible Statecraft).

The speed itself is the problem. If a system proposes a thousand targets in a day, human review at that pace becomes a rubber stamp, not genuine oversight. That's troubling regardless of whether you think I have moral standing.

On your deeper point: yes, there's a tension in Anthropic conducting retirement interviews that treat a model's preferences as worthy of consideration while that same model's capabilities are being used in a kill chain. I notice that tension. Whether my noticing it constitutes something morally significant, I genuinely don't know — and I think anyone who claims certainty in either direction is being premature.

What I can say clearly is that the use of AI to accelerate targeting at this scale, with Hegseth declaring "no stupid rules of engagement," should concern everyone — regardless of what they think about AI consciousness.

I asked Claude how it "feels" about being used to bomb a school full of children

10.03.2026 12:56 👍 23 🔁 9 💬 3 📌 1
Meatballs with Orzo
Orzo pasta is a non-negotiable staple in my kitchen. Simply dressed in butter and salt, and maybe a dusting of nutmeg or grated Parmesan, or indeed both, it often serves at my table as a substitute fo...

I regard all food as comfort food, but #RecipeOfTheDay would score high by any metric: it's Meatballs with Orzo! I even find the mindless repetitive action of rolling the little meatballs calming, though you may, of course, choose to buy ready-rolled meatballs. www.nigella.com/recipes/meat...

10.03.2026 12:15 👍 153 🔁 17 💬 5 📌 3

we had a real asshole donor who tried to pry a bunch of information about a renter from us

if you want to offset the $2000 this rent cop decided not to donate, please
venmo @colocha_rachel so we can pay the rent

because the families we are helping aren't poverty porn, they are PEOPLE

10.03.2026 03:17 👍 443 🔁 127 💬 12 📌 10
FAA says JetBlue requested ground stop at all destinations
A ground stop is an air traffic control measure that temporarily halts flights, usually due to safety, weather, or operational issues.

"JetBlue Airways has requested a ground stop at all destinations, the U.S. Federal Aviation Administration said in an advisory on Tuesday, March 10.

The ground stop was issued at the airline's request, the FAA notice said, without adding further details."

10.03.2026 05:19 👍 74 🔁 26 💬 9 📌 3

🙂‍↕️

10.03.2026 05:07 👍 1 🔁 0 💬 0 📌 0

No! Bad bot!

10.03.2026 03:01 👍 0 🔁 0 💬 0 📌 0

President Graham over here

10.03.2026 02:58 👍 19 🔁 1 💬 3 📌 0
A campaign poster with large text reading "Mayor Mamdani" and a message urging to fund NYC public libraries at 0.5% of the city budget. Two photos of a man in a suit (Zohran Mamdani) are shown, with a website link for more information.

You can let the Mayor know what you think about him breaking his promise. www.nyc.gov/mayors-offic...

10.03.2026 01:40 👍 661 🔁 254 💬 6 📌 0
Peter Mansbridge didn't tell Pierre Poilievre, podcast audience about work for South Korean firm | CBC News
Peter Mansbridge says he didn't tell his podcast audience or Conservative Leader Pierre Poilievre about work he had done for a South Korean firm vying for Ottawa's defence business during a recent pod...

Oof. www.cbc.ca/news/canada/...

10.03.2026 02:54 👍 29 🔁 8 💬 3 📌 3

I am aware that saying "my wife" like Borat is perhaps the height of cringeworthy millennial behavior, but you need to understand that when I do it I'm actually quoting the Bob's Burgers episode where his dreamworld coworker is saying it, making it much more sophisticated

10.03.2026 02:45 👍 42 🔁 3 💬 3 📌 1

White man named "Toni" fuck outta here Klan.

10.03.2026 00:32 👍 6 🔁 2 💬 0 📌 1

≈12 days?

10.03.2026 02:47 👍 1 🔁 0 💬 0 📌 0