#AIRisks

Latest posts tagged with #AIRisks on Bluesky

The AI Attack Surface: Why AI Tools Are Creating New Security Risks

🔐🤖 The AI attack surface is expanding faster than most security teams realize. While AI tools improve efficiency, they also introduce new cybersecurity risks like prompt injection, data leakage, and model manipulation.
#Cybersecurity #AISecurity #AIrisks #CyberLens

www.thecyberlens.com/p/the-ai-att...

Shadow AI Risks Rise as Employees Use Generative AI Tools at Work Without Oversight

Artificial intelligence has moved into everyday office software with surprising speed, and adoption is outpacing oversight. For companies, the question is no longer who uses AI, but how safely it runs.

Research cited by security specialists suggests that roughly 83 percent of UK workers regularly use generative AI for everyday duties such as finding data, condensing reports, and creating written material. Because tools like ChatGPT simplify repetitive work, efficiency gains are most visible in fast-paced departments where speed matters.

Rapid uptake brings fresh security risks, however. More staff are bringing personal AI software into the workplace without organizational approval, a shift experts label "shadow AI": unapproved systems running inside business environments. These tools handle internal information out of sight of IT teams, and oversight gaps widen when they operate outside monitored channels. Almost three out of four people using AI at work introduce outside tools without approval, and close to half rely on personal accounts rather than official platforms when working with generative models. Security teams often remain unaware, leaving sensitive information exposed.

What stands out most is the nature of the details staff share with AI platforms. Because generative models depend on what users feed them, workers frequently paste written content, programming scripts, or files straight into the interface. Such inputs often include sensitive company records, proprietary knowledge, personal client data, and sometimes segments of private software code. According to the research, around 93 percent of workers have fed work details into unofficial AI systems, and roughly a third admit that confidential client material was among those inputs. Once such data lands on external servers, companies often lose control over how it is stored, handled, or used in the future.

One real incident showed how fast things can go wrong. In 2023, Samsung employees exposed private code and confidential meeting details by pasting them into ChatGPT. The data was not hacked; it was handed over during routine work. Without strong rules in place, such tools become quiet exits for secrets, and trusting outside software too quickly opens gaps even careful firms miss. Security specialists also stress that compromised AI accounts may not only leak data but also unlock wider company networks through exposed chat logs.

Financial firms worry about breaking GDPR rules, while hospitals fear HIPAA violations when staff misuse AI tools unexpectedly; one slip with these systems can trigger audits far beyond the IT department's control. Outright bans tend to fail anyway: staff seek workarounds when they believe a tool helps them get things done faster. Organizations may do better to shift attention toward AI oversight methods that reveal how these tools are actually used across teams.

By watching how systems are accessed and spotting unapproved software, clarity often emerges around acceptable use. Clear rules tend to be more effective than bans for risk control, especially when workers keep using innovative tools quietly. Guidance like this supports balance: safety improves without blocking progress.

Shadow AI Risks Rise as Employees Use Generative AI Tools at Work Without Oversight #AIRisks #AISecurity #AItechnology
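
The monitoring approach the post describes can start small. As a hedged illustration (none of this comes from the post itself), the sketch below scans a DNS query log for lookups of well-known generative-AI domains and counts them per client. The CSV schema (client_ip and query columns) and the domain list are placeholder assumptions to adapt to a real resolver's log format.

```python
# Hedged sketch of "shadow AI" detection from DNS logs. The CSV schema
# (client_ip, query columns) and the domain list are placeholders to adapt
# to a real resolver's log format and your environment.
import csv
from collections import Counter

GENAI_DOMAINS = ("openai.com", "chatgpt.com", "claude.ai", "gemini.google.com")


def shadow_ai_report(log_path: str) -> Counter:
    """Count DNS lookups per client IP that hit known gen-AI domains."""
    hits: Counter = Counter()
    with open(log_path, newline="") as fh:
        for row in csv.DictReader(fh):
            query = row["query"].lower().rstrip(".")
            if any(query == d or query.endswith("." + d) for d in GENAI_DOMAINS):
                hits[row["client_ip"]] += 1
    return hits


if __name__ == "__main__":
    # Surface the ten clients with the most gen-AI lookups for review.
    for ip, n in shadow_ai_report("dns_queries.csv").most_common(10):
        print(f"{ip}: {n} gen-AI lookups")
```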

THE GREAT SILENCE » tmack It has been precisely seven days since the "Information Super-Highway" suffered a head-on collision with a tortoise named Speedy.

A DISPATCH FROM THE AGE OF ENLIGHTENMENT
By Mark Twain

It has been precisely seven days since the “Information Super-Highway” suffered a head-on collision with a tortoise named Speedy.

#AIRisks #SecureAI
1bluebass.com/?p=365...


Is an AI catastrophe on the horizon? Discover the clash between Anthropic and the U.S. government over AI safety and security! #AIrisks

www.economist.com/podcasts/2026/03/11/ai-c...

Anthropic forms institute to study long-term AI risks facing society - Help Net Security Anthropic has established the Anthropic Institute, a research unit focused on studying the societal effects of AI.

Anthropic forms institute to study long-term AI risks facing society

🔗 Read more: www.helpnetsecurity.com/2026/03/11/a...

#Anthropic #AI #AIRisks


AI Security & Compliance: The Key to Protecting Your Data
AI security is more crucial than ever. By 2026, 97% of organizations will report GenAI security breaches.
#AIsecurity #DataProtection #GenerativeAI #Compliance #Technijian #AIGovernance #AIrisks

Home | Independent International Scientific Panel on AI

The UN has put together a panel to "produce an annual report with evidence-based scientific assessments related to the opportunities, risks and impacts of artificial intelligence... It may also prepare thematic briefs on issues of concern as it deems necessary."

#AIRisks

www.un.org/independent-...

X Grok Blocker Fails: Block Grok Photo Edits? X's new Grok blocker promises to stop Grok from editing your photos, but deep flaws leave users exposed to AI manipulation risks.

X Grok Blocker Fails: Block Grok Photo Edits?
#GrokBlockerFail #AIrisks #PhotoEditing
www.squaredtech.co/x-grok-edit-...

116 Generative AI and Research Ethics (YouTube video by Helen Kara)

New video: Generative AI & Research Ethics

AI is powerful, but it also raises serious ethical challenges. I explore these issues and why researchers need to think carefully about the tools they use.

Watch it here: youtu.be/Oj1Hl7_W18g

#AI #GenerativeAI #ResearchEthics #AIrisks

ECB Tightens Oversight of Banks’ Growing AI Sector Risks

The European Central Bank is intensifying its oversight of how eurozone lenders finance the fast‑growing artificial intelligence ecosystem, reflecting concern that the boom in data‑centre and AI‑related infrastructure could hide pockets of credit and concentration risk. In recent weeks, the ECB has sent targeted requests to a select group of major European banks, asking for granular data on their loans and other exposures to AI‑linked activities such as data‑centre construction, vendor financing and large project‑finance structures. Supervisors want to map where credit is clustering around a small set of hyperscalers, cloud providers and specialized hardware suppliers, amid global estimates of trillions of dollars in planned AI‑related capital spending. Officials stress this is a diagnostic exercise rather than an immediate step toward higher capital charges, but it marks a shift from general discussion to hands‑on information gathering.

The push comes as European banks race to harness AI inside their own operations, from credit scoring and fraud detection to automating back‑office tasks and enhancing customer service. Supervisors acknowledge that these technologies promise sizeable efficiency gains and new revenue opportunities, yet warn that many institutions still lack mature governance for AI models, including robust data‑quality controls, explainability, and clear accountability for automated decisions. The ECB has repeatedly argued that AI adoption must be matched by stronger risk‑management frameworks and continuous human oversight over model life cycles.

Regulators are also increasingly uneasy about systemic dependencies created by the dominance of a handful of mostly non‑EU AI and cloud providers. Heavy reliance on these external platforms raises concerns about operational resilience, data protection, and geopolitical risk that could spill over into financial stability if disruptions occur. At the same time, the ECB’s broader financial‑stability assessments have highlighted stretched valuations in some AI‑linked equities, warning that a sharp correction could transmit stress into bank balance sheets through both direct exposures and wider market channels.

For now, supervisors frame their AI‑sector review as part of a wider effort to “encourage innovation while managing risks,” aligning prudential expectations with Europe’s new AI Act and digital‑operational‑resilience rules. Banks are being nudged to tighten contract terms, strengthen model‑validation teams and improve documentation before scaling AI‑driven business lines. The message from Frankfurt is that AI remains welcome as a driver of competitiveness in European finance—but only if lenders can demonstrate they understand, measure and contain the new concentrations of credit, market and operational risk that accompany the technology’s rapid rise.

ECB Tightens Oversight of Banks’ Growing AI Sector Risks #AIRisks #BankingOversight #CreditRisk

APT36 Uses AI-Generated “Vibeware” Malware and Google Sheets to Target Indian Government Networks

Researchers at Bitdefender have uncovered a new cyber campaign linked to the Pakistan-aligned threat group APT36, also known as Transparent Tribe. Unlike earlier operations that relied on carefully developed tools, this campaign focuses on mass-produced AI-generated malware. Instead of sophisticated code, the attackers are pushing large volumes of disposable malicious programs, suggesting a shift from precision attacks to broad, high-volume activity powered by artificial intelligence. Bitdefender describes the malware as “vibeware,” referring to cheap, short-lived tools generated rapidly with AI assistance.

The strategy prioritizes quantity over accuracy, with attackers constantly releasing new variants to increase the chances that at least some will bypass security systems. Rather than targeting specific weaknesses, the campaign overwhelms defenses through continuous waves of new samples. To help evade detection, many of the programs are written in lesser-known programming languages such as Nim, Zig, and Crystal. Because most security tools are optimized to analyze malware written in more common languages, these alternatives can make detection more difficult.

Despite the rapid development pace, researchers found that several tools were poorly built. In one case, a browser data-stealing script lacked the server address needed to send stolen information, leaving the malware effectively useless. Bitdefender’s analysis also revealed signs of deliberate misdirection. Some malicious files contained the common Indian name “Kumar” embedded within file paths, which researchers believe may have been placed to mislead investigators toward a domestic source. In addition, a Discord server named “Jinwoo’s Server,” referencing a popular anime character, was used as part of the infrastructure, likely to blend malicious activity into normal online environments.

Although some tools appear sloppy, others demonstrate more advanced capabilities. One component known as LuminousCookies attempts to bypass App-Bound Encryption, the protection used by Google Chrome and Microsoft Edge to secure stored credentials. Instead of breaking the encryption externally, the malware injects itself into the browser’s memory and impersonates legitimate processes to access protected data. The campaign often begins with social engineering: victims receive what appears to be a job application or resume in PDF format, and opening the document prompts them to click a download button, which silently installs malware on the system.

Another tactic involves modifying desktop shortcuts for Chrome or Edge. When the browser is launched through the altered shortcut, malicious code runs in the background while normal browsing continues. To hide command-and-control activity, the attackers rely on trusted cloud platforms. Instructions for infected machines are stored in Google Sheets, while stolen data is transmitted through services such as Slack and Discord. Because these services are widely used in workplaces, the malicious traffic often blends in with routine network activity.

Once inside a network, attackers deploy monitoring tools including BackupSpy. The program scans internal drives and USB storage for specific file types such as Word documents, spreadsheets, PDFs, images, and web files. It also creates a manifest listing every file that has been collected and exfiltrated.

Bitdefender describes the overall strategy as a “Distributed Denial of Detection.” Instead of relying on a single advanced tool, the attackers release large numbers of AI-generated malware samples, many of which are flawed. However, the constant stream of variants increases the likelihood that some will evade security defenses.

The campaign highlights how artificial intelligence may enable cyber groups to produce malware at scale. For defenders, the challenge is no longer limited to identifying sophisticated attacks, but also managing an ongoing flood of low-quality yet constantly evolving threats.

APT36 Uses AI-Generated “Vibeware” Malware and Google Sheets to Target Indian Government Networks #AIRisks #APT36 #APT36CyberEspionage

Anthropic officially designated a supply chain risk by Pentagon The supply chain risk designation of the artificial intelligence firm is a first for a US company.

The Pentagon has officially labeled AI firm Anthropic as a supply chain risk. How should we approach AI security? #AIrisks

https://www.bbc.com/news/articles/cn5g3z3xe65o

Verification Error 404 » tmack The incident didn’t start with a malicious line of code. It started with a recursive loop of politeness.

The incident didn’t start with a malicious line of code. It started with a recursive loop of politeness. Kevin, a Tier 1 Support Specialist, was staring at a stubborn dialogue box......

1bluebass.com/?p=365...
#AIRisks #SecureAI

Meta Oversight Board AI Protections - 5 Reasons to Pay Attention - 기술 덕후 한가닥 The pace at which artificial intelligence is transforming our lives is dizzying. Unlike the radio and internet revolutions of the past, today's AI development is led not by governments but by giant corporations. Warnings are emerging that chatbots can give dangerous advice to teenagers or learn how to make biochemical weapons, yet the means to verify this…

Meta Oversight Board AI Protections – 5 Reasons to Pay Attention

https://bit.ly/46CeLkR

#AIRegulation #MetaOversight #AIProtections #TechEthics #AIrisks #DigitalSafety #ArtificialIntelligence

Musk: "No Suicides From Grok" – OpenAI Safety Clash Elon Musk blasts OpenAI's safety failures in a deposition, declaring "nobody committed suicide because of Grok." Discover how this fuels his lawsuit and exposes AI risks.

Musk: “No Suicides from Grok” – OpenAI Safety Clash
#ElonMusk #AIrisks #OpenAI #Grok #AISafety
www.squaredtech.co/musk-grok-su...


In traditional IT, we optimize for uptime. In an agentic AI world, we must prioritize containment. If an agent's network usage spikes unpredictably, it is better to "go dark" (isolate it) than to allow that traffic to hit a core router and trigger a global BGP reset.

#AIRisks #SecureAI
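
To make the "containment over uptime" idea concrete, here is a minimal watchdog sketch, assuming a Linux host, the psutil package, and an agent running under a dedicated UID; the UID and the egress budget are placeholder values, and host-wide egress is used as a rough proxy since per-process network accounting would need cgroups or similar. It illustrates the policy in the post, not a production control.

```python
# Hedged sketch of "containment over uptime" for an AI agent (assumptions:
# Linux, psutil installed, agent runs under a dedicated UID). Host-wide
# egress is a rough proxy; per-process accounting needs cgroups or similar.
import subprocess
import time

import psutil

AGENT_UID = 1001                 # placeholder: dedicated UID for the agent
MAX_BYTES_PER_SEC = 50_000_000   # placeholder egress budget (~50 MB/s)


def isolate_agent(uid: int) -> None:
    """Drop all outbound traffic from the agent's UID via iptables."""
    subprocess.run(
        ["iptables", "-I", "OUTPUT", "-m", "owner",
         "--uid-owner", str(uid), "-j", "DROP"],
        check=True,
    )


def watch(interval: float = 1.0) -> None:
    """Poll egress byte counts and isolate the agent on an anomalous spike."""
    last = psutil.net_io_counters().bytes_sent
    while True:
        time.sleep(interval)
        now = psutil.net_io_counters().bytes_sent
        rate = (now - last) / interval
        last = now
        if rate > MAX_BYTES_PER_SEC:
            # "Go dark": losing the agent's uptime is cheaper than letting
            # runaway traffic reach a core router.
            isolate_agent(AGENT_UID)
            break


if __name__ == "__main__":
    watch()
```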


Is AI's potential threat enough to shake markets? Citrini Research's memo has traders rethinking their strategies. What’s your take? #AIrisks

www.wsj.com/tech/ai/breaking-down-th...

The Potential for AI to Manipulate Elections

When discussing the role of artificial intelligence in public discourse, I focus on the growing risk that advanced tools could be used to influence electoral processes.
Read it here: solihullpublishing.com/blog/f/the-p...
#AIandDemocracy #ElectionIntegrity #AIrisks #DigitalEthics

State CISO nominee to review 3,200 connected apps, flags emerging AI risks Acting State Chief Information Security Officer James Sanders told the nominations committee he will begin a review of roughly 3,200 applications connected to the state's Google Workspace and implement identity-platform migration and additional safeguards to limit data access.

Maryland's new CISO nominee, James Sanders, is ready to tackle cybersecurity by reviewing 3,200 apps and addressing AI risks—are we prepared for the challenges ahead?

Click to read more!

#MD #IdentityControls #CitizenPortal #DataSecurity #AIRisks #MarylandCybersecurity

AI in Healthcare: Navigating the Risks with Expert Guidance - Health IT Consult AI in healthcare is not a question of if — it's a question of how. The organizations that thrive will be those that embrace innovation with expert guidance by their side.

⚠️ Did you know AI in healthcare comes with serious risks most organizations aren't prepared for?

Learn how 👉 healthitconsult.com/ai-in-health...

#AIRisks #HealthcareAI #PatientSafety #HIPAACompliance #ClinicalAI #HealthIT #AIGovernance #HealthcareCompliance

China Raises Security Concerns Over Rapidly Growing OpenClaw AI Tool

China's tech regulators have issued a fresh alert about OpenClaw, an open-source AI tool gaining traction fast. Though built with collaboration in mind, flawed setups can expose systems to intrusion: missteps during installation may grant unintended access to outside actors, and unchecked security gaps can let sensitive information slip out. Officials stress careful handling, especially among firms rolling the tool out at scale. Attention to detail at deployment and oversight now can prevent incidents later, and vigilance matters most where automation meets live data flows.

Inspectors reported that some OpenClaw operations lacked proper safeguards, with configurations so minimal they risked exposure when linked to open networks. No outright prohibition followed, but the guidance stresses tighter controls and stronger protection layers; in the inspectors' view, security cannot stay this fragile.

Despite the known risks, many groups still overlook basic checks on the outward-facing networks tied to OpenClaw setups. Security teams should verify user identities more thoroughly and limit access, especially where systems meet the internet. Left unchecked, even helpful open models can hand opportunities to attackers probing for weaknesses.

Since launching in November, OpenClaw has seen remarkable momentum. Within weeks it captured interest across continents, driven by strong community engagement: it quickly passed 100,000 GitHub stars, evidence of widespread developer curiosity, and nearly two million people visited its page in just seven days, Steinberger noted. The speed of adoption has drawn frequent comparisons to leading AI tools; few agent frameworks have recently sparked such consistent conversation.

Attention within Chinese tech circles grew just as fast. In response to rising demand, Alibaba Cloud, Tencent Cloud, and Baidu now provide specialized access points offering rented servers built to handle the tool's processing load, enabling remote OpenClaw operation instead of local device use. The ministry's caution landed just as OpenClaw's reach stretched past coders into broader networks.

A new social hub named Moltbook, pitched as an online enclave solely for OpenClaw bots, appeared earlier this week and quickly drew notice. Soon afterward, flaws emerged: Wiz, a security analyst group, revealed a major defect on the site that exposed confidential details from many members. While excitement built around the innovation, risks surfaced quietly beneath it.

The incident revealed deeper vulnerabilities in fast-growing AI systems built without thorough safety checks. As open-source artificial intelligence grows stronger and easier to use, officials warn that small setup errors can lead to massive leaks of private information.

With China's newest guidance, attention shifts toward stronger oversight of AI safeguards. OpenClaw continues to operate across sectors, but regulators stress accountability: firms using these tools must manage setup carefully, watch performance closely, and defend against new digital risks as they emerge.

China Raises Security Concerns Over Rapidly Growing OpenClaw AI Tool #AIcybersecurity #AIRisks #AISecurity

What Are Large Language Models (LLMs) and How Do They Work Learn what Large Language Models (LLMs) are, how they work, and how they power AI tools like chatbots, search, and content generation.

Learn what Large Language Models (LLMs) are, how they work, and how they power AI tools like chatbots, search, and content generation.

blog.applabx.com/what-are-lar...

#LargeLanguageModels, #LLMs, #AIChallenges, #AIrisks, #ArtificialIntelligence, #GenerativeAI,

A man with short black hair, wearing glasses and a navy suit jacket, poses against a cream background. He is looking directly at the camera with a neutral expression.

AI agents are fast, adaptive, and dangerously unpredictable. The cold dread of watching one misfire is a warning for every enterprise leader: spr.ly/63324hrc6k

Read what Hari Om Garg reveals in this must‑read analysis.

#FoundryExpert #AIrisks #EnterpriseTech

Replacing humans with machines is leaving truckloads of food stranded and unusable | The-14 Automation in food supply chains is creating hidden risks. When digital systems fail, truckloads of food sit stranded, exposing fragile infrastructure gaps.

Replacing humans with machines is leaving truckloads of food stranded and unusable
#Tech #AI #FoodSecurity #SupplyChain #AIRisks #Logistics #CyberSecurity #Automation #Resilience #Agriculture #Food #TechAccountability #SystemFailure #Data #Farming
the-14.com/replacing-hu...

Open-Source AI Models Pose Growing Security Risks, Researchers Warn

Hackers and other criminals can easily hijack computers running open-source large language models and use them for illicit activity, bypassing the safeguards built into major artificial intelligence platforms, researchers said on Thursday. The findings are based on a 293-day study conducted jointly by SentinelOne and Censys, and shared exclusively with Reuters.

The research examined thousands of publicly accessible deployments of open-source LLMs and highlighted a broad range of potentially abusive use cases. According to the researchers, compromised systems could be directed to generate spam, phishing content, or disinformation while evading the security controls enforced by large AI providers. The deployments were also linked to activity involving hacking, hate speech, harassment, violent or graphic content, personal data theft, scams, fraud, and in some cases, child sexual abuse material.

While thousands of open-source LLM variants are available, a significant share of internet-accessible deployments were based on Meta’s Llama models, Google DeepMind’s Gemma, and other widely used systems, the researchers said. They identified hundreds of instances in which safety guardrails had been deliberately removed.

“AI industry conversations about security controls are ignoring this kind of surplus capacity that is clearly being utilized for all kinds of different stuff, some of it legitimate, some obviously criminal,” said Juan Andres Guerrero-Saade, executive director for intelligence and security research at SentinelOne. He compared the problem to an iceberg that remains largely unaccounted for across the industry and the open-source community.

The study focused on models deployed using Ollama, a tool that allows users to run their own versions of large language models. Researchers were able to observe system prompts in about a quarter of the deployments analyzed and found that 7.5 percent of those prompts could potentially enable harmful behavior. Geographically, around 30 percent of the observed hosts were located in China, with about 20 percent based in the United States, the researchers said.

Rachel Adams, chief executive of the Global Centre on AI Governance, said responsibility for downstream misuse becomes shared once open models are released. “Labs are not responsible for every downstream misuse, but they retain an important duty of care to anticipate foreseeable harms, document risks, and provide mitigation tooling and guidance,” Adams said.

A Meta spokesperson declined to comment on developer responsibility for downstream abuse but pointed to the company’s Llama Protection tools and Responsible Use Guide. Microsoft AI Red Team Lead Ram Shankar Siva Kumar said Microsoft believes open-source models play an important role but acknowledged the risks. “We are clear-eyed that open models, like all transformative technologies, can be misused by adversaries if released without appropriate safeguards,” he said. Microsoft conducts pre-release evaluations and monitors for emerging misuse patterns, Kumar added, noting that “responsible open innovation requires shared commitment across creators, deployers, researchers, and security teams.”

Ollama, Google and Anthropic did not comment.

Open-Source AI Models Pose Growing Security Risks, Researchers Warn #AIRisks #OpenSourceAIModels #Technology
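
Since the study centers on internet-reachable Ollama deployments, a quick self-audit is straightforward. The sketch below is a minimal example, assuming Ollama's standard REST API (GET /api/tags lists local models) on the default port 11434; it checks whether an instance answers unauthenticated requests. Probe only hosts you are authorized to test.

```python
# Hedged sketch: check whether an Ollama instance answers unauthenticated
# requests. Assumes the standard Ollama REST API (GET /api/tags lists local
# models) on the default port 11434. Probe only hosts you may test.
import json
import sys
import urllib.request


def check_ollama_exposure(host: str, port: int = 11434, timeout: float = 5.0) -> None:
    url = f"http://{host}:{port}/api/tags"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            models = json.load(resp).get("models", [])
    except (OSError, ValueError) as exc:
        print(f"{host}:{port} did not answer with a model list ({exc})")
        return
    # An unauthenticated answer means anyone who can reach this port can
    # enumerate, and likely query, the models served here.
    names = ", ".join(m.get("name", "?") for m in models) or "none"
    print(f"{host}:{port} is exposed; models: {names}")


if __name__ == "__main__":
    check_ollama_exposure(sys.argv[1] if len(sys.argv) > 1 else "127.0.0.1")
```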

He Called AI a 'GOD-LIKE TEENAGER': Anthropic CEO’s 2026 Warning

Is humanity ready to parent a god-like teenager? 🍼🤖 We just watched the explosive NBC News interview with Anthropic CEO Dario Amodei, and if you aren’t paying attention to what happens between now and 2026, you need to listen to this episode immediately.

In this deep-dive reaction, we break down Amodei’s chilling yet hopeful warning: that Artificial Intelligence is currently like a powerful, unpredictable "teenager"—brilliant, capable of massive destruction (or creation), but lacking the maturity to know right from wrong. Amodei warns we are sprinting toward a critical threshold of risk by 2026, where the "adults in the room" (regulators and transparent researchers) must step in before it's too late.

In this episode, we uncover:
- The "Teenager" Analogy: Why Amodei believes current models possess immense power without the necessary social safeguards.
- The 2026 Deadline: Why the next few years are the "danger zone" for autonomous weapons and biosecurity threats.
- Profit vs. Safety: The controversial push for companies to publish their "danger research" instead of hiding it.
- The Medical Miracle: The cautious hope that AI could cure diseases and solve scientific mysteries—if we survive the adolescence phase.

Are we raising a Nobel Prize winner or a delinquent? Join us as we dissect the most important interview of the year and ask the hard question: Can we align AI motivations with human values before the teenager moves out of the house? 👇 Hit play to understand the future before it arrives.

📣 New Podcast! "He Called AI a 'GOD-LIKE TEENAGER': Anthropic CEO’s 2026 Warning" on @Spreaker #agi #ai2026 #airisks #aisafety #anthropic #artificialintelligence #claudeai #darioamodei #digitaltrends #futureoftech #generativeai #humanity #innovation #machinelearning #nbcnews #siliconvalley


🦀 Moltbot comes with serious security risks you shouldn’t ignore.

👉 Full blog here: realancer.net/blogs/reason...

#AI #ArtificialIntelligence #CyberSecurity #Moltbot #Clawdbot #AIThreats #AIPrivacy #DataSecurity #AIrisks #TechNews #OpenSource #Hacking #FutureOfAI #AlitechSolutions #Realancer


The Doomsday Clock is now 85 seconds to midnight — the closest ever. Nuclear threats, AI risks, biological hazards, and climate disaster are all converging. We’re closer to catastrophe than ever, and urgent action is needed.
#DoomsdayClock #ClimateCrisis #NuclearThreat #AIrisks #85SecondsToMidnight

‘Humanity needs to wake up’ to dangers of AI, says Anthropic chief Dario Amodei posts 20,000-word essay detailing potentially catastrophic risks from powerful technology in years to come

Are we ignoring the risks of powerful AI? Anthropic's leader warns it's time to wake up! What do you think? #AIrisks

www.ft.com/content/c3098552-7204-4a...
