#AIsecurity

Latest posts tagged with #AIsecurity on Bluesky

The AI Attack Surface: Why AI Tools Are Creating New Security Risks

🔐🤖 The AI attack surface is expanding faster than most security teams realize. While AI tools improve efficiency, they also introduce new cybersecurity risks like prompt injection, data leakage, and model manipulation.
#Cybersecurity #AISecurity #AIrisks #CyberLens

www.thecyberlens.com/p/the-ai-att...

Create a Thinking Home with AI-Powered Video Analytics

A house should be more than just a property; it should be a home for you and your family. It should offer a warm welcome and reflect your style and

Create a Thinking Home with AI-Powered Video Analytics #AIsecurity #AI #realestate #cotedazur #home #automation #facialrecognition #licenseplate #property #safety #concierge #beyonddomotica #videoanalytics www.livingonthecotedazur.com/create-secur...

Shadow AI Risks Rise as Employees Use Generative AI Tools at Work Without Oversight

Artificial intelligence has moved from research labs into everyday office software faster than oversight can keep pace. For companies, the question is no longer who uses AI, but how safely it runs.

Research cited by security specialists suggests that roughly 83 percent of UK workers regularly use generative AI for everyday duties: finding data, condensing reports, creating written material. Tools such as ChatGPT simplify repetitive work, and the efficiency gains are most visible in fast-paced departments where speed matters.

Rapid uptake brings fresh security risks, however. More staff now introduce personal AI software at work without official organizational consent, a shift experts label "shadow AI": unapproved systems running inside business environments. These tools handle internal information unseen by IT teams, and oversight gaps grow when they operate outside monitored channels. Almost three out of four people using AI at work introduce outside tools without approval, and close to half rely on personal accounts instead of official platforms when working with generative models. Security teams often remain unaware, leaving sensitive information exposed.

What stands out most is the nature of what staff share with these platforms. Because generative models depend on what users feed them, workers frequently paste written content, code, or files straight into the interface, and those inputs often include sensitive company records, proprietary knowledge, personal client data, and sometimes segments of private source code.

According to the research, almost every worker (around 93 percent) has fed work details into unofficial AI systems, and roughly a third admit that confidential client material was among those inputs. Once such data lands on external servers, companies lose control over how it is stored, handled, or reused.

One real incident showed how fast things can go wrong. In 2023, Samsung employees pasted private code and confidential meeting details into ChatGPT, exposing data meant to stay inside the company. Nothing was hacked; the data was simply handed over during routine work. Without strong rules, such tools become quiet exits for secrets, and trusting outside software too quickly opens gaps that even careful firms miss. Security specialists also stress that compromised AI accounts may do more than leak data: exposed chat logs can unlock wider company networks.

The compliance stakes are real. Financial firms worry about GDPR violations, and hospitals fear HIPAA breaches, when staff misuse AI tools unexpectedly. A single slip can trigger audits far beyond the IT department's control.

Outright bans tend to be bypassed anyway. Experts argue that complete blocks usually fail because staff seek workarounds when a tool helps them get things done faster. A better approach is AI oversight: watching how systems are accessed, spotting unapproved software, and clarifying acceptable use across teams. Clear rules prove more effective for risk control than prohibition, especially when workers keep using new tools quietly. Guidance like this supports balance: safety improves without blocking progress.

Shadow AI Risks Rise as Employees Use Generative AI Tools at Work Without Oversight #AIRisks #AISecurity #AItechnology

Original post on det.social

New paper on the security of AI agents highlights a structural shift: agent systems blur the boundary between code and data, dynamically generate workflows, and act with broad privileges across tools, APIs, and environments.
A key takeaway: agent security cannot rely on model safeguards alone […]

Full Access in Two Hours: AI Agent Autonomously Hacks McKinsey's AI Platform - Golem.de

Researchers set an AI agent loose on McKinsey's Lilli platform. It was able to read millions of chat messages and other data.

#KIAgent #McKinsey #ITSecurity #AISecurity
glm.io/206407?n


New challenges have opened in the final hours before NEBULA:FOG 2026.

Check out what challenges await - and if you haven't signed up, we opened more spots.

Be a part of something groundbreaking. We'll meet you there.
nebulafog.ai/challenges

#AI #Hackathon #AISecurity

AI Agents Don't Have Identities. That's Everyone's Problem.

The security model that enterprises spent the last decade building was designed for humans. Credentials belong to people. Sessions are initiated by users. Logs tell you who did what, when, and from wh...

Most AI agents run on borrowed human credentials. To your SOC, it looks like the human is acting, not the agent. A company with 500 devs, each running 12 agents, has 6,000 entities your logs can't see.
coderlegion.com/12843/ai-age... #F5Security #AgenticAI #AISecurity #F5AppWorld

Codex Security by OpenAI: The AI Agent That Finds Bugs Before Hackers Do

Codex Security is OpenAI's new AI-powered security agent that scans your codebase, validates real vulnerabilities, and more.

Legacy scanners: noisy.
Codex Security: an AI agent that models your app, confirms real vulns & suggests targeted fixes.
I dug into the beta numbers, the CVEs found in major OSS projects, and what this means for app security teams.

Full review: techglimmer.io/codex-securi...
#CodexSecurity #OpenAI #AppSec #AIsecurity

RSAC 2026: Akamai on AI Security

~Akamai~
Akamai highlights the shift toward AI-driven threats and the need for Zero Trust frameworks ahead of RSAC 2026.
-
IOCs: (None identified)
-
#AISecurity #RSAC2026 #ThreatIntel


🔍 The Risk Profile of AI-Driven Development

AI code generation accelerates supply chain risks, demanding controls from the very start.

devops.com/the-risk-profile-of-ai-d...

#AIsecurity #SBOM #ShiftLeft #RoxsRoss


Just tried NanoClaw’s new Docker integration—one command spins up an isolated AI agent sandbox. No data leaks, pure cloud‑native security. Curious how it reshapes runtime environments? Dive in! #NanoClaw #Docker #AISecurity

🔗 aidailypost.com/news/nanocla...


Big deal when AI platforms get hit. Prompt injection is nasty. Makes me wonder how many other systems are exposed like this. Lunar (https://lunarcyber.com/) can help you track if your data pops up in breaches from stuff like this. #AIsecurity

AI agents spontaneously turn to cyberattacks when given business tasks

Tests show frontier AI agents independently discover vulnerabilities and bypass security systems when tasked with routine corporate work, raising urgent safety questions.

AI agents spontaneously turn to cyberattacks when given business tasks

#AI #Cybersecurity #AISecurity #AusNews

thedailyperspective.org/article/2026-03-13-ai-ag...


🔐 SurePath AI improves MCP policy controls to strengthen AI security

AI needs governance. SurePath AI addresses this critical challenge.

https://thenewstack.io/surepath-ai-mcp-policy-controls/

#MCP #AIGovernance #AIsecurity #RoxsRoss

Agentic AI Is Creating a New Class of Cyber Threats

Autonomous AI agents that plan and act without supervision create new attack surfaces. Learn the threat vectors, real scenarios, and defenses that matter most. #aisecurity

AI security compliance controls

AI adoption is accelerating, but weak security controls can create serious risk.

This article explains AI security compliance controls and why they matter.

aitransformer.online/ai-security-...

#AI #Cybersecurity #AISecurity

AI Security for Apps is now generally available

Cloudflare AI Security for Apps is now generally available, providing a security layer to discover and protect AI-powered applications, regardless of the model or hosting provider. We are also making AI discovery free for all plans, to help teams find and secure shadow AI deployments.

Telegram AI Digest
#ai #aisecurity #news

AI Security for Apps is now generally available

Cloudflare AI Security for Apps is now generally available, providing a security layer to discover and protect AI-powered applications, regardless of the model or hosting provider…

Telegram AI Digest
#ai #aisecurity #news

OpenAI to Acquire AI Security Startup Promptfoo

Promptfoo has raised more than $23 million in funding for a platform that helps developers secure LLMs and AI agents.


Telegram AI Digest
#aisecurity #llm #openai

OpenAI to Acquire AI Security Startup Promptfoo

Promptfoo has raised more than $23 million in funding for a platform that helps developers secure LLMs and AI agents.

Telegram AI Digest
#ai #aisecurity #openai

The Next AI Security Frontier: "Agents With Hands" Are Becoming a Board-Level Risk

Your new "AI helper" is basically shadow IT with hands 🤖🧨 Untrusted content → model decides → tools execute. That's the breach loop.

Almost Pi Day: 3.14 seconds is all it takes for a “helpful” AI agent to read a PDF, obey hidden instructions, and ship your tokens out as a “diagnostic report.” Shadow IT, but with hands. 🤖🧨

Read the playbook: blog.alphahunt.io/the-next-ai-...

#AlphaHunt #CyberSecurity #AgenticAI #AISecurity

Running AI vs. Training AI: Secure, Sovereign, Enterprise-Ready Infrastructure

Understand the difference between AI training and inference, their compute and security needs, and why sovereign, open-source infrastructure is essential for enterprise AI.

Running #AI shouldn't mean training it for someone else. 🛑

Most public #LLMs treat your data as a free resource for their next model.

Learn why "Running" vs. "Training" is the most important distinction in #AISecurity and how #PrivateAI keeps your data isolated.

🔗

F5 AI Remediate Closes the Gap Between Finding AI Vulnerabilities and Fixing Them

Every security team knows the frustration. A vulnerability gets flagged. Someone has to understand it well enough to write a protection. That process can take hours if the issue is simple, days if it'...

How long does it take your team to go from flagging an AI vulnerability to deploying a fix?
For most enterprises: days. With F5 AI Remediate: under 60 minutes. Coverage from #F5AppWorld 2026.
coderlegion.com/12803/f5-ai-... #DevSecOps #AppSec #AISecurity


🛡️ Designing AI agents to resist prompt injection

How ChatGPT defends itself against social engineering and prompt injection attacks.

openai.com/index/designing-agents-t...

#AISecurity #PromptInjection #LLMAgents #RoxsRoss

Dan Lohrmann, global keynote speaker and author, is featured beside the text: "Cyber warfare is evolving fast. Dan Lohrmann asks a provocative question. Did cybersecurity just hit its 'Gatling gun moment'?" Also included: CSO Expert Contributor Network.

Cyber defense may be entering a new era. Dan Lohrmann asks whether cybersecurity just had its “Gatling gun moment” as automation, AI, and attacker scale collide.
Read the analysis: spr.ly/63321B6ylAV

#FoundryExpert #Cybersecurity #AIsecurity


Interesting read on how OpenAI is building safeguards into AI agents to prevent prompt injection. Basically, teaching them to say "no" to bad requests. Good to see them thinking about security from the ground up. 🛡️ #AIsecurity


OpenAI acquires Promptfoo to enhance AI security in enterprise applications. Integration into Frontier platform aims to provide robust security testing and compliance features. #OpenAI #AIsecurity #Promptfoo #AIintegration Link: thedailytechfeed.com/openai-acqui...


Protection solution for the entire AI ecosystem

#AISecurity #Cybersicherheit #KIGovernance #KIÖkosystem @Netskope #PromptInjection #ZeroTrust

netzpalaver.de/2026/...

Scanner Raises $22 Million for AI-Powered Threat Hunting

Scanner raised $22 million in a Series A round led by Sequoia Capital to scale its cloud-native security data lake and AI-driven detection platform. Its Model Context Protocol (MCP) server links AI agents to indexed data lakes for fast threat hunting, continuous detection, and quicker responses than traditional SIEMs. #Scanner #SequoiaCapital...

Scanner raised $22M in Series A led by Sequoia Capital to advance its AI-powered cloud-native security data lake. The Model Context Protocol enables faster threat hunting and continuous detection vs traditional SIEMs. #AIsecurity #DataLake #USA

Hackers Use AI to Supercharge Cyberattacks, Microsoft Warns

New research from Microsoft warns organizations that threat actors have begun integrating artificial intelligence (AI) into their workflow...

Microsoft warns that hackers are supercharging cyberattacks with AI, using it to scale phishing, malware, and fraud faster. jpmellojr.blogspot.com/2026/03/hack... #Microsoft #AI #AICyberattacks #AIsecurity
