#genAIsecurity

Latest posts tagged with #genAIsecurity on Bluesky

New Reprompt URL Attack Exposed and Patched in Microsoft Copilot

Security researchers at Varonis have uncovered a new prompt-injection technique targeting Microsoft Copilot, highlighting how a single click could be enough to compromise sensitive user data. The attack method, named Reprompt, abuses the way Copilot and similar generative AI assistants process certain URL parameters, effectively turning a normal-looking link into a vehicle for hidden instructions. While Microsoft has since patched the flaw, the finding underscores how quickly attackers are adapting AI-specific exploitation methods.

Prompt injection attacks work by slipping hidden instructions into content that an AI model is asked to read, such as emails or web pages. Because large language models still struggle to reliably distinguish between data to analyze and commands to execute, they can be tricked into following these embedded prompts. In traditional cases, this might mean white text on a white background or minuscule fonts inside an email that the user then asks the AI to summarize, unknowingly triggering the malicious instructions.

Reprompt takes this concept a step further by moving the injection into the URL itself, specifically into a query parameter labeled “q.” Varonis demonstrated that by appending a long string of detailed instructions to an otherwise legitimate Copilot link, such as “http://copilot.microsoft.com/?q=Hello”, an attacker could cause Copilot to treat that parameter as if the user had typed it directly into the chat box. In testing, this allowed the researchers to exfiltrate sensitive data that the victim had previously shared with the AI, all triggered by a single click on a crafted link.

This behaviour is especially dangerous because many LLM-based tools interpret the q parameter as natural-language input, effectively blurring the line between navigation and instruction. A user might believe they are simply opening Copilot, but in reality they are launching a session already preloaded with hidden commands created by an attacker. Once executed, these instructions could request summaries of confidential conversations, collect personal details, or send data to external endpoints, depending on how tightly the AI is integrated with corporate systems.

After Varonis disclosed the issue, Microsoft moved to close the loophole and block prompt-injection attempts delivered via URLs. According to the researchers, prompt injection through q parameters in Copilot is no longer exploitable in the same way, reducing the immediate risk for end users. Even so, Reprompt serves as a warning that AI interfaces—especially those embedded into browsers, email clients, and productivity suites—must be treated as sensitive attack surfaces, demanding continuous testing and robust safeguards against new injection techniques.
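The q-parameter behaviour described above can be sketched in a few lines. This is a hypothetical illustration of vulnerable versus safer handling of an assistant deep link, not Copilot's actual code; the function names, the marker list, and the example domain are all invented for the sketch.

```python
from urllib.parse import urlparse, parse_qs

# Crude, illustrative markers of instruction-like payloads (hypothetical list).
SUSPICIOUS_MARKERS = ("ignore previous", "system:", "exfiltrate", "send to http")

def extract_chat_prefill(url: str):
    """Pull the q parameter out of an assistant deep link, e.g. ?q=Hello."""
    values = parse_qs(urlparse(url).query).get("q")
    return values[0] if values else None

def handle_deep_link(url: str) -> dict:
    """Vulnerable pattern: auto-submitting q as if the user had typed it.
    Safer pattern: prefill the input box, require an explicit send, and
    flag obviously instruction-like payloads for review."""
    prefill = extract_chat_prefill(url)
    if prefill is None:
        return {"action": "open_blank_chat"}
    flagged = any(m in prefill.lower() for m in SUSPICIOUS_MARKERS)
    return {
        "action": "prefill_only",  # never "auto_submit"
        "text": prefill,
        "needs_user_confirmation": True,
        "flagged": flagged,
    }

print(handle_deep_link("https://copilot.example.com/?q=Hello"))
```

The details of Microsoft's fix are not public, but the safer pattern above captures the general principle: anything arriving in a URL is attacker-controllable data and must never be executed as if the user typed it.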

New Reprompt URL Attack Exposed and Patched in Microsoft Copilot #CyberAttacks #GenAISecurity #MicrosoftCopilot


The AI Security Iceberg

You can't secure what you can't see. AI risk assessment illuminates every vulnerability before deployment.

Stop deploying blindly. Start scanning deep.
🔗 mirrorsecurity.io/riskreport

#AISecurity #AIRiskAssessment #GenAISecurity #CyberSecurity #MirrorSecurity


Reprompt Attack Steals Microsoft Copilot Data
Read More: buff.ly/AHYG9Id

#MicrosoftCopilot #PromptInjection #LLMSecurity #AIAppSec #GenAISecurity #PromptHacking #DataExfiltration #CyberResearch #SecurityWeek #Varonis

Securing Gen AI Development with Snyk | AI News Protect your Gen AI projects! Learn how to secure your code and stay ahead of security threats. Download the ebook!

AIMindUpdate News!
Worried about Gen AI security? Learn how to keep your projects safe & secure in this new ebook! #GenAISecurity #AIdevelopment #Snyk

Click here↓↓↓
aimindupdate.com/2025/06/07/s...

Weaponizing Wholesome Yearbook Quotes to Break AI Chatbot Filters | Straiker More than 20 AI chatbots fell to prompt injections in what we call the Yearbook Attack.

20+ AI chatbots couldn't hold the line: prompt injections slipped through in what we're calling the Yearbook Attack.

www.straiker.ai/blog/weaponi...

#AISecurity #SecureAI #AICybersecurity #AIThreats #GenAISecurity #AITrust #PromptInjection


Traditional security is not meant to secure the new AI frontier...

#StopAutonomousChaos #StraikerDefendAI #StraikerAscendAI #AISecurity #SecureAI #AICybersecurity #AIThreats #GenAISecurity #AITrust #ResponsibleAI #AIGuardrails #AgentSecurity


In the AI journey, are you still testing the waters—or already charting #agenticworkflows at scale?

#StopAutonomousChaos #StraikerDefendAI #StraikerAscendAI #AISecurity #SecureAI #AICybersecurity #AIThreats #GenAISecurity #AITrust #ResponsibleAI #AIGuardrails #AgentSecurity

AI’s Next Wave: Intelligent Agents & the Skills to Secure Them with Ankur Shah Secure Insights with NDK Cyber · Episode

The next generation of AI innovation goes far beyond chatbots and simple prompts... 👂 Listen here: na2.hubs.ly/y0b8mp0 #AISecurity #SecureAI #AICybersecurity #AIThreats #GenAISecurity #ResponsibleAI

Secure AI agents with Straiker MCP Server | Straiker Straiker is leading the way with our product announcement to secure agentic workflows with MCP.

It was a busy week... we launched an MCP server that acts as drop-in module for real-time security controls in agentic workflows. www.straiker.ai/blog/secure-... #AISecurity #SecureAI #AICybersecurity #AIThreats #GenAISecurity #AITrust #ResponsibleAI #AIGuardrails #AgentSecurity


AI is introducing new cybersecurity risks, and most orgs aren't ready!

Next week (May 6), Neuvik’s Securing AI series continues with a closer look at the common cybersecurity issues stemming from AI. Don't miss it!

#GenAISecurity #CyberRisk #InsiderThreats #Neuvik #SecuringAI

Addressing SaaS security challenges in the age of GenAI Our response to the open letter from JPMorganChase: Addressing SaaS security challenges in the age of GenAI.

What are the SaaS security challenges in the age of GenAI?

From prompt injection attacks to adaptive content moderation, we've got you covered with practical insights and innovative security solutions.

Check out the full article on our website ➡️ reversec.com/articles/add...

#GenAISecurity


🚀 We’re officially available on the AWS Marketplace!

🔗 aws.amazon.com/marketplace/...

#AWSMarketplace #AIProtection #CloudSecurity #AISecurity #SecureAI #AICybersecurity #AIThreats #GenAISecurity #AITrust #ResponsibleAI #AIGuardrails #AgentSecurity

Securing Agentic AI in a Multi-Agent World | Straiker This post introduces the unique security challenges posed by agentic architectures and why traditional security measures aren’t equipped to handle them.

#ICYMI - In the agentic world, risks manifest in new ways. Read the blog 👉📚 na2.hubs.ly/y047xc0 #AISecurityResearch #AIThreatResearch #ResponsibleAI #AISecurity #SecureAI #AICybersecurity #AIThreats #GenAISecurity #AITrust #ResponsibleAI #AIGuardrails #AgentSecurity


This is not a hallucination.
The AI age is here.
Straiker is here to secure the future.
So you can imagine it.

Read the press release:
na2.hubs.ly/y03Nnw0 #AISecurity #SecureAI #AICybersecurity #AIThreats #GenAISecurity #AITrust #ResponsibleAI #AIGuardrails #AgentSecurity

2025 cloud-native cybersecurity predictions | TechTarget From increased investments in cloud security to GenAI, IAM and tool consolidation, read the top cloud-native security predictions for 2025.

Wondering what's in store for 2025 for cloud-native security? Here are my predictions, including the vendors addressing key areas.
#cloudsecurity #devsecops #genAIsecurity #appsec #applicationsecurity #ASPM #CNAPP #riskmanagement #remediation #softwaresupplychainsecurity #vulnerabilitymanagement


🔒✨Unlock the power of secure AI with VectaX's groundbreaking integration of role-based access controls (RBAC) into vector embeddings and large language models (LLMs) ✨🔒.

#AISecurity #VectaX #RBAC #GenAI #GenAISecurity #DataProtection #AccessControl #AICompliance
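The post above names the idea without showing it. As a minimal sketch of role-aware vector retrieval (hypothetical, assuming nothing about VectaX's actual design: the in-memory store, cosine ranking, and role sets are invented here), each stored embedding carries the roles allowed to read it, and search filters by the caller's role before ranking:

```python
import math

# Hypothetical in-memory store: each embedding carries its allowed roles.
STORE = [
    {"text": "Q3 revenue forecast", "vec": [0.9, 0.1], "roles": {"finance"}},
    {"text": "Public press release", "vec": [0.8, 0.2], "roles": {"finance", "everyone"}},
]

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def search(query_vec, role, top_k=3):
    """Deny by default: only rank documents the caller's role may see."""
    visible = [d for d in STORE if role in d["roles"]]
    ranked = sorted(visible, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return [d["text"] for d in ranked[:top_k]]

print(search([1.0, 0.0], role="everyone"))  # only the press release is visible
```

Filtering before ranking, rather than after, matters: it keeps restricted documents from ever entering the candidate set that reaches an LLM's context window.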


We are thrilled to announce the launch of VectaX, our flagship product designed to provide an Enterprise Security Layer for AI Infrastructure.

VectaX is set to revolutionize how organizations secure their AI environments, offering advanced protection against emerging threats. #GenAISecurity

Prompt Security raises $18 million Series A to protect enterprises from GenAI risks | Ctech The Israeli startup’s platform safeguards organizations from shadow AI, prompt injections, and other emerging threats specific to generative AI tools.

www.calcalistech.com...


#GenAI #GenAISecurity #DataSecurity #AI #Security #Cybersecurity #Funding

OWASP Top 10 for LLM and new tooling guidance targets GenAI security Here's what your team needs to know about the new OWASP Top 10 for LLM and tooling guide

OWASP Top 10 for LLM and new tooling guidance targets GenAI security #OWASP #GenAISecurity #Top10LLM jpmellojr.blogspot.com/2024/11/owas...
