
Massimo Bonanni

@massimobonanni

"Paranormal Trainer, with the head in the Cloud and all the REST in microservices!" (cit.)

63 Followers · 68 Following · 1,850 Posts · Joined 10.11.2024

Latest posts by Massimo Bonanni @massimobonanni

Secret scanning pattern updates — March 2026 GitHub secret scanning continually updates its detectors, validators, and analyzers. Here’s what’s new for March 2026. 28 new secret detectors from 15 providers, including Lark, Vercel, Snowflake, and Supabase. 39… The post Secret scanning pattern updates — March 2026 appeared first on The GitHub Blog.

Secret scanning pattern updates — March 2026

10.03.2026 21:43 👍 0 🔁 0 💬 0 📌 0
The era of “AI as text” is over. Execution is the new interface. AI is shifting from prompt-response interactions to programmable execution. See how the GitHub Copilot SDK enables agentic workflows directly inside your applications. The post The era of “AI as text” is over. Execution is the new interface. appeared first on The GitHub Blog.

The era of “AI as text” is over. Execution is the new interface.

10.03.2026 21:18 👍 0 🔁 0 💬 0 📌 0
.NET 11 Preview 2 is now available! Find out about the new features in .NET 11 Preview 2 across the .NET runtime, SDK, libraries, ASP.NET Core, Blazor, C#, .NET MAUI, and more! The post .NET 11 Preview 2 is now available! appeared first on .NET Blog.

.NET 11 Preview 2 is now available!

10.03.2026 20:46 👍 0 🔁 0 💬 0 📌 0
.NET and .NET Framework March 2026 servicing releases updates A recap of the latest servicing updates for .NET and .NET Framework for March 2026. The post .NET and .NET Framework March 2026 servicing releases updates appeared first on .NET Blog.

.NET and .NET Framework March 2026 servicing releases updates

10.03.2026 20:46 👍 0 🔁 0 💬 0 📌 0
CodeQL 2.24.3 adds Java 26 support and other improvements CodeQL is the static analysis engine behind GitHub code scanning, which finds and remediates security issues in your code. We’ve recently released CodeQL 2.24.3, which adds support for Java 26… The post CodeQL 2.24.3 adds Java 26 support and other improvements appeared first on The GitHub Blog.

CodeQL 2.24.3 adds Java 26 support and other improvements

10.03.2026 20:34 👍 0 🔁 0 💬 0 📌 0
Dependabot now supports pre-commit hooks GitHub Dependabot now natively supports automatic dependency updates for pre-commit hooks. By adding pre-commit as a package ecosystem in your dependabot.yml configuration, Dependabot will parse your .pre-commit-config.yaml, check each hook’s… The post Dependabot now supports pre-commit hooks appeared first on The GitHub Blog.
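The announcement describes adding pre-commit as a package ecosystem in `dependabot.yml`. A minimal sketch of what that `.github/dependabot.yml` could look like, assuming the ecosystem name `pre-commit` from the post; the directory and schedule values below are illustrative, not prescribed:

```yaml
# .github/dependabot.yml : hedged sketch based on the post's description
version: 2
updates:
  - package-ecosystem: "pre-commit"   # ecosystem name taken from the announcement
    directory: "/"                    # repo root, where .pre-commit-config.yaml lives
    schedule:
      interval: "weekly"              # illustrative cadence
```

With a configuration like this, Dependabot would parse `.pre-commit-config.yaml` and open update PRs when hook versions fall behind, as the post describes.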

Dependabot now supports pre-commit hooks

10.03.2026 18:22 👍 0 🔁 0 💬 0 📌 0
Extend your coding agent with .NET Skills Introducing the dotnet/skills repository and how .NET agent skills can improve coding agent workflows. The post Extend your coding agent with .NET Skills appeared first on .NET Blog.

Extend your coding agent with .NET Skills

09.03.2026 20:43 👍 0 🔁 0 💬 0 📌 0
Visual Studio Dev Essentials: Free, Practical Tools for Every Developer When I first found Visual Studio Dev Essentials, it felt like discovering a hidden door in the developer toolkit world. I’d heard about free tools and cloud credits, but I wasn’t sure if it would really matter in day-to-day coding life. The short answer: it absolutely does. What struck me most was how the program was built with real developers in mind, and the fact […] The post Visual Studio Dev Essentials: Free, Practical Tools for Every Developer appeared first on Visual Studio Blog.

Visual Studio Dev Essentials: Free, Practical Tools for Every Developer

09.03.2026 17:32 👍 1 🔁 0 💬 0 📌 0
Under the hood: Security architecture of GitHub Agentic Workflows GitHub Agentic Workflows are built with isolation, constrained outputs, and comprehensive logging. Learn how our threat model and security architecture help teams run agents safely in GitHub Actions. The post Under the hood: Security architecture of GitHub Agentic Workflows appeared first on The GitHub Blog.

Under the hood: Security architecture of GitHub Agentic Workflows

09.03.2026 16:43 👍 0 🔁 0 💬 0 📌 0
A closer look at Work IQ

Work IQ is the intelligence layer that personalizes Microsoft 365 Copilot to you and your organization. It is the "brain" behind Copilot that understands context, relationships, and work patterns, so Copilot and agents can be faster, more accurate, and more secure than approaches built on connectors alone. Work IQ comprises three tightly integrated layers: data, context, and skills & tools. We'll examine the components in each of these layers, highlight some of what's coming to make Work IQ even better in the coming months, and share example prompts you can try to see the power of Work IQ in action.

Multi-Model

Before we discuss the components within the layers of Work IQ, it's important to note our approach to foundation models. Microsoft 365 Copilot brings leading models from multiple providers directly into Copilot experiences. We offer Copilot Chat users a choice of foundation models from OpenAI and Anthropic, and we will be bringing advanced models from other providers as well. This means users can access (and Work IQ will complement) these models' advanced reasoning and multi-step capabilities in their Copilot experiences, not just through specialized tools. Copilot applies the right model for the task and gives users the option to choose the model that best meets their needs.

Data

The foundation of Work IQ is its secure access to and understanding of both structured and unstructured data from Microsoft 365, Dynamics 365, Power Apps, Power BI (coming soon), and other connected business systems that represent work happening across your organization. A customer's Microsoft 365 tenant data provides Copilot with a foundational understanding of individual and collective work.
Centered around the users and groups in an organization, it includes the permission-based, information-protected content stored in SharePoint and OneDrive, including Word, Excel, PowerPoint and other file types, as well as Outlook emails and Teams meetings and chats. The data contained in your Microsoft 365 tenant also includes rich metadata and signals that further describe patterns of action, activity, collaboration, and communication between users and groups over time.

Customers can ingest business data from other systems and line-of-business applications into their tenant using Copilot Connectors, enabling Copilot to reason over data that may reside in non-Microsoft systems. Customers can choose from hundreds of pre-built connectors or build their own custom connectors.

In addition to Microsoft 365 data, we are integrating Dynamics 365 and Power Apps data into Work IQ. Dataverse is the container for the structured datasets that power customers' Power Apps and Dynamics 365 applications. Later this month, customers will see Microsoft 365 Copilot embedded within Dynamics 365 Sales and Dynamics 365 Customer Service as an in-app experience, like Copilot in Word, PowerPoint and Excel. With this access, Copilot will be able to reason across both Microsoft 365 productivity data and the business data generated by these systems of record, making it possible for Copilot to answer complex questions like "Help me evaluate how issues raised by my parts supplier in our Teams call last week might impact my inventory and sales in the coming months" and to provide detailed, specific answers connecting business communications with business data. For more information on Work IQ in Dynamics 365, check out this blog with additional details. We anticipate broad access to Dataverse via Work IQ across all Microsoft 365 Copilot experiences in Summer 2026.
Context

Implicit grounding in Microsoft 365 data (and soon Dynamics 365 and Power Apps data) serves as the baseline for understanding work context. Work IQ expands this with an additional, always-evolving layer of insights that enhances the speed and accuracy of Copilot's responses to queries. Work IQ helps Copilot learn how people and businesses work: the skills they have, the projects that are important, the frequency of collaboration, who they work with and for, critical workflows, the velocity of communication, and much more.

Copilot's memory further tailors the experience to each user in Copilot Chat and Copilot in the M365 apps. Memory is constructed from a combination of persistent, explicit memory and query-dependent, implicit memory. Explicit memory is provided to Copilot by the user. A user can personalize Copilot by creating "Custom Instructions"; for example, a user may manually add an instruction to "Only provide responses to prompts in the active voice". Alternatively, a user can create "saved memories". For example, prompting Copilot to "remember that I do not like responses in the passive voice" will result in Copilot creating a saved memory: "Prefers responses written in active voice; dislikes passive voice". In each case, memory is explicitly created by a user action.

Copilot also creates implicit memory. To do this, it uses chat history to infer a durable body of insights. As that body of insights grows, Copilot can provide increasingly personal responses and actions. Beyond chat history, we are also working on incorporating activity, such as workflows, to increase the fidelity of Copilot's memory. In the coming months, we will start incorporating other activity patterns generated from all of your everyday apps, including Teams, Outlook, Word, Excel, and PowerPoint. Stay tuned for additional updates on enhancements to Copilot memory.
Copilot's understanding of your Microsoft 365 tenant data is further enhanced by the semantic index. Rather than relying solely on keyword or lexical matching, the semantic index enables Copilot to perform meaning-based retrieval of your data, producing a bounded set of relevant candidates for downstream processing. Data ingested into the customer tenant using Copilot Connectors is also included in the semantic index. Indexed data retains customers' existing security, privacy and governance policies, including permissions, sensitivity labels, and tenant boundaries.

Finally, we're adding business understanding to Work IQ to provide Copilot and agents with a more comprehensive view of customers' Power Apps and Dynamics data in Dataverse. To do this, we've created a layer of semantic understanding, consisting of ontologies and glossaries that capture procedural knowledge from existing business workflows. The result: Copilot can have an expert understanding of the tasks performed by the people, teams, customers, suppliers, and other business entities working together to run your business.

Skills & tools

Work IQ includes agentic skills that provide specialized instructions to Copilot and agents. These skills are designed to help Copilot perform specific tasks with more speed and accuracy. Microsoft is continuously adding skills to Work IQ that enable Copilot to deliver experiences that are highly tailored for specific tasks like "schedule a meeting", "find and retrieve data from an external source", or "access meeting details and transcripts". For example, we have deployed skills designed to optimize retrieval of work content in response to complex user queries. The result is improved Copilot responses to queries that reference vague and hard-to-find archived information. If skills describe what to do, tools do it.
We're developing custom toolsets designed to execute against the intent of the agentic skills that power Work IQ experiences. We're selecting the best tooling available (for example, MCP server tools, agent flows, APIs and plugins) to help Copilot observe, retrieve, reason, and execute using the tools we create. Today, customers can build agents and add skills and tools that, in combination with our orchestration services, can also be used with Work IQ. Going back to our complex content retrieval query: these are the tools that, when combined, help us search, open and find content as described by the skill. This supports faster and more accurate responses and actions, while respecting governance policies, privacy and security. We expect to continuously experiment with and release new skills and tools for Work IQ.

Security, Privacy, and Compliance

We know how important it is that customers trust the AI services they use. Like the other components of Copilot, Work IQ is designed from the ground up to respect our customers' existing user permissions, Security Group assignments, sensitivity labels, and Data Loss Prevention (DLP) policies. Similarly, we are committed to compliance with the legal and regulatory requirements of the countries and regions in which we operate, including the GDPR and the EU Data Boundary.

Experiences and Extensibility

Work IQ is deeply integrated into Copilot. Licensed Microsoft 365 Copilot users will experience Work IQ in the responses and actions offered in Copilot Chat with the Work toggle activated, and with Copilot in the M365 apps like Word, Excel, PowerPoint and Teams. Licensed Microsoft 365 users working with Copilot in Dynamics and Power Apps will find Work IQ enriched with Dataverse data. In the coming months we will unify Work IQ experiences across all licensed Copilot surfaces. Developers can also integrate Work IQ into their own apps and agents.
The Work IQ API exposes Copilot intelligence through a standard RESTful interface, enabling developers to build agents grounded in the live work context accessible through Copilot Chat, while inheriting Microsoft's enterprise-grade identity, security, permissions, and regulatory compliance. The Work IQ API will be available in Public Preview later this month. Note that we support the CLI today, and we're working to offer MCP and A2A support in the coming months.

Below are sample prompts that showcase Work IQ in action. We've left some blanks so you can test them in your own environment:

Copilot Chat: Over the past [timeframe] I had a meeting with [person's first name] where I asked about [specific topic or project]. Can you look for the information [person's first name] shared around this in that meeting?

Copilot Chat: Identify tasks or action items assigned to me from my manager in this week's emails, Teams chats, and meeting notes, and compile them into a checklist.

Copilot in Word: Draft an executive summary of [project] that highlights its purpose, progress, and impact on the business.

Copilot in Outlook: Recommend how to resolve conflicts on my calendar for tomorrow.

Copilot in Dynamics 365 Sales: Find the email with the latest price changes, analyze the impact on my opportunities, and recommend next steps for me to communicate the changes.

Researcher: What can you tell me about [Competitor name]? How do the offerings from [Competitor name] compare with [Company name] offerings? If you were in a compete situation against [Competitor name], what would you leverage as [Company name]'s response to their tooling, and how could a customer accomplish the same thing? Pull in context from emails and Teams chats shared by the [Customer name] account team.

Conclusion

Work IQ combines powerful skills and tools with your work data and business context to help make Copilot more personalized, accurate, and trusted.
We're excited about Work IQ today and about the role it will play in helping our customers with their AI transformation. Whether you're a Microsoft 365 Copilot user, an AI power user, or a professional developer, we can't wait to see what you do with Work IQ.
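The post describes the Work IQ API only as "a standard RESTful interface" that inherits Microsoft's identity and permissions; no endpoint, path, or payload shape has been published yet. As a purely hypothetical sketch of the call pattern, the URL, JSON fields, and bearer-token scheme below are invented placeholders, not a documented contract:

```python
# Hypothetical sketch only: the endpoint URL and payload fields are invented
# placeholders; the post confirms only "a standard RESTful interface".
# Bearer-token authentication is an assumption, not a published detail.
import json
import urllib.request

WORKIQ_ENDPOINT = "https://example.invalid/workiq/query"  # placeholder, not a real endpoint

def build_workiq_request(token: str, prompt: str) -> urllib.request.Request:
    """Assemble an authenticated POST for a Work IQ query (shape assumed)."""
    body = json.dumps({"prompt": prompt}).encode("utf-8")
    return urllib.request.Request(
        WORKIQ_ENDPOINT,
        data=body,
        headers={
            "Authorization": f"Bearer {token}",   # token from your identity provider
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_workiq_request("<access-token>", "Summarize my open action items")
print(req.get_method(), req.full_url)
```

Until the Public Preview documentation lands, treat everything above as illustrative of a generic REST call pattern, not of the actual Work IQ API surface.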

A closer look at Work IQ

09.03.2026 14:18 👍 0 🔁 0 💬 0 📌 0
From draft to done: agentic Copilot in Excel, Word, and PowerPoint

Real work is iterative—context shifts, edits pile up, and version sprawl happens fast. Microsoft 365 Copilot is built for that reality. Copilot is now agentic, collaborating with you to take multi-step, app-native actions directly in Excel, Word, and PowerPoint so you can move work forward where you already work. With Work IQ, Copilot stays grounded in what's current across your files, meetings, chats, and relationships. Changes are applied in the file—transparent, reviewable, and reversible—so you can iterate with confidence. And because everyone works in the same file, teams can coauthor and refine one version instead of passing copies around. Copilot honors Microsoft 365 permissions and sensitivity labels, and with built-in model choice, you can match the right model to the task without switching tools. Here's what that means when Copilot creates and edits your Excel, Word, and PowerPoint files.

Excel: natively build and refine spreadsheets

Copilot in Excel helps you model scenarios, refresh inputs, and spot trends more efficiently by editing directly in your workbook with native tables, formulas, and charts. The goal isn't just an answer—it's a working model you trust. Copilot applies updates to the grid so you can review changes, tweak assumptions, and iterate as inputs shift. You can also choose the OpenAI or Anthropic model for in-app editing to match the task. This experience now supports locally stored workbooks, file uploads, and file search, with Work IQ grounding rolling out later this month. Additional model options—including GPT-5.4 and Claude Opus 4.6—will roll out in the coming weeks.

Prompts to try

Build a financial forecast from scratch: Create a P&L forecast using the latest data in [Operating Model.xlsx], including revenue, cost of goods sold, and operating expenses. Build out for 12 months starting January 2026 and design the model to show month-by-month growth, retention, and unit economics with adjustable assumptions.

Refresh sales performance data in a shareable format: Replicate last week's analysis in a new sheet using the latest numbers shared in [Sales Weekly Update.docx] and show top insights on how our business is performing. Then help me turn it into something easy to share, with visualizations of customer churn trends, and clearly highlight the areas of concern.

Create a valuation model with sensitivity analysis: Create a discounted cash flow model for [company name] by pulling their latest disclosures from the web. Assume free cash flows grow at 12% annually for 5 years and use a discount rate of 9% with a terminal growth rate of 2%. Add sensitivity analysis tables showing how the valuation changes if the discount rate varies from 8% to 10%, the growth rate from 10% to 14%, and terminal growth from 1% to 3%. Include a dropdown selector for best, base, and worst-case scenarios and use standard financial formatting.

Availability: Generally available today in Excel for Windows, web, and Mac for Microsoft 365 Copilot users.

Word: collaborate to turn drafts into review-ready docs

Copilot in Word helps you turn working drafts into review-ready documents. It works in the same document your team is editing, so you can coauthor and iterate without creating—or reconciling—copies. Ask Copilot to update or fill in a template-based doc, add an executive summary, and suggest edits that incorporate stakeholder feedback into a draft that's ready to share. It can restructure sections, rewrite for clarity, and apply Word-native styles so edits are easy to review. With Work IQ, Copilot can keep the document aligned to your latest work context (decisions, dates, and details), so you spend less time reconciling updates.
This editing experience is available today, and model choice with OpenAI and Anthropic models will be available in April.

Prompts to try

Create a project status and decision brief: Create a project brief for [project title] using information from my meetings, files, and emails from the past [timeframe]. Include an Executive Summary at the top. Structure the document using Heading 1 and Heading 2 styles where appropriate. Include these sections: Current status, Key decisions and progress, Next steps, Unresolved issues or risks, Recommendations. Keep it factual and flag any missing inputs as questions at the end.

Refresh a recurring monthly exec report: This is last month's executive update report. Help me update the entire report with this [month]'s updates by pulling the latest from my recent team emails and Teams chats. Remove last month's updates and replace them with the newest ones. Only edit the last 3 columns of the table; update status with an appropriate icon (green = on track, yellow = at risk, red = off track). If a status is not on track, explain why in the "latest developments" column. Apply all new text updates in blue font.

Formatting and polish pass: Make the structure and formatting consistent and easy to scan—fix heading hierarchy, spacing between sections, fonts, lists, and tables. Reorganize any long or messy sections into clearer chunks using subheadings or tables where helpful. Align the tone and formatting with recent proposals or reports I've worked on so it matches our usual style. Tighten up wordy sections, flag places where the structure could be clearer, and sanity-check any market, customer, or competitor claims using readily available high-level information where it's easy to do. The goal is a clean, professional document that's ready to share without additional formatting or structural edits.

Availability: Generally available today in Word on Windows, web, and Mac for Microsoft 365 Copilot users.
PowerPoint: keep decks crisp and on-brand

PowerPoint is introducing Copilot editing experiences to help you co-create presentations that are clear, visual, and on brand. Copilot respects your organization's templates and themes by using approved colors, layouts, object styles, and images, so presentations stay consistent without extra effort. Use Copilot in existing decks to sharpen a slide's main message, simplify dense bullets, and turn text into clearer visuals like timelines, diagrams, and charts without redesigning from scratch. It's useful when you're polishing a slide for stakeholders—tightening the message in a shared deck so the team can iterate together, without version sprawl. This experience will continue to expand across more scenarios over time, including model choice with OpenAI and Anthropic models, explicit grounding in your files, emails, meetings, and chats, and implicit grounding through Work IQ.

Prompts to try

Create a branded presentation with web data: "Create an executive presentation on the major market pressures and trends shaping [industry]. Include a high-level competitive analysis of leading players, outlining their relative strengths, weaknesses, and strategic focus areas, and conclude with implications for industry leaders."

Transform text into a chart: "Transform the text on this slide into a chart."

Brand check (using an established brand kit): "Check this slide for brand consistency against my organization's guidelines."

Availability: Copilot editing in PowerPoint is rolling out on the web for Microsoft 365 Copilot users and will become available on Windows and Mac in the coming months.

This is about supporting real work the way it is done: turning in-progress work into outcomes you're ready to share—review-ready docs, trusted models, and on-brand slides. Try agentic Copilot in Word, Excel, and PowerPoint today.
Additional helpful resources

Support documentation for editing with Copilot: Excel, Word, PowerPoint
Managing your brand kit and templates for PowerPoint: Create and manage official Brand kits in the Microsoft 365 Copilot app; Keep your presentation on-brand with Copilot

From draft to done: agentic Copilot in Excel, Word, and PowerPoint

09.03.2026 14:18 👍 0 🔁 0 💬 0 📌 0
Enable agents to bring apps into the flow of work—while keeping IT in control

A seller needs to log a new opportunity. A manager wants to approve a request. A marketer has to update a campaign asset. Until today, these actions often meant taking insights from Microsoft 365 Copilot and switching tabs. Agents can now change that: helping people take action in their go-to work apps, without needing to leave chat in Copilot.

But enabling this kind of capability raises real questions for IT: What risks do these agents introduce? Are they actually being used? And are they behaving as expected? The more agents you launch and the more powerful those agents are, the more these answers matter.

That's why we're introducing three new capabilities across Copilot and Microsoft Copilot Studio that help people move work forward faster—while keeping IT firmly in control:

Enhanced agents that bring apps directly into chat in Copilot
New ways for employees to find the right agent, fast
Tools to continuously evaluate agent quality over time

With these capabilities, employees can use their go-to business apps directly in Copilot and get a simpler way to discover the right agents for their tasks. Meanwhile, IT gains objective signals that help validate agent behavior as usage expands. Here's what you need to know.

Interacting with apps through chat in Copilot

Today, the gap between AI insight and in-app execution starts to close—without IT needing to relax standards or introduce new risk vectors. When an employee prompts Copilot and calls an agent connected to an approved app, that agent can bring that app's interactive experience directly into the conversation.
From there, the employee stays in the driver's seat, using chat in Copilot to take real, in-app actions such as:

Scheduling a new event in Outlook
Adding a new sales opportunity to Dynamics 365 Sales
Creating or editing a flyer in Adobe Express
Completing an approval form via Microsoft Power Apps

All of this happens without needing to leave Copilot. Employees interact with the app directly in chat or use follow-up prompts to carry out work in the app.

Get started quickly with pre-built app experiences

This month, we're launching support for a focused set of early experiences, including:

Microsoft apps, such as Outlook, Dynamics 365 Customer Service (public preview by early April), and Dynamics 365 Sales (public preview by early April)
Custom line-of-business apps built with Power Apps (public preview this March)

Take Outlook, for example. You can now tell Copilot who you want to meet with, and it'll find time slots that work. Simply select one, and an agent will schedule that time. This experience is currently generally available (GA). Similarly, you can ask Copilot to draft an email on your behalf, edit it, and hit send—without leaving the chat (currently in Frontier).

We will also introduce in-chat experiences for a handful of Microsoft partner apps, including Adobe Express, Adobe Acrobat, Base44, Box, Canva, Coursera, Figma, Miro, Monday.com, Optimizely, and Wix. All pre-built partner app experiences will be available via the Microsoft 365 Agent Store by mid-April.

"With the Figma app in Copilot, you can turn conversations into AI-generated FigJam diagrams to take ideas further," says Brendan O'Driscoll, Figma's VP of Product. "By connecting Figma with your favorite tools, it's easier than ever to visualize, iterate, and collaborate with your entire team."

Build the app experiences your team needs

You're not limited to the apps we ship out of the box.
Your team can build agents in Copilot that work with the mission-critical apps that your systems, processes, and workflows depend on. Under the hood, two open extensibility standards make this possible: MCP Apps and the OpenAI Apps SDK. Both give development teams a structured way to connect the apps your organization relies on to agents in Copilot—so those apps can surface interactive experiences directly in chat. Agents built with either standard use familiar development patterns, so your team can build and iterate without a steep learning curve. MCP Apps and the Apps SDK will roll out to GA on web and desktop later this month, with mobile following this spring. Share the Apps SDK and MCP Apps technical documentation with your development team to get started.

Get to know the IT controls

Even as agents become more powerful, we've designed this experience with governance in mind. Agents with interactive app experiences use the same governance and admin patterns you already trust for agents in Copilot, keeping IT control the top priority. You decide which agents are available in your tenant and who can use them—globally, per agent, or for specific departments. Each agent operates strictly within existing app permissions and identity boundaries, so you can enable richer experiences in Copilot without opening new, unmanaged entry points into your environment. All agents can be monitored end-to-end using Agent 365—a unified control plane that gives IT a single place to see which agents are live, where they can act, and how they're being used. With it, you can control how agents are provisioned and scoped before rolling out this new experience broadly. Learn how to provision your organization's agents at scale.

Empowering employees to find the right agent fast

As agents in Microsoft 365 Copilot become more capable, employees need a reliable way to find the right agent for the task at hand.
But when dozens of agents are available, employees shouldn't have to know which one to use when. Agent Recommendations (generally available) surfaces the right agent at the right moment, directly in the flow of work. When users prompt Microsoft 365 Copilot, the system analyzes their intent and suggests an agent that's already installed and approved by IT. No special syntax or prompt engineering is required. These recommendations are assistive, meaning employees can choose to start a new conversation with the suggested agent or continue in their current chat. All the while, discoverability only happens within known, governed boundaries—mitigating the introduction of new risks. This helps employees quickly find agents purpose-built for the scenario at hand, while IT maintains a consistent governance model as usage expands.

Holding agents to your organization's standards

As organizations rely on more agents for more impactful work, quality and reliability stop being nice-to-haves—they're essential. Small changes to prompts, models, or data can introduce drift that is hard to detect, especially as agent usage expands across teams and scenarios. Agent Evaluations in Microsoft Copilot Studio (currently in public preview) gives you a structured way to answer the question: Is this agent actually doing what it's supposed to do? Evals work by running agents against authentic questions and scenarios, then generating objective scores for accuracy and intent alignment—so quality isn't just assumed; it's measured. By comparing results over time, teams can catch regressions earlier, validate improvements, and apply a consistent quality bar before agents reach broader use. These signals reinforce that agents aren't set-and-forget automation; they're managed enterprise workloads.
With objective evidence in hand, IT and makers can make informed rollout decisions and scale agent usage more confidently, knowing behavior is monitored and reliability can be improved as usage grows. Learn how to set up Agent Evals in Microsoft Copilot Studio, so you can assess agent quality and readiness before expanding usage.

Make agents more capable while staying in control

Support for apps in agents, Agent Recommendations, and Agent Evals are designed to work together as a system, helping organizations move faster—without compromising trust. By treating agents as first-class, governed workloads, IT teams can enable more capable agents while maintaining the control their organizations expect. To get started:

Learn how dev teams build with the Apps SDK and MCP Apps
Control agents end-to-end with Agent 365
Discover how to configure Agent Evals

Enable agents to bring apps into the flow of work—while keeping IT in control

09.03.2026 14:18 👍 0 🔁 0 💬 0 📌 0
Preview
Can Your M365 Copilot Offer Actually Win Customer Deals?

Note: This tool is designed for Microsoft partners operating within the AI Business Solutions commercial solution area who are building or publishing Microsoft 365 Copilot consulting service offers on the Microsoft Marketplace.

Creating Offers That Win Deals

Microsoft 365 Copilot represents one of the most significant opportunities for Microsoft partners. Organizations worldwide are seeking trusted partners to guide their AI transformation journey. The Microsoft Marketplace is where these customers discover partner services, making your offer listing your most critical sales asset.

A great Copilot service deserves a great marketplace listing, and getting there means balancing multiple dimensions at once:

- Distributed Guidance: Offer development best practices are distributed across multiple documentation sources, slide decks, and training materials.
- Manual Review Burden: Evaluating whether your offer content addresses all required elements (value proposition, business outcomes, change management methodology, co-sell alignment) requires hours of cross-referencing.
- Quality Consistency: Without a systematic approach, offer descriptions may miss critical customer-facing elements that drive engagement.
- Co-sell Alignment Complexity: Achieving proper alignment with the AI Business Solutions solution area requires understanding evolving co-sell requirements.
- Iteration Overhead: Each content revision requires repeating the manual evaluation process, extending time-to-market.

Getting these elements right is what separates offers that convert from those that get scrolled past.

Introducing M365 Copilot Offer Validation Assessment

M365 Copilot Offer Validation Assessment eliminates guesswork from marketplace offer development. The tool analyzes your offer content against Microsoft's official M365 Copilot Offer Development Guide and generates actionable, prioritized recommendations in minutes.
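To make the idea of a guidance-driven rule concrete, here is a hypothetical sketch of what one evaluation rule could look like, using the 100-200 character offer summary guidance described later in the post. The function name and result shape are illustrative, not the tool's actual API:

```python
# Hypothetical sketch of a single evaluation rule (illustrative only,
# not the tool's actual API). Checks the 100-200 character,
# value-led offer summary guidance.
def check_offer_summary(summary: str) -> dict:
    n = len(summary)
    if 100 <= n <= 200:
        return {"status": "Pass", "observation": f"Summary is {n} characters."}
    return {
        "status": "Gap",
        "priority": "High",
        "observation": f"Summary is {n} characters; guidance calls for 100-200.",
        "recommendation": "Rewrite as a value-led summary of 100-200 characters.",
    }
```

A real rule engine would layer 35+ such checks across the 13 service areas and roll the results up into the report.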
How It Works

- Document Upload: Assess your offer content before publishing by uploading PDF, Word, or PowerPoint documents containing your draft listing content.
- 35+ Evaluation Rules: A comprehensive rule engine evaluates your content across 13 service areas derived directly from Microsoft guidance.
- AI-Contextualized Feedback: Every observation is translated into Copilot-specific improvement recommendations with references to source guidance.
- Local Execution: Runs entirely on your machine; your offer content never leaves your environment.

What You Get

The assessment generates a detailed Excel report showing the outcome after assessment. The report includes:

| Column | Description |
| --- | --- |
| Area | Category of guidance (Value Proposition, Business Outcomes, etc.) |
| Configuration | Specific rule being evaluated |
| Status | Pass, Needs Review, or Gap identified |
| Priority | High, Medium, or Low importance |
| Observation | What was found (or not found) in your content |
| Recommendation | Specific improvement action with guidance reference |
| Reference | Link to Microsoft documentation |

Evaluation Coverage

The tool evaluates your offer against 13 critical service areas:

1. Offer Summary — 100-200 character value-led summary
2. Offer Description — Comprehensive 500-3000 character description with target audience, deliverables, and scope
3. Search Keywords — Copilot, M365, AI, and industry-specific discoverability terms
4. Value Proposition — Clear articulation of how you maximize Copilot ROI
5. Business Outcomes — Measurable productivity gains, time savings, and efficiency metrics
6. Functional Scenarios — Target roles (Sales, HR, Finance, Legal, Customer Service) with relevant KPIs
7. Service Pillars — Coverage of Advisory, Deployment, Extensibility, and Adoption services
8. Adoption & Change Management — Methodology, stakeholder engagement, training, and success measurement
9. Call to Action — Clear engagement path for interested customers
10. Pricing — Transparent pricing structure and engagement terms
11. Media Assets — Collateral references, case studies, and promotional materials
12. Security & Compliance — Data governance, oversharing risks, and technical readiness
13. Co-sell Readiness — AI Business Solutions solution area and Microsoft 365 product alignment

Value for Partners

The tool helps partners to:

Reach Quality Faster
Reduce the time from draft to polished offer. Automated validation shows exactly where to improve, so each revision moves you closer to a listing that converts.

Maximize Co-sell Opportunity
The tool specifically validates alignment with Microsoft's AI Business Solutions solution area and co-sell requirements. Properly aligned offers receive prioritized attention from Microsoft sellers, driving qualified customer referrals.

Differentiate Your Services
Stand out in the marketplace with offers that clearly communicate:
- Measurable business outcomes customers can expect
- Specific functional scenarios and target roles
- Formal change management methodology
- Security and governance expertise

Standardize Quality
Whether you're a large partner with multiple practice leads creating offers or a boutique firm with a single marketplace presence, the tool ensures consistent evaluation against the same Microsoft guidance standards.

Iterate with Confidence
Make changes to your offer content and re-run the assessment to verify improvements. Timestamped reports track your progress across multiple evaluation cycles.

Value for Customers

When partners use this tool, customers benefit from:

Clearer Value Communication
Offers created with guidance-aligned content clearly articulate what customers will receive, enabling faster and more confident purchasing decisions.

Outcome-Focused Engagements
Partners prompted to include measurable business outcomes help customers set realistic expectations and track Copilot deployment success.
Reduced Risk
Offers that address security, compliance, and governance concerns upfront give customers confidence that data protection is a priority, not an afterthought.

Better Partner Matching
Well-structured offers with specific functional scenarios and industry focus help customers identify partners whose expertise matches their needs.

Getting Started

M365 Copilot Offer Validation Assessment is available now. Requirements are minimal:
- Python 3.8 or later
- A PDF, Word, or PowerPoint document with your offer content

Start in three steps:

1. Install Python dependencies: `pip install -r requirements.txt`
2. Run the tool: `python main.py`
3. Upload your document when prompted

Within minutes, you'll have a comprehensive assessment report with prioritized recommendations tailored to your specific offer content.

Elevate Your Marketplace Presence

In a crowded marketplace, quality matters. Customers searching for M365 Copilot partners have dozens of options; your offer listing is your opportunity to stand out. M365 Copilot Offer Validation Assessment provides the objective, guidance-driven analysis you need to create marketplace listings that communicate your expertise, align with Microsoft co-sell priorities, and convert customer interest into engagement. Stop guessing whether your offer meets the bar. Start validating.

Resources

- M365 Copilot Offer Validation Assessment: microsoft/m365-copilot-marketplace-offer-assessment
- M365 Copilot Offer Development Guide: https://microsoftpartners.microsoft.com/Downloads/?filename=abs/unprotected/M365-Copilot-Offer-Development-Guide.pptx
- Microsoft Marketplace Documentation: https://learn.microsoft.com/partner-center/marketplace/

Can Your M365 Copilot Offer Actually Win Customer Deals?

09.03.2026 08:17 👍 0 🔁 0 💬 0 📌 0
Preview
Available today: GPT-5.4 Thinking in Microsoft 365 Copilot

Today, we're bringing OpenAI's GPT‑5.4 Thinking to Microsoft 365 Copilot and Microsoft Copilot Studio, available in addition to the recent GPT-5.3 Instant update. With GPT‑5.4 Thinking, Copilot can think deeper on complex work by combining advances in reasoning, coding, and agentic workflows, helping it work through technical prompts and longer tasks with higher-quality outputs and less back-and-forth. Work IQ brings relevant work context into Copilot so it can reason, personalize, and help you turn deeper thinking into context-aware drafts, slides, and spreadsheets.

We are committed to bringing you the latest cutting-edge AI innovation and model choice built for work and tailored to your business needs, with the security, compliance, and privacy that you expect from Microsoft.

Get started today

GPT-5.4 Thinking is now available in Copilot Studio early release cycle environments and begins rolling out today to Microsoft 365 Copilot users with priority access and Microsoft 365 Copilot Chat users with standard access. Learn more about standard versus priority access here. In Copilot Chat, you can select GPT‑5.4 Think deeper from the model selector under More; in Copilot Studio, you can select GPT‑5.4 Reasoning. Our team will continue to refine the experience based on your feedback.

Learn more about Microsoft 365 Copilot and Microsoft Copilot Studio and start transforming work with Copilot today. For model details, learn more about GPT-5.4 Thinking here. For the latest research insights on the future of work and generative AI, visit WorkLab.

Available today: GPT-5.4 Thinking in Microsoft 365 Copilot

06.03.2026 23:08 👍 0 🔁 0 💬 0 📌 0
Preview
How to scan for vulnerabilities with GitHub Security Lab’s open source AI-powered framework GitHub Security Lab Taskflow Agent is very effective at finding Auth Bypasses, IDORs, Token Leaks, and other high-impact vulnerabilities. The post How to scan for vulnerabilities with GitHub Security Lab’s open source AI-powered framework appeared first on The GitHub Blog.

How to scan for vulnerabilities with GitHub Security Lab’s open source AI-powered framework

06.03.2026 22:31 👍 1 🔁 0 💬 0 📌 0
Preview
Figma MCP server can now generate design layers from VS Code GitHub Copilot users can now connect to the Figma MCP server to both pull design context into code and send rendered UI to Figma as editable frames. Together, these capabilities… The post Figma MCP server can now generate design layers from VS Code appeared first on The GitHub Blog.

Figma MCP server can now generate design layers from VS Code

06.03.2026 21:52 👍 1 🔁 0 💬 0 📌 0
Preview
GitHub Copilot in Visual Studio Code v1.110 – February release The Visual Studio Code February 2026 release makes agents practical for longer-running and more complex tasks. This gives you more control over how they run, new ways to extend what… The post GitHub Copilot in Visual Studio Code v1.110 – February release appeared first on The GitHub Blog.

GitHub Copilot in Visual Studio Code v1.110 – February release

06.03.2026 20:46 👍 0 🔁 0 💬 0 📌 0
Preview
From Manual Document Processing to AI-Orchestrated Intelligence

Building an IDP Pipeline with Azure Durable Functions, DSPy, and Real-Time AI Reasoning

The Problem

Think about what happens when a loan application, an insurance claim, or a trade finance document arrives at an organisation. Someone opens it, reads it, manually types fields into a system, compares it against business rules, and escalates for approval. That process touches multiple people, takes hours or days, and the accuracy depends entirely on how carefully it's done.

Organizations have tried to automate parts of this before — OCR tools, templated extraction, rule-based routing. But these approaches are brittle. They break when the document format changes, and they can't reason about what they're reading. The typical "solution" falls into one of two camps:

- Manual processing. Humans read, classify, and key in data. Accurate but slow, expensive, and impossible to scale.
- Single-model extraction. Throw an OCR/AI model at the document, trust the output, push to downstream systems. Fast but fragile: no validation, no human checkpoint, no confidence scoring.

What's missing is the middle ground: an orchestrated, multi-model pipeline with built-in quality gates, real-time visibility, and the flexibility to handle any document type without rewriting code. That's what IDP Workflow is: a six-step AI-orchestrated pipeline that processes documents end to end, from a raw PDF to structured, validated data, with human oversight built in.

This isn't automation replacing people. It's AI doing the heavy lifting and humans making the final call.
Architecture at a Glance

```
POST /api/idp/start
  → Step 1: PDF Extraction (Azure Document Intelligence → Markdown)
  → Step 2: Classification (DSPy ChainOfThought)
  → Step 3: Data Extraction (Azure Content Understanding + DSPy LLM, in parallel)
  → Step 4: Comparison (field-by-field diff)
  → Step 5: Human Review (HITL gate — approve / reject / edit)
  → Step 6: AI Reasoning Agent (validation, consolidation, recommendations)
  → Final structured result
```

The backend is Azure Durable Functions (Python) on Flex Consumption: customers only pay for what they use, and it scales automatically. The frontend is a Next.js dashboard with SignalR real-time updates and a Reaflow workflow visualization. Every step broadcasts stepStarted → stepCompleted / stepFailed events so the UI updates as work progresses.

The pattern applies wherever organisations receive high volumes of unstructured documents that need to be classified, data-extracted, validated, and approved.

The Six Steps, Explained

Step 1: PDF → Markdown

We use Azure Document Intelligence with the prebuilt-layout model to convert uploaded PDFs into structured Markdown, preserving tables, headings, and reading order. Markdown turns out to be a much better intermediate representation for LLMs than raw text or HTML.

```python
class PDFMarkdownExtractor:
    async def extract(self, pdf_path: str) -> tuple[PDFContent, Step01Output]:
        poller = self.client.begin_analyze_document(
            "prebuilt-layout",
            analyze_request=AnalyzeDocumentRequest(url_source=pdf_path),
            output_content_format=DocumentContentFormat.MARKDOWN,
        )
        result: AnalyzeResult = poller.result()
        # Split into per-page Markdown chunks...
```

Output: per-page Markdown content, total page count, and character stats.

Step 2: Document Classification (DSPy)

Rather than hard-coding classification rules, we use DSPy with ChainOfThought prompting. DSPy lets us define classification as a signature, a declarative input/output contract, and the framework handles prompt optimization.
```python
class DocumentClassificationSignature(dspy.Signature):
    """Classify document page into predefined categories."""

    page_content: str = dspy.InputField(desc="Markdown content of the document page")
    available_categories: str = dspy.InputField(desc="Available categories")
    classification: DocumentClassificationOutput = dspy.OutputField()
```

Categories are loaded from a domain-specific classification_categories.json. Adding new categories means editing a JSON file, not code.

Critically, classification is per-page, not per-document. A multi-page loan application might contain a loan form on page 1, income verification on page 2, and a property valuation on page 3, each classified independently with its own confidence score and detected field indicators. This means multi-section documents are handled correctly downstream.

Why DSPy? It gives us structured, typed outputs via Pydantic models, automatic prompt optimization, and clean separation between the what (signature) and the how (ChainOfThought, Predict, etc.).

Step 3: Dual-Model Extraction (Run in Parallel)

This is where things get interesting. We run two independent extractors in parallel:

- Azure Content Understanding (CU): a specialized Azure service that takes the raw PDF and applies a domain-specific schema to extract structured fields.
- DSPy LLM Extractor: uses the Markdown from Step 1 with a dynamically generated Pydantic model (built from the domain's extraction_schema.json) to extract the same fields via an LLM. The LLM provider is selectable at runtime: Azure OpenAI, Claude, or open-weight models deployed on Azure (Qwen, DeepSeek, Llama, Phi, and more from the Azure AI Model Catalog).
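The dynamically generated Pydantic model can be sketched with pydantic's create_model. This is a minimal illustration assuming a simple schema shape (`{"fields": [...]}`); the repo's actual extraction_schema.json format and helper names may differ:

```python
from typing import Optional

from pydantic import BaseModel, create_model

# Illustrative mapping from schema type names to Python types;
# the real schema format in extraction_schema.json may differ.
TYPE_MAP = {"string": str, "number": float, "integer": int, "boolean": bool}

def model_from_schema(schema: dict) -> type[BaseModel]:
    fields = {}
    for field in schema["fields"]:
        py_type = TYPE_MAP.get(field.get("type", "string"), str)
        if field.get("required", False):
            fields[field["name"]] = (py_type, ...)             # required field
        else:
            fields[field["name"]] = (Optional[py_type], None)  # optional field
    return create_model(schema.get("name", "ExtractionModel"), **fields)
```

The generated class can then serve as a typed output target, so extraction results are validated the moment the LLM returns them.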
```python
# In the orchestrator — fire both tasks at once
azure_task = context.call_activity("activity_step_03_01_azure_extraction", input)
dspy_task = context.call_activity("activity_step_03_02_dspy_extraction", input)
results = yield context.task_all([azure_task, dspy_task])
```

Both extractors use the same domain-specific schema but approach the problem differently. Running two models gives us a natural cross-check: if both extractors agree on a field value, confidence is high. If they disagree, we know exactly where to focus human attention — not the entire document, just the specific fields that need it.

Multi-Provider LLM Support

The DSPy extraction and classification steps aren't locked to a single model provider. From the dashboard, users can choose between:

- Azure OpenAI in Foundry Models — GPT-4.1, o3-mini (default)
- Claude on Azure — Anthropic's Claude models
- Foundry Models — open-weight models deployed on Azure via Foundry Models: Qwen 2.5 72B, DeepSeek V3/R1, Llama 3.3 70B, Phi-4, and more

The third option is key: instead of routing through a third-party service, you deploy open-weight models directly on Azure as serverless API endpoints through Azure AI Foundry. These endpoints expose an OpenAI-compatible API, so DSPy talks to them the same way it talks to GPT-4.1, just with a different api_base. You get the model diversity of the open-weight ecosystem with Azure's enterprise security, compliance, and network isolation.

A factory pattern in the backend resolves the selected provider and model at runtime, so switching from Azure OpenAI to Qwen on Azure AI is a single dropdown change: no config edits, no redeployment. This makes it easy to benchmark different models against the same extraction schema and compare quality.

Step 4: Field-by-Field Comparison

The comparator aligns the outputs of both extractors and produces a diff report: matching fields, mismatches, fields found by only one extractor, and a calculated match percentage.
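The comparator itself can be sketched as a small pure-Python diff. The function and key names below are illustrative, not the project's actual implementation:

```python
# Hypothetical sketch of the Step 4 field-by-field comparator
# (names are illustrative, not the project's actual code).
def compare_extractions(azure_fields: dict, dspy_fields: dict) -> dict:
    all_fields = set(azure_fields) | set(dspy_fields)
    shared = set(azure_fields) & set(dspy_fields)
    matches = sorted(f for f in shared if azure_fields[f] == dspy_fields[f])
    mismatches = sorted(f for f in shared if azure_fields[f] != dspy_fields[f])
    pct = 100.0 * len(matches) / len(all_fields) if all_fields else 0.0
    return {
        "matches": matches,
        "mismatches": mismatches,
        "azure_only": sorted(set(azure_fields) - set(dspy_fields)),
        "dspy_only": sorted(set(dspy_fields) - set(azure_fields)),
        "match_percentage": round(pct, 1),
    }
```

Mismatched fields, along with fields found by only one extractor, are exactly what gets surfaced to the reviewer.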
The diff report feeds directly into the human review step. Output: "Match: 87.5% (14/16 fields)"

Step 5: Human-in-the-Loop (HITL) Gate

The pipeline pauses and waits for a human decision. The Durable Functions orchestrator uses wait_for_external_event() with a configurable timeout (default: 24 hours) implemented as a timer race:

```python
review_event = context.wait_for_external_event(HITL_REVIEW_EVENT)
timeout = context.create_timer(
    context.current_utc_datetime + timedelta(hours=HITL_TIMEOUT_HOURS)
)
winner = yield context.task_any([review_event, timeout])
```

The frontend shows a side-by-side comparison panel where reviewers can see both values for each disputed field and pick Azure's value, the LLM's value, or type in a correction. They can add notes explaining their decision, then approve or reject. If nobody responds within the timeout, it auto-escalates (configurable behavior).

The orchestrator doesn't poll. It doesn't check a queue. The moment the reviewer submits their decision, the pipeline resumes automatically, using Durable Functions' native external event pattern.

Step 6: AI Reasoning Agent

The final step uses an AI agent with tool-calling to perform structured validation, consolidate field values, and generate a confidence score. This isn't just a prompt — it's an agent backed by the Microsoft Agent Framework with purpose-built tools:

- validate_fields — runs domain-specific validation rules (data types, ranges, cross-field logic)
- consolidate_extractions — merges Azure CU + DSPy outputs using confidence-weighted selection
- generate_summary — produces a natural-language summary with recommendations

The reasoning step can use standard models or reasoning-optimised models like o3 or o3-mini for higher-stakes validation. The agent streams its reasoning process to the frontend in real time: validation results, confidence scoring, and recommendations all appear as they're generated.
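As an illustration of what confidence-weighted consolidation could look like, here is a sketch under assumed data shapes; the real consolidate_extractions tool in the repo may work differently:

```python
# Hypothetical sketch of confidence-weighted consolidation.
# Assumes each extractor reports {field: {"value": ..., "confidence": float}};
# the actual tool's data shapes may differ.
def consolidate(azure_fields: dict, dspy_fields: dict) -> dict:
    consolidated = {}
    for field in set(azure_fields) | set(dspy_fields):
        candidates = [
            src[field] for src in (azure_fields, dspy_fields) if field in src
        ]
        # Keep the candidate value with the highest reported confidence.
        best = max(candidates, key=lambda c: c["confidence"])
        consolidated[field] = best["value"]
    return consolidated
```

Fields reported by only one extractor are kept as-is; disputed fields resolve to whichever source was more confident.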
Domain-Driven Design: Zero-Code Extensibility

One of the most powerful design choices: adding a new document type requires zero code changes. Each domain is a folder under idp_workflow/domains/ with four JSON files:

```
idp_workflow/domains/insurance_claims/
├── config.json                     # Domain metadata, thresholds, settings
├── classification_categories.json  # Page-level classification taxonomy
├── extraction_schema.json          # Field definitions (used by both extractors)
└── validation_rules.json           # Business rules for the reasoning agent
```

The extraction_schema.json is particularly interesting: it's consumed by both the Azure CU service (which builds an analyzer from it) and the DSPy extractor (which dynamically generates a Pydantic model at runtime):

```python
def create_extraction_model_from_schema(schema: dict) -> type[BaseModel]:
    """Dynamically create a Pydantic model from an extraction schema JSON."""
    # Maps schema field definitions → Pydantic field annotations
    # Supports nested objects, arrays, enums, and optional fields
```

We currently ship four domains out of the box: insurance claims, home loans, small business lending, and trade finance.

See It In Action: Processing a Home Loan Application

To make this concrete, here's what happens when you process a multi-page home loan PDF with personal details, financial tables, and mixed content.

1. Upload & Extract. The document hits the dashboard and Step 1 kicks off. Azure Document Intelligence converts all pages to structured Markdown, preserving tables and layout. You can preview the Markdown right in the detail panel.
2. Per-Page Classification. Step 2 classifies each page independently: page 1 is a Loan Application Form, page 2 is Income Verification, page 3 is a Property Valuation. Each has its own confidence score and detected fields listed.
3. Dual Extraction. Azure CU and the DSPy LLM extractor run simultaneously. You can watch both progress bars in the dashboard.
4. Comparison. The system finds 16 fields total. 14 match between the two extractors. Two fields differ — the annual income figure and the loan term. Those are highlighted for review.
5. Human Review. The reviewer sees both values side by side for each disputed field, picks the correct value (or types a correction), adds a note, and approves. The moment they submit, the pipeline resumes — no polling.
6. AI Reasoning. The agent validates against home loan business rules: loan-to-value ratio, income-to-repayment ratio, document completeness. Validation results stream in real time. Final output: 92% confidence, 11 out of 12 validations passed. The AI flags a minor discrepancy in employment dates and recommends approval with a condition to verify employment tenure.

Result: a document that would take 30-45 minutes of manual processing, handled in under 2 minutes, with complete traceability. Every step, every decision, timestamped in the event log.

Real-Time Frontend with SignalR

Every orchestration step broadcasts events through Azure SignalR Service, targeted to the specific user who started the workflow:

```python
def _broadcast(context, user_id, event, data):
    return context.call_activity("notify_user", {
        "user_id": user_id,
        "instance_id": context.instance_id,
        "event": event,
        "data": data,
    })
```

The frontend generates a session-scoped userId, passes it via the x-user-id header during SignalR negotiation, and receives only its own workflow events. No Pub/Sub subscriptions to manage.

The Next.js frontend uses:

- Zustand + Immer for state management (4 stores: workflow, events, reasoning, UI)
- Reaflow for the animated pipeline visualization
- React Query for data fetching
- Tailwind CSS for styling

The result is a dashboard where you can upload a document and watch each pipeline step execute in real time.
Infrastructure: Production-Ready from Day One

The entire stack deploys with a single command using the Azure Developer CLI (azd):

```shell
azd up
```

What gets provisioned:

| Resource | Purpose |
| --- | --- |
| Azure Functions (Flex Consumption) | Backend API + orchestration |
| Azure Static Web App | Next.js frontend |
| Durable Task Scheduler | Orchestration state management |
| Storage Account | Document blob storage |
| Application Insights | Monitoring and diagnostics |
| Network Security Perimeter | Storage network lockdown |

Infrastructure is defined in Bicep with:

- Parameterized configuration (memory, max instances, retention)
- RBAC role assignments via a consolidated loop
- Two-region deployment (Functions + SWA have different region availability)
- Network Security Perimeter deployed in Learning mode, switched to Enforced post-deploy

Key Engineering Decisions

Why Durable Functions?

Orchestrating a multi-step pipeline with parallel execution, external event gates, timeouts, and retry logic is exactly what Durable Functions was designed for. The orchestrator is a Python generator function; each yield is a checkpoint that survives process restarts:

```python
def idp_workflow_orchestration(context: DurableOrchestrationContext):
    step1 = yield from _execute_step(context, ...)  # PDF extraction
    step2 = yield from _execute_step(context, ...)  # Classification
    results = yield context.task_all([azure_task, dspy_task])  # Parallel extraction
    # ... HITL gate, reasoning agent, etc.
```

No external queue management. No state database. No workflow engine to operate.

Why Dual Extraction?

Running two independent models on the same document gives us:

- Cross-validation — agreement between models is a strong confidence signal
- Coverage — one model might extract fields the other misses
- Auditability — human reviewers can see both outputs side by side
- Graceful degradation — if one service is down, the other still produces results

Why DSPy over Raw Prompts?
DSPy provides:

- Typed I/O — Pydantic models as signatures, not string parsing
- Composability — ChainOfThought, Predict, and ReAct are interchangeable modules
- Prompt optimization — once you have labeled examples, DSPy can auto-tune prompts
- LM scoping — `with dspy.context(lm=self.lm):` isolates model configuration per call

Getting Started

```shell
# Clone
git clone https://github.com/lordlinus/idp-workflow.git
cd idp-workflow

# DTS Emulator (requires Docker)
docker run -d -p 8080:8080 -p 8082:8082 \
  -e DTS_TASK_HUB_NAMES=default,idpworkflow \
  mcr.microsoft.com/dts/dts-emulator:latest

# Backend
python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
func start

# Frontend (separate terminal)
cd frontend && npm install && npm run dev
```

You'll also need Azurite (the local storage emulator) running, plus Azure OpenAI, Document Intelligence, Content Understanding, and SignalR Service endpoints configured in local.settings.json. See the Local Development Guide for the full setup.

Who Is This For?

If any of these sound familiar, IDP Workflow was built for you:

- "We're drowning in documents." — High-volume document intake with manual processing bottlenecks.
- "We tried OCR but it breaks on new formats." — Brittle extraction that fails when layouts change.
- "Compliance needs an audit trail for every decision." — Regulated industries where traceability is non-negotiable.

This is an AI-powered document processing platform, not a point OCR tool, with human oversight, dual AI validation, and domain extensibility built in from day one.
What's Next

- Prompt optimization — using DSPy's BootstrapFewShot with domain-specific training examples
- Batch processing — fan-out/fan-in orchestration for processing document queues
- Custom evaluators — automated quality scoring per domain
- Additional domains — community-contributed domain configurations

Try It Out

The project is fully open source: github.com/lordlinus/idp-workflow

Deploy to your own Azure subscription with azd up, upload a PDF from the sample_documents/ folder, and watch the pipeline run. We'd love feedback, contributions, and new domain configurations. Open an issue or submit a PR!

From Manual Document Processing to AI-Orchestrated Intelligence

06.03.2026 05:21 👍 0 🔁 0 💬 0 📌 0
Preview
GPT-5.4 is generally available in GitHub Copilot GPT-5.4, OpenAI’s latest agentic coding model, is now rolling out in GitHub Copilot. In our early testing of real-world, agentic, and software development capabilities, GPT-5.4 consistently hits new rates of… The post GPT-5.4 is generally available in GitHub Copilot appeared first on The GitHub Blog.

GPT-5.4 is generally available in GitHub Copilot

06.03.2026 00:51 👍 0 🔁 0 💬 0 📌 0
Preview
Discover and manage agent activity with new session filters GitHub Enterprise AI Controls and agent control plane now includes additional session filters, making it easier to discover and manage agent activity across your enterprise. What’s new In addition to… The post Discover and manage agent activity with new session filters appeared first on The GitHub Blog.

Discover and manage agent activity with new session filters

06.03.2026 00:51 👍 0 🔁 0 💬 0 📌 0
Preview
Quick access to merge status in pull requests is in public preview We are rolling out the pull request merge status at the top of every pull request page! Check merge readiness from anywhere in the pull request experience, including the new… The post Quick access to merge status in pull requests is in public preview appeared first on The GitHub Blog.

Quick access to merge status in pull requests is in public preview

05.03.2026 23:32 👍 0 🔁 0 💬 0 📌 0
Preview
GitHub Copilot coding agent for Jira is now in public preview You can now assign Jira issues to GitHub Copilot coding agent, our asynchronous, autonomous agent, and get AI-generated draft pull requests created in your GitHub repository. When you assign a… The post GitHub Copilot coding agent for Jira is now in public preview appeared first on The GitHub Blog.

GitHub Copilot coding agent for Jira is now in public preview

05.03.2026 23:03 👍 0 🔁 0 💬 0 📌 0
Preview
Insiders (version 1.111) Learn what is new in Visual Studio Code 1.111 (Insiders) Read the full article

Insiders (version 1.111)

Learn what is new in Visual Studio Code 1.111 (Insiders) Read the full article

05.03.2026 22:53 👍 0 🔁 0 💬 0 📌 0
Preview
Making agents practical for real-world development Explore agent orchestration, extensibility, and continuity in VS Code 1.110: lifecycle hooks, agent skills, session memory, and integrated browser tools. Read the full article

Making agents practical for real-world development

Explore agent orchestration, extensibility, and continuity in VS Code 1.110: lifecycle hooks, agent skills, session memory, and integrated browser tools. Read the full article

05.03.2026 22:53 👍 0 🔁 0 💬 0 📌 0
Preview
Copilot code review now runs on an agentic architecture Copilot code review now runs on an agentic tool-calling architecture and is generally available for all users with Copilot Pro, Copilot Pro+, Copilot Business, and Copilot Enterprise. For background, see… The post Copilot code review now runs on an agentic architecture appeared first on The GitHub Blog.

Copilot code review now runs on an agentic architecture

05.03.2026 21:33 👍 0 🔁 0 💬 0 📌 0
Preview
Hierarchy view improvements and file uploads in issue forms Hierarchy view improvements in GitHub Projects You now have several improvements to hierarchy view in GitHub Projects based on your feedback: Filter sub-issues: You can now filter sub-issues using syntax… The post Hierarchy view improvements and file uploads in issue forms appeared first on The GitHub Blog.

Hierarchy view improvements and file uploads in issue forms

05.03.2026 21:33 👍 0 🔁 0 💬 0 📌 0
Preview
Add images to agent sessions Quick start your agent session by starting from an image. Simply paste, drag, or click the image icon to wherever you work with agents on github.com (e.g., from the recently… The post Add images to agent sessions appeared first on The GitHub Blog.

Add images to agent sessions

05.03.2026 21:33 👍 0 🔁 0 💬 0 📌 0
Preview
Pick a model for @copilot in pull request comments You can ask Copilot coding agent to make changes in any pull request by mentioning @copilot. This works in pull requests created by Copilot and in pull requests created by… The post Pick a model for @copilot in pull request comments appeared first on The GitHub Blog.

Pick a model for @copilot in pull request comments

05.03.2026 21:33 👍 0 🔁 0 💬 0 📌 0
Preview
60 million Copilot code reviews and counting How Copilot code review helps teams keep up with AI-accelerated code changes. The post 60 million Copilot code reviews and counting appeared first on The GitHub Blog.

60 million Copilot code reviews and counting

05.03.2026 20:57 👍 0 🔁 0 💬 0 📌 0
Preview
Release v1.0 of the official MCP C# SDK Discover what’s new in the v1.0 release of the official MCP C# SDK, including enhanced authorization, richer metadata, and powerful patterns for tool calling and long-running requests. The post Release v1.0 of the official MCP C# SDK appeared first on .NET Blog.

Release v1.0 of the official MCP C# SDK

05.03.2026 18:45 👍 0 🔁 0 💬 0 📌 0