
#MultiTenancy

Latest posts tagged with #MultiTenancy on Bluesky


7 Advanced Mistakes Killing Your Laravel SaaS Hey Laravel community! Building SaaS isn’t just about shipping features — it’s a battlefield where tiny oversights turn promising MVPs into…

7 ADVANCED Laravel SaaS killers: tenant-bleed events, Stripe dupes, dying queues... Production fixes + code that scales 10k users. Don't rebuild my mistakes!
(codermanjeet.medium.com/7-advanced-m...)
#Laravel #SaaS #PHP #MultiTenancy #Scaling

PostgreSQL RLS in Go, Part 2: Highload Architecture. Panics, Races, and 10,000 Partitions. Part 1 covered how to configure RLS in Go, why is_local=true protects against leaks through PgBouncer, and how to cover it with integration tests. If you haven't set up basic isolation yet, start...


#Go #Golang #PostgreSQL #RLS #Multitenancy #Backend #Database #Security #Architecture #Highload

How to stop writing WHERE tenant_id and hand security over to the database (PostgreSQL RLS in Go)? On one of my past projects we hit a "tech lead's nightmare": in the rush of a hotfix, the WHERE tenant_id = ? filter was left out of one of the API handlers. As a result, one customer saw another customer's reports. Everything was quickly...


#Golang #PostgreSQL #RLS #Multitenancy #Backend #Testcontainers #Database #Security #Architecture

Atlassian Multi-Tenancy at Scale with TiDB: 3M+ Tables Atlassian collapsed hundreds of Postgres clusters into 16 TiDB clusters, scaling to 3M+ tables with multi-tenancy and zero-downtime upgrades.

3M+ tables. One platform.

Atlassian needed to support massive multi-tenant workloads with strong isolation and predictable performance. Sharded Postgres couldn't keep up.

Here’s how they use #TiDB to manage millions of tables at scale:
https://ow.ly/NP7G50XZI3r

#SaaS #MultiTenancy #DistributedSQL


I'm currently working on Sprout 2.0, which will have multi-database and domain support, as well as a whole host of additional supporting features!

#laravel #multitenancy

Why we also love n8n: Powerful, Flexible, but Not a Silver Bullet: Part 2 We continue our comparison of two leading integration platforms, Cyclr and n8n, to discover what each offers through multi-tenancy and embedding.

How do you scale #integrations once you’re serving many customers?

In Part 2 of our n8n series, we look at multi-tenancy, secure credential management, embedded delivery, and what changes when integrations become a product feature.

👉 cyclr.com/resources/em...

#Automation #SaaS #MultiTenancy

Qdrant 1.16: Tiered Multitenancy & ACORN for Vector Search | AI News Qdrant 1.16 introduces tiered multitenancy & ACORN for scalable, high-performance vector search. Upgrade now!

AIMindUpdate News!
Need to scale your vector database efficiently? Qdrant 1.16 introduces tiered multitenancy and ACORN for superior performance and search accuracy. #Qdrant #VectorDatabase #Multitenancy

Click here↓↓↓
aimindupdate.com/2025/12/02/s...


Run all your tenants on a single cloud platform securely and with full control 🗄️🔑⚡ OpenNebula ensures isolation, permissions, and quotas for predictable, safe workloads.

#multitenancy #AIInfrastructure

Multitenancy Techniques for the UI in ASP.NET Core
developmentwithadot.blogspot.com/2025/11/mult... #dotnet #aspnetcore #web #multitenancy

Multitenancy Techniques for ASP.NET Core
developmentwithadot.blogspot.com/2025/11/mult... #dotnet #aspnetcore #web #multitenancy

AI Infrastructure Isn’t Limited By GPUs. It’s Limited By Multi-Tenancy. The latest AI Infrastructure 2025 survey shows that most organizations are struggling not due to GPU scarcity, but because of poor GPU utilization caused by limited multi-tenancy capabilities. Learn h...

The AI Infrastructure 2025 survey just dropped:
90% of teams cite cost/sharing as their top GPU blocker, not availability.
The bottleneck isn't hardware. It's multi-tenancy.

Full breakdown: vcluster.com/blog/ai-infr...

#Kubernetes #GPUs #MultiTenancy #PlatformEngineering #vCluster


Multitenancy Techniques for EF Core developmentwithadot.blogspot.com/2025/11/mult... #dotnet #efcore #multitenancy


🚀 vCluster v0.29 is live!
Standalone vCluster is here → run Kubernetes without a host cluster.
Eliminate the host cluster dependency with a portable, scalable foundation.

🔗 www.vcluster.com/changelog
#Kubernetes #vCluster #CloudNative #MultiTenancy

Future of K8s Tenancy: vCluster v0.27 Private Nodes (YouTube video by vCluster)

🚀 Private Nodes are here, and we’re breaking it down live!
Run virtual clusters on dedicated infrastructure with full node-level isolation, without losing vCluster’s speed & flexibility.

Join the webinar👇
youtube.com/live/JOz_5iz...
#vCluster #MultiTenancy #CloudNative

How to Scale Kubernetes Without etcd Sharding Is your Kubernetes cluster slowing down under load? etcd doesn’t scale well with multi-tenancy or 30k+ objects. This blog shows how virtual clusters offer an easier, safer way to isolate tenants and s...

Hitting etcd limits as your Kubernetes clusters scale?

This blog breaks down why sharding isn’t the answer, and how virtual clusters offer isolated control planes without the complexity.

👉 www.loft.sh/blog/scale-k...

#vCluster #Kubernetes #etcd #DevOps #MultiTenancy #CloudNative

Three Tenancy Modes, One Platform: Rethinking Flexibility in Kubernetes Multi-Tenancy In this blog, we explore why covering the full Kubernetes tenancy spectrum is essential, and how vCluster’s upcoming Private Nodes feature introduces stronger isolation for teams running production, r...

Namespace isolation isn’t always enough.

In this post, @stmcallister.bsky.social breaks down why Private Nodes offer stronger boundaries for multi-tenant Kubernetes, without the overhead of managing dozens of clusters.

🔗 loft.sh/blog/why-pri...

#vCluster #PlatformEngineering #MultiTenancy


From namespaces to node pools to separate clusters, each model has trade-offs.

This post breaks them down and explores how vCluster offers stronger isolation with lower overhead.
📖 loft.sh/blog/kuberne...
#Kubernetes #vCluster #MultiTenancy #DevOps #PlatformEngineering

Service Overrides: Core Concepts - Sprout - Multitenancy for Laravel Feature rich, flexible, and easy to use multitenancy package that integrates seamlessly with your Laravel application

After MUCH delay, I've finally completed the documentation for the service overrides that ship with Sprout.

#multitenancy #laravel

sprout.ollieread.com/docs/1.x/ser...

Kubernetes Multi-Tenancy: Considerations & Approaches What is Kubernetes multi-tenancy? Learn its key considerations, best practices, and three main approaches for secure implementation.

🏝️Ever pondered what happens when squabbling jellyfish govern a coral reef? Enter Kubernetes multi-tenancy! 🐙🤖 Manage multiple ‘teams’ in one cluster universe efficiently! #Kubernetes #MultiTenancy #CloudMagic


The Tenant Chronicles – Building a Multi-Tenant Todo App with Quarkus
Learn how to isolate user data and simplify CRUD logic with discriminator-based multi-tenancy in Quarkus, with no boilerplate
buff.ly/UFJDTWm
#Java #Quarkus #MultiTenancy #Hibernate #REST


Significant concerns were raised about cross-shard queries, especially in multi-tenant setups. The discussion highlighted risks of data leaks and the need for explicit controls or 'friction' when breaking tenancy boundaries. #MultiTenancy 4/5

Amazon DynamoDB data modeling for Multi-Tenancy – Part 1 | Amazon Web Services In this series of posts, we walk through the process of creating a DynamoDB data model using an example multi-tenant application, a customer issue tracking service. The goal of this series is to explore...

📊📰 Amazon DynamoDB data modeling for Multi-Tenancy – Part 1

ift.tt/wBQ6PjZ

#aws #AmazonDynamoDB #MultiTenancy #DataModeling #CloudComputing #PerformanceOptimization

Streamlining Multi-Tenant Kubernetes: A Practical Implementation Guide for 2025

Let's face it: running multiple applications on separate clusters is a resource nightmare. If you've got different teams or customers needing isolated environments, you're probably spending way more on infrastructure than you need to. Multi-tenancy in Kubernetes offers a solution, but it comes with its own set of challenges. How do you ensure proper isolation? What about resource allocation? And the big one – security?

This guide provides practical steps for implementing multi-tenant Kubernetes that actually works in production environments. By the end, you'll have a roadmap for consolidating your infrastructure while maintaining isolation where it matters.

## What Multi-Tenancy Actually Means in 2025

Multi-tenancy has become a bit of a buzzword, but at its core, it still means the same thing: multiple users sharing the same infrastructure. In Kubernetes, we typically see two flavors:

1. **Multiple teams within an organization**: Different departments or projects sharing a cluster, where team members have access through kubectl or GitOps controllers
2. **Multiple customer instances**: SaaS applications running customer workloads on shared infrastructure

The key tradeoffs haven't changed much over the years, either. You're always balancing:

* **Isolation**: Keeping tenants from accessing or messing with each other's resources
* **Resource efficiency**: Maximizing hardware utilization and reducing costs
* **Operational complexity**: Making sure your team can actually manage this setup

What has changed are the tools and patterns. Pure namespace-based isolation is still common, but we've seen a shift toward more sophisticated approaches using hierarchical namespaces, virtual clusters, and service meshes.

Let's start with the building blocks you'll need for a practical implementation. For more details about how the platform approaches multi-tenancy, check the Kubernetes documentation.
## The Building Blocks: Practical Implementation Guide

### Namespace Configuration That Actually Works

Namespaces are your first line of defense in multi-tenancy. Here's a modern namespace configuration with isolation in mind:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a
  labels:
    tenant: tenant-a
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted
    networking.k8s.io/isolation: enabled
```

This does a few key things:

* Creates a dedicated namespace for the tenant
* Labels it for easier filtering and policy targeting
* Applies Pod Security Standards (the modern replacement for Pod Security Policies)
* Marks it for network isolation

When organizing namespaces, many teams follow a pattern like `{tenant}-{environment}` (e.g., `marketing-dev`, `marketing-prod`). For SaaS applications, you might use customer IDs or similar identifiers.

The key thing to remember: namespaces alone aren't enough for true isolation. They're just containers for resources – you need additional controls to enforce boundaries.

### RBAC That Actually Isolates Tenants

Role-Based Access Control (RBAC) is essential for preventing tenants from accessing each other's resources.
Here's a pattern that works well in practice:

```yaml
# Tenant admin role
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: tenant-a
  name: tenant-admin
rules:
- apiGroups: ["", "apps", "batch"]
  resources: ["pods", "services", "deployments", "jobs"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["networking.k8s.io"]
  resources: ["ingresses"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
- apiGroups: [""]
  resources: ["configmaps", "secrets"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
# Binding for tenant admin
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tenant-a-admin-binding
  namespace: tenant-a
subjects:
- kind: User
  name: tenant-a-admin
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: tenant-admin
  apiGroup: rbac.authorization.k8s.io
```

Notice a few important things here:

* The role is scoped to a specific namespace (`tenant-a`)
* It grants permissions for common resources but nothing cluster-wide
* The binding associates a user with this role

The pattern is simple but effective: create a set of standard roles for each tenant (admin, developer, viewer), each scoped to the tenant's namespace(s).

One mistake I see teams make is being too generous with permissions. Start restrictive and loosen gradually as needed – it's much easier than trying to lock things down after a breach.

### Network Policies That Actually Isolate Traffic

Network isolation is critical for multi-tenancy. By default, all pods in a Kubernetes cluster can talk to each other – not what you want in a multi-tenant environment.
Here's a practical network policy that isolates tenant traffic:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: tenant-isolation
  namespace: tenant-a
spec:
  podSelector: {}  # Applies to all pods in namespace
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          tenant: tenant-a
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          tenant: tenant-a
  - to:
    - namespaceSelector:
        matchLabels:
          common-services: "true"
```

This policy does two important things:

* Allows ingress traffic only from the same tenant's namespace
* Allows egress traffic only to the same tenant's namespace or to namespaces labeled as common services

The second part is particularly important – your tenants probably need access to shared services like monitoring, logging, or databases. By labeling those namespaces as `common-services: "true"`, you create controlled exceptions to your isolation rules.

A common mistake is forgetting about DNS and other cluster services. Make sure your network policies allow access to kube-system services that tenants need, or you'll have some very confusing debugging sessions.

### Resource Quotas to Prevent Noisy Neighbors

One bad tenant can ruin the party for everyone by consuming all available resources. Resource quotas prevent this "noisy neighbor" problem:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-a-quota
  namespace: tenant-a
spec:
  hard:
    requests.cpu: "10"
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi
    persistentvolumeclaims: "20"
    services: "30"
    count/deployments.apps: "25"
    count/statefulsets.apps: "10"
```

This quota sets limits on:

* CPU and memory consumption (both requests and limits)
* Number of persistent volume claims (storage)
* Number of services and workloads (deployments, statefulsets)

Setting appropriate quota sizes takes some experimentation.
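A quota caps aggregate namespace usage, but a single container can still grab an outsized slice of it. A minimal LimitRange sketch, reusing the `tenant-a` namespace from the examples above (the specific CPU and memory values here are illustrative assumptions, not recommendations), might look like:

```yaml
# Hypothetical per-container defaults and ceilings for tenant-a.
apiVersion: v1
kind: LimitRange
metadata:
  name: tenant-a-limits
  namespace: tenant-a
spec:
  limits:
  - type: Container
    defaultRequest:   # applied when a container specifies no requests
      cpu: 250m
      memory: 256Mi
    default:          # applied when a container specifies no limits
      cpu: 500m
      memory: 512Mi
    max:              # hard per-container ceiling
      cpu: "2"
      memory: 4Gi
```

With something like this in place, a pod that omits resource settings still lands within predictable bounds, and no single container can claim more than `max` even when the namespace quota has headroom.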
Monitor actual usage patterns and adjust accordingly – too restrictive and legitimate workloads fail, too loose and you're back to the noisy neighbor problem.

Pro tip: In addition to ResourceQuotas (which operate at namespace level), use LimitRanges to set default and maximum limits for individual containers. This prevents tenants from creating resource-hungry pods that still fit within their overall quota.

## Real-World Implementation Benefits

Research and industry reports show clear benefits when organizations implement proper multi-tenancy in Kubernetes environments. According to documented implementations, organizations typically see:

* 30-40% reduction in infrastructure costs by consolidating multiple single-tenant clusters
* Significant decrease in time spent on cluster maintenance and updates
* Improved resource utilization, often doubling from around 30-35% to 70% or more
* Better standardization across development teams

However, implementation isn't without challenges. Common issues include:

1. Resistance from teams concerned about workload security and isolation
2. Migration complexity for existing applications
3. Learning curve for new multi-tenant tooling and workflows
4. Special accommodations needed for resource-intensive or security-sensitive workloads

This highlights an important point: multi-tenancy isn't all-or-nothing. Many successful implementations use a hybrid approach, keeping some high-security or high-performance workloads on dedicated clusters while consolidating standard workloads in shared environments.

## Solving the Big Three Challenges

### Challenge 1: Security Vulnerabilities

Cross-tenant data leakage and escalation attacks are the nightmare scenarios in multi-tenant environments. Here's a practical security checklist:

1. **Enforce Pod Security Standards**:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: v1.29
```

The "restricted" profile prevents pods from running as privileged, accessing host namespaces, or using dangerous capabilities.

2. **Isolate tenant storage**: Use StorageClasses with tenant-specific access controls, or better yet, separate storage backends for sensitive data.
3. **Implement regular security scanning**: Tools like Trivy, Falco, and Kube-bench can identify vulnerabilities in your multi-tenant setup.
4. **Audit, audit, audit**: Enable audit logging and regularly review access patterns – many breaches are detected through unusual access.

### Challenge 2: Resource Contention

Even with resource quotas, you can still run into contention issues. Here are some practical solutions:

1. **Pod Priority and Preemption**:

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: tenant-high-priority
value: 1000000
```

Assign different priority classes to tenant workloads based on their importance.

2. **Node Anti-Affinity**:

```yaml
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: tenant
          operator: In
          values:
          - tenant-a
      topologyKey: "kubernetes.io/hostname"
```

This prevents multiple pods from the same tenant being scheduled on the same node, distributing the load.

3. **Quality of Service Classes**: Set appropriate QoS classes (Guaranteed, Burstable, BestEffort) for different tenant workloads to influence how they're treated under resource pressure.

### Challenge 3: Operational Complexity

Managing dozens or hundreds of tenants manually isn't feasible. Here's how to simplify operations:

1. **Automate tenant provisioning**: Create a standardized process for spinning up new tenant namespaces, applying policies, and setting quotas.
2. **Use a tenant operator**: Tools like Capsule or the Multi-Tenant Operator can handle tenant lifecycle management, from creation to termination:

```yaml
apiVersion: tenancy.stakater.com/v1alpha1
kind: Tenant
metadata:
  name: tenant-a
spec:
  owners:
  - name: tenant-a-admin
    kind: User
  namespaces:
  - tenant-a-dev
  - tenant-a-prod
  quota:
    hard:
      requests.cpu: '10'
      requests.memory: 20Gi
  resourcePooling: true
  namespacePrefix: tenant-a-
```

3. **Implement tenant-aware monitoring**: Tag all metrics and logs with tenant identifiers to simplify debugging and enable tenant-specific dashboards.
4. **Create self-service capabilities**: Build internal tools that let tenants manage their own resources within the constraints you define.

## Wrapping Up: Is Multi-Tenancy Right for You?

Multi-tenant Kubernetes isn't a silver bullet, but it can significantly reduce costs and operational overhead when implemented correctly. Here's a quick checklist to decide if it's right for your organization:

✅ You have multiple teams or customers using similar infrastructure
✅ You're comfortable with the security implications of shared infrastructure
✅ You have the operational maturity to implement and maintain isolation
✅ The cost savings outweigh the increased complexity

The implementation patterns we've covered – namespace isolation, RBAC, network policies, and resource quotas – provide a solid foundation for most multi-tenant environments. Start small, perhaps with just two teams or customers, and expand as you gain confidence in your isolation mechanisms.

Remember, you don't have to go all-in on multi-tenancy. Many organizations use a hybrid approach, with shared clusters for most workloads and dedicated clusters for high-security or high-performance applications.

Whatever approach you choose, make sure your teams understand the boundaries and limitations of your multi-tenant setup. Technical controls are important, but so is user education – a confused tenant can unintentionally cause problems for everyone.
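As a recap of the building blocks covered above, a minimal onboarding manifest for a new tenant might bundle the namespace, quota, and network policy together. This is a sketch reusing this guide's label conventions for a hypothetical `tenant-b`; the quota values are illustrative assumptions:

```yaml
# Hypothetical one-file onboarding for a new tenant "tenant-b",
# combining the namespace, quota, and network policy patterns above.
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-b
  labels:
    tenant: tenant-b
    pod-security.kubernetes.io/enforce: baseline
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-b-quota
  namespace: tenant-b
spec:
  hard:
    requests.cpu: "10"
    requests.memory: 20Gi
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: tenant-isolation
  namespace: tenant-b
spec:
  podSelector: {}          # all pods in the tenant namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          tenant: tenant-b
```

Keeping one such file per tenant (applied with `kubectl apply -f` or through a GitOps repo) makes provisioning repeatable, which is the automation point made under Challenge 3.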
What's your experience with multi-tenant Kubernetes? Have you implemented any of these patterns, or do you have alternative approaches? Share your thoughts in the comments below.

Missed the #ArgoCD Projects Masterclass? 🐙

@christianh814.bsky.social breaks down how to structure AppProjects, set up RBAC, scope clusters and repos, and secure your GitOps workflows at scale. 🔄

Replay: buff.ly/wyHrvXf

#GitOps #Kubernetes #DevOps #MultiTenancy #CloudNative


I've made good headway on Bud and Terra, add-ons for @sprout.ollieread.com.

I've been working on tenant-specific database connections, mailers, logging, and auth providers, as well as tenant-specific domains, SSL generation, and DNS verification.

#laravel #multitenancy


The third core add-on is Terra, which adds not only tenant-specific domain support, but a handful of supporting functionality for managing domains and SSLs.

It doesn't rely on Bud or Seedling.

#laravel #multitenancy


Once that is complete, Seedling can be finished.

Seedling comes with multi-database-specific functionality, building on top of Bud's tenant-specific database connections by adding migration, seeding, and database creation support.

#laravel #multitenancy


The next part of Sprout's development will be the add-on Bud, which adds support for runtime-resolved tenant-specific configuration.

It comes with implementations for:

- Auth Providers
- Database Connections
- Cache Stores
- Filesystem Disks
- and more

#laravel #multitenancy

Improve cost visibility of an Amazon RDS multi-tenant instance with Performance Insights and Amazon Athena | Amazon Web Services In this post we introduce a solution that addresses a common challenge faced by many customers: managing costs in multi-tenant applications, particularly for shared databases in Amazon Relational Database...

📊📰 Improve cost visibility of an Amazon RDS multi-tenant instance with Performance Insights and Amazon Athena

buff.ly/hHnIwtt

#aws #Multitenancy #AWS #RDS #CostManagement #PerformanceInsights

Multi Tenancy in Apache Hop and Putki (YouTube video by know.bi)

🚀 Multi-tenancy in Apache Hop & Putki! Managing multiple customers in a shared infrastructure? Explore sharding, striping & hybrid models to balance security, scalability & cost.
Check the video youtube.com/watch?v=F_2e...

#apachehop #multitenancy #putki #datasky #databs
