Dapr Deployment Models

Dapr is best-known for running as a microservices sidecar in Kubernetes environments, but the powerful APIs it offers extend far beyond this single deployment model. As a portable, multifaceted toolkit, Dapr has made its mark across various platforms, from VMs and edge deployments to acting as a shared service in Knative functions, and even on serverless cloud platforms like Azure Container Apps and Diagrid Catalyst. With its unified programming model and ability to adapt to diverse operational needs, Dapr is evolving from a sidecar into serverless APIs. In this exploration, we'll delve into the various deployment models of Dapr, examining their primary drivers, advantages, and limitations.

Dapr on a VM

In its simplest form, Dapr runs as a single process and does not depend on containers or Kubernetes. This allows Dapr to operate efficiently on edge devices and integrate seamlessly into local development environments, requiring minimal infrastructure and external dependencies.

Dapr without containers

Interestingly, the latest State of Dapr Report indicates that about 25% of Dapr users operate outside of Kubernetes environments. The primary applications for this deployment model include restricted developer machines and edge devices with limited hardware. A typical use case is an edge application with limited capabilities that uses a local Dapr instance over HTTP to reliably transmit data to various remote cloud APIs. Additionally, Dapr deployments in this mode can run in offline or air-gapped environments, adding to its flexibility.
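
As a rough illustration of this pattern, the sketch below assumes a container-free Dapr process listening on its default HTTP port (3500) and a hypothetical output binding component named "cloudstorage" that holds the cloud credentials; the edge application only talks to the local Dapr HTTP API and lets the component handle connectivity to the remote cloud service.

```python
# Minimal sketch: an edge app sending readings through a local Dapr process over HTTP.
# Assumes Dapr's default HTTP port (3500) and a hypothetical output binding named "cloudstorage".
import requests

DAPR_HTTP = "http://localhost:3500/v1.0"

def send_reading(reading: dict) -> None:
    # Invoke the output binding; the Dapr component holds the cloud credentials
    # and knows how to reach the remote API.
    resp = requests.post(
        f"{DAPR_HTTP}/bindings/cloudstorage",
        json={"operation": "create", "data": reading},
        timeout=5,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    send_reading({"deviceId": "edge-001", "temperature": 21.4})
```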

Dapr with containers

Most developers today are well-acquainted with containers and often have a platform like Docker or Podman on their local machines. Dapr is compatible with these platforms, making them a popular choice for development. When Dapr is initialized on a machine with a functioning container runtime, the self-hosted install includes a Redis container (providing a local key/value store and pub/sub component), the Dapr placement service (used for managing Dapr actors running locally), and a Zipkin container for capturing and visualizing traces. These three containers - Dapr placement, Redis, and Zipkin - create a fully configured Dapr development environment. When you run an application with the Dapr CLI, a Dapr sidecar process is launched locally alongside it. Primarily used for local development, this setup offers a quick, lightweight solution with all the necessary backing services already in place and ready to use.
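
For instance, once the local environment is initialized, an application started with the Dapr CLI can use the default Redis-backed components immediately. The sketch below uses the Dapr Python SDK and assumes the default component names generated by the self-hosted install ("statestore" and "pubsub"):

```python
# Minimal sketch using the default components created by `dapr init`
# (Redis-backed "statestore" and "pubsub"); start it with the Dapr CLI, e.g. `dapr run -- python app.py`.
import json
from dapr.clients import DaprClient

with DaprClient() as client:
    # Key/value state backed by the local Redis container
    client.save_state(store_name="statestore", key="order-1", value=json.dumps({"status": "new"}))

    # Publish an event to the local Redis pub/sub component
    client.publish_event(
        pubsub_name="pubsub",
        topic_name="orders",
        data=json.dumps({"orderId": "order-1"}),
        data_content_type="application/json",
    )
```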

Dapr on Kubernetes

Dapr is compatible with a wide range of Kubernetes distributions, from development-oriented tools like kind and Minikube to standard Kubernetes and managed services such as AKS, EKS, GKE, and Red Hat OpenShift.

Deployment to Kubernetes is the current preference for production environments and incorporates several key control plane components:

  • dapr operator: Manages Dapr component updates and tracks Kubernetes service endpoints for Dapr components (such as state stores and pub/subs).
  • dapr sidecar-injector: Injects the Dapr sidecar into application pods that carry the required annotations.
  • dapr placement: Used only for actors; it maintains the mapping between actor instances and application pods.
  • dapr sentry: Acts as a certificate authority and manages mutual TLS between applications and control plane services.

While the Dapr control plane facilitates automation and sidecar management, it also introduces its own operational challenges and complexities, particularly as Dapr spreads across multiple Kubernetes clusters within an organization. These challenges include control plane and data plane updates, certificate renewals, adherence to Dapr best practices, and monitoring and troubleshooting issues. To address these operational complexities, we at Diagrid developed Conductor. Conductor simplifies the operation of Dapr in production Kubernetes environments, reducing downtime and security incidents by automating key operational tasks.

The Dapr control plane automates sidecar management within a Kubernetes cluster and simplifies tasks such as service discovery and integration with Kubernetes secrets. The data plane, consisting of the sidecars themselves, is more flexible in its deployment options, as we will see next.

Dapr as a sidecar

In Kubernetes environments, the most common deployment mode for the Dapr data plane is the application sidecar model. It is used for both stateless and stateful long-running applications: each application is managed as a Kubernetes Deployment or StatefulSet, and the Dapr sidecar container is injected into every pod, ensuring a one-to-one mapping between each application instance and its corresponding Dapr instance (a short sketch of the interaction follows the list below). This architecture offers several key benefits:

  • Resource isolation: Each application instance and its sidecar operate independently, minimizing the risk of resource contention.
  • Lifecycle decoupling: Changes or disruptions in one instance do not impact others, ensuring stability across the application.
  • Enhanced security: Each Dapr instance provides a distinct security boundary, reducing the risk of widespread issues in the event of a security breach.
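
To make the model concrete, here is a minimal sketch of how an application in this setup talks only to its own sidecar on localhost, which then handles service discovery, mTLS, and routing to the target application; the "checkout" and "inventory" app IDs and the method name are hypothetical.

```python
# Minimal sketch: a "checkout" pod calling an "inventory" app through its own sidecar.
# The app only ever talks to localhost; the sidecar resolves the target and applies mTLS.
import requests

DAPR_HTTP = "http://localhost:3500/v1.0"

def reserve_stock(order_id: str) -> dict:
    resp = requests.post(
        f"{DAPR_HTTP}/invoke/inventory/method/reserve",  # hypothetical app-id and method
        json={"orderId": order_id},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()
```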

While there are some drawbacks to bear in mind, they are typically manageable in real-world applications. For instance, upgrades to the Dapr sidecar require pod restarts, a process that is automated within Kubernetes environments. Also, the startup ordering of containers in a pod is being addressed by the Kubernetes native sidecar containers feature.

Despite these considerations, the benefits of the sidecar model significantly outweigh the drawbacks, making it the recommended production deployment model. Its ability to isolate resources, decouple lifecycles, and provide strong security boundaries greatly enhances overall application reliability.

Shared Dapr instance

In situations where the tight isolation provided by the sidecar model isn't crucial, but quick startup, efficient shutdown, and resource optimization are priorities, the shared Dapr deployment mode is an attractive option. This relatively new feature (still a sandbox project) enables multiple applications on the same node to share a single Dapr instance. Rather than injecting Dapr into each application pod, the Dapr data plane is deployed as a DaemonSet or a standalone Deployment within Kubernetes. This separation from the application pods offers a variety of benefits:

  • Faster startup times: Applications are operational immediately upon startup, as the shared Dapr instance is already running as a DaemonSet or a separate Deployment. This is particularly advantageous for quick-executing, small compute units such as functions in Knative or OpenFunction.
  • Reduced resource usage: Applications that can share minimal CPU/memory resources, such as functions or workloads on resource-constrained edge devices, benefit from cost savings because Dapr resources are shared rather than reserved per application.
  • No app restart on Dapr upgrade: Upgrading Dapr doesn't necessitate restarting the application pods, minimizing disruption. Nonetheless, applications using the shared instance might still experience operational disruptions during upgrades.

Despite these advantages, there are significant trade-offs:

  • Shared downtime: If the shared Dapr instance fails, all dependent apps face service disruptions, whether due to runtime issues, upgrades, downgrades, or evictions.
  • Noisy neighbor: In shared mode, a spike in demand from one app can strain shared resources, impacting other tenants. Memory leaks or high usage affect the entire node, not just a single pod.
  • Manual scaling: Unlike the sidecar model where Dapr and the application scale together, in shared mode, scaling up applications can lead to a mismatch in the number of Dapr instances available, exacerbating 'noisy neighbor' issues.

This deployment model is particularly tailored for short-lived applications that experience rapid and frequent scaling, such as those found in Function-as-a-Service (FaaS) scenarios. While it offers benefits like faster startup times and reduced resource consumption, it's important to acknowledge significant trade-offs in terms of security and reliability. Furthermore, this model is currently an experimental sandbox feature and has not been extensively adopted in production environments. This status should be considered when evaluating its suitability for critical applications.
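
As a rough sketch of what changes for the application in this mode: instead of assuming a sidecar on localhost, the app is pointed at the shared Dapr instance through an endpoint environment variable. The service address and the "statestore" component name below are assumptions and depend on how the DaemonSet or Deployment is exposed and configured.

```python
# Minimal sketch: an app using a shared Dapr instance instead of a localhost sidecar.
# DAPR_HTTP_ENDPOINT is assumed to point at the shared instance's Kubernetes Service,
# e.g. "http://dapr-shared.default.svc.cluster.local:3500" (hypothetical name).
import os
import requests

DAPR_ENDPOINT = os.environ.get("DAPR_HTTP_ENDPOINT", "http://localhost:3500")

def get_order(order_id: str) -> dict:
    # Read a value from an assumed state store component named "statestore".
    resp = requests.get(f"{DAPR_ENDPOINT}/v1.0/state/statestore/{order_id}", timeout=5)
    resp.raise_for_status()
    return resp.json()
```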

Short-lived Dapr

For short-lived Jobs in Kubernetes, Dapr can be configured to run as a sidecar container alongside the main application container, so that Dapr starts with the Job and shuts down when the Job's work completes. Throughout the Job's duration, Dapr provides services such as state management and messaging, scoped to the needs and lifecycle of the short-lived workload.
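
One practical detail in this setup is that, once its work is done, the Job typically asks the sidecar to shut down so the pod can complete. A minimal sketch, assuming the sidecar's default HTTP port and shutdown endpoint, and a hypothetical "statestore" component:

```python
# Minimal sketch of a short-lived Job: do the work, then ask the Dapr sidecar
# to shut down so the Kubernetes Job can complete.
import requests

DAPR_HTTP = "http://localhost:3500/v1.0"

def run_job() -> None:
    # Use Dapr as usual during the job, e.g. persist a result (store name assumed).
    requests.post(
        f"{DAPR_HTTP}/state/statestore",
        json=[{"key": "job-result", "value": {"status": "done"}}],
        timeout=5,
    ).raise_for_status()

    # Tell the sidecar the job is finished so it exits alongside the app container.
    requests.post(f"{DAPR_HTTP}/shutdown", timeout=5)

if __name__ == "__main__":
    run_job()
```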

Serverless Dapr

As the Dapr APIs (the so-called building blocks in Dapr terminology), and the components that support them have grown, so too have the ways Dapr can be deployed. In a VM setup, configuring and running Dapr sidecars and the control plane components is entirely manual. On Kubernetes, the Dapr control plane and Diagrid Conductor handle many operational aspects. This leads to an intriguing possibility: what if the management of both the Dapr control plane and data plane could be entirely offloaded, transforming Dapr into a purely API-driven service? This idea is at the heart of serverless Dapr.

Dapr deployment models evolution

The evolution of Dapr's deployment models reflects a significant shift. From the sidecar approach where Dapr is closely tied to each application instance's lifecycle, to a shared model where Dapr is distributed across multiple application instances, and finally, to a model where Dapr operates externally to Kubernetes, Dapr is increasingly becoming an API with offloaded operational concerns. Next, we'll explore two serverless Dapr offerings available today.

Azure Container Apps

Azure Container Apps (ACA) provides a serverless hosting solution that removes the complexities of managing VMs, Kubernetes, and other cloud infrastructure. Within ACA, applications can be deployed with Dapr integration enabled, so a Dapr sidecar runs alongside the app and is managed behind the scenes. This shifts the responsibility of operating both the application and its Dapr sidecar to Azure. While this Dapr-enabled service works well for deep integration with Azure's ecosystem, it is limited to ACA and cannot be consumed from other application runtimes or cloud infrastructure. With Diagrid Catalyst, Dapr evolves into a fully serverless API, accessible by any application from any cloud environment.

Diagrid Catalyst

Catalyst takes Dapr to a new level, offering its capabilities as serverless, Dapr-compatible APIs that are completely independent of the application's compute platform, programming language, runtime, and cloud environment. For those already familiar with Dapr, Catalyst can be described as a set of remote, serverless sidecars that are not bound to any infrastructure platform. This flexibility opens up Dapr's capabilities to a wide range of applications, from edge clients and functions to container services and monolithic systems running on VMs. Developers can easily access these serverless Dapr-compatible APIs; all that is required is a Catalyst project URL and an API token to configure the Dapr SDK (or any HTTP client). This setup lets developers get up and running with Dapr in no time. Catalyst takes on the responsibility of operating, maintaining, and scaling these APIs, along with any backing infrastructure needed for workflows, pub/sub, key/value storage, secret management, observability, and more.
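
As a rough sketch of what that looks like in practice, the snippet below points a plain HTTP client at a project URL and authenticates with an API token via Dapr's standard dapr-api-token header; the endpoint, token, and "pubsub" component name are placeholders supplied by the project configuration.

```python
# Minimal sketch: calling Dapr-compatible APIs hosted remotely from any environment.
# The endpoint and token are placeholders obtained from a Catalyst project.
import os
import requests

CATALYST_ENDPOINT = os.environ["DAPR_HTTP_ENDPOINT"]  # e.g. the project's HTTP URL
DAPR_API_TOKEN = os.environ["DAPR_API_TOKEN"]         # the project's API token

headers = {"dapr-api-token": DAPR_API_TOKEN}

# Publish an event exactly as you would against a local sidecar;
# the pub/sub component name ("pubsub") is assumed to exist in the project.
resp = requests.post(
    f"{CATALYST_ENDPOINT}/v1.0/publish/pubsub/orders",
    json={"orderId": "order-1"},
    headers=headers,
    timeout=5,
)
resp.raise_for_status()
```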

Diagrid Catalyst application development options

One of the unique aspects of Catalyst is the flexibility it offers in choosing the backing infrastructure behind these APIs. Users can either rely on Diagrid-managed infrastructure or connect their own infrastructure hosted with external cloud providers. At the same time, Catalyst addresses concerns about vendor lock-in by enabling the export of configurations into Dapr resource specifications. This allows a smooth transition between Catalyst and open-source Dapr, supporting deployments in private Kubernetes clusters or other platforms, and streamlines importing resources into Catalyst and exporting them back into Dapr where applicable.

Catalyst reimagines Dapr as a serverless API, independent of the application's compute platform, lifecycle, and resource demands. With its unique service-twin model, Catalyst offers application-centric security and resource isolation, along with comprehensive observability and management features per application. To experience these benefits firsthand, join the early-access program.

Summary

Dapr, originally designed as a sidecar for Kubernetes microservices, has significantly evolved to align with various architectural styles. Its transition from a model closely tied to each Kubernetes application pod to the shared Dapr model signifies a move towards greater lifecycle flexibility and resource efficiency. The introduction of serverless Dapr APIs through Catalyst marks a further evolution, allowing Dapr to extend beyond specific application platforms and cloud coupling. This advancement frees organizations from operational burdens and the risk of vendor lock-in.

Pros and cons of Dapr deployment options

Just as the Dapr APIs offer flexibility and portability across programming languages for developers, the range of Dapr deployment options gives operations teams the ability to tackle various operational challenges. This dual flexibility ensures consistent development practices while allowing operational responsibilities to be shifted, when desired, to cloud-based Dapr services. For developers, Dapr is no longer a sidecar but a set of polyglot patterns accessible via APIs, with deployment choices left to operations teams. Ultimately, Dapr is a framework that provides both versatile APIs and a variety of deployment options, addressing diverse technical requirements.

Do you want to learn more about Dapr? Join the Dapr Discord, where thousands of developers come together to ask questions and provide feedback. For organizations considering or already running Dapr in production, Diagrid Conductor provides unparalleled capabilities for operating Dapr efficiently and reliably. And finally, if you're not using Kubernetes but wish to leverage the powerful Dapr APIs, consider exploring Diagrid Catalyst, which offers the Dapr APIs in a serverless fashion.
