Welcome to the second part of the "Mastering Dapr Inner Development Loop" series. If you missed the first part, be sure to check it out here.
There is nothing more satisfying than seeing your development work come together: starting up an application and having it listen on the desired port proves that the application's dependencies, configuration, and code work together.
Running a Single Application
Once you have sufficient functionality and configuration within your application, the next step is to ensure it all works as expected. Running the application locally verifies that all modules, dependency libraries, frameworks, and configurations are properly integrated. During startup, the application may attempt to connect to external services such as databases or message brokers, validating system configurations, security settings, and network connectivity. To start your application, you therefore need these backing services running and Dapr configured correctly to work with your app and its backing infrastructure.
The Dapr runtime acts as a hermetic black box between your apps and the backing services. While previous steps involved using Dapr APIs through various SDKs, in this phase you will configure Dapr itself using flags and configuration files. Dapr configuration is done through environment variables and YAML files, which streamlines later deployment to Kubernetes. The fastest way to set up a local Dapr development environment is through the Dapr CLI, which will install Dapr locally, spin up a Redis instance in the background as a generic backing service, and place default configuration files that point to that Redis instance:
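dapr init
Dapr CLI command to initialize the local Dapr environment and its default components

With these prerequisites on your local machine, you can build your application (e.g., for the sample project we use, that is mvn clean install) and then run it, along with an associated Dapr process, in a single step using the Dapr CLI: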
dapr run --app-id publisher --app-port 5001 --resources-path ./common/local -- java -jar publisher/target/Publisher-0.0.1-SNAPSHOT.jar
Dapr CLI command to start up both the publisher app and the associated Dapr process
The Dapr CLI run command has flags to configure some of the Dapr options. The above command runs the Publisher app and starts a Dapr process configured with the application port number and the directory with the configuration files. What remains is coming up with the Dapr YAML files for component definitions, configurations, resiliency policies, and so on. The common way to do that is to start with an existing YAML file (such as the defaults that come with Dapr) and update it by following the Dapr documentation (such as this Redis PubSub reference) until the desired combination is discovered. A faster, less error-prone way is through Conductor Free, which offers a Component Builder wizard for configuring components. The Component Builder guides you through the steps of configuring your desired Dapr component type, whether it is for pub/sub, state, binding, or another type, and lets you choose the security profile, access scopes, and additional advanced configuration options. Use the in-browser web interface to arrive at the desired Component specification and download a copy for your application, then pass it to the Dapr process using the `--resources-path` flag as shown above. While Conductor is designed for developing and operating Daprized applications, the Component Builder can be used without a Kubernetes cluster to help you quickly create Dapr resource specifications.
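Whichever route you take, the result is a Component specification in YAML. For reference, the default pubsub.yaml that `dapr init` places under `~/.dapr/components` looks like this, pointing at the local Redis instance:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
spec:
  type: pubsub.redis
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    value: ""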
You can also use other tools to run your application alongside Dapr, available for development in different languages, including the Testcontainers Dapr module for Java (sketched below) and .NET Aspire. These tools help validate your application configuration during development and integration testing. For a comprehensive list of Dapr development tools, check out this post.
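As an illustration, here is a minimal sketch of an integration test using the Testcontainers Dapr module for Java. It assumes the io.dapr:testcontainers-dapr artifact and JUnit 5; the image tag and exact method names may differ between releases:

import io.dapr.testcontainers.DaprContainer;
import org.junit.jupiter.api.Test;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

@Testcontainers
class PublisherIntegrationTest {

    // Start a Dapr sidecar container wired to the app under test.
    // The image tag is an assumption; pick the version matching your runtime.
    @Container
    private static final DaprContainer DAPR = new DaprContainer("daprio/daprd:1.13.2")
            .withAppName("publisher")
            .withAppPort(5001);

    @Test
    void sidecarStarts() {
        // Hand the sidecar's HTTP endpoint to your Dapr client or SDK configuration
        System.setProperty("dapr.http.endpoint", DAPR.getHttpEndpoint());
    }
}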
Running Multiple Applications Simultaneously
Sometimes running a single application might not be sufficient for the validation you want to perform. You may want to run multiple applications, such as publisher and subscriber apps interacting via a message broker, or two applications communicating synchronously through service invocation. You could follow the example above and run each application with its Dapr process in a separate terminal, but that quickly gets out of hand with multiple directory paths and port conflicts. The Dapr CLI can run multiple applications too: the command `dapr run -f dapr.yaml` starts them simultaneously, each with its own Dapr process, based on the configuration in the YAML file. This approach eliminates the need for Docker Compose, providing a streamlined way to start multiple applications with a single command. For example, a dapr.yaml file to run the publisher and subscriber Java applications:
version: 1
common:
  resourcesPath: ./common/local
  env:
    DEBUG: true
apps:
  - appID: publisher
    appDirPath: ./publisher
    appPort: 5001
    command: ["java", "-jar", "target/Publisher-0.0.1-SNAPSHOT.jar"]
  - appID: subscriber
    appDirPath: ./subscriber
    appPort: 5002
    command: ["java", "-jar", "target/Subscriber-0.0.1-SNAPSHOT.jar"]
This setup ensures that both applications start with their respective Dapr processes, making it easy to validate interactions through pub/sub messaging or service invocation and streamlining your development workflow.
Validating Application Interactions
Once your applications are running, the next essential step is to validate their interactions. This involves manually testing the endpoints and verifying that the components communicate as expected. You can achieve this by using tools like curl, Postman, or a REST client within your IDE to interact with your applications. Additionally, Dapr APIs can be accessed via the Dapr CLI or the VS Code Dapr plugin to save state, publish messages, and perform service calls. For example, to validate that a consumer application is connected to a message broker and processes incoming messages as expected, you can publish a message to the broker with the following command:
dapr publish --publish-app-id publisher --pubsub pubsub --topic orders --data '{"orderId": "123"}'
This command acts as the publisher application, publishing a message on the `orders` topic of the `pubsub` broker. A successful log message in the subscriber app indicates that its messaging setup is correctly configured and that it can process incoming messages.
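The Dapr CLI can exercise synchronous service invocation the same way. A sketch, assuming the target app exposes a method named neworder (the method name here is hypothetical):

dapr invoke --app-id publisher --method neworder --data '{"orderId": "123"}'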
Integration testing focuses on the interaction between microservices, or between a microservice and external services such as databases or messaging systems, and ensures all components work harmoniously together. An example is testing that a producer service correctly publishes messages that a consumer service then processes. To trigger the end-to-end flow of this application, we can manually make a call to the publisher application so it publishes a message, using the following command:
curl -X POST http://localhost:5001/pubsub/orders \
-H "Content-Type: application/json" \
-d @order.json
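Here, `order.json` contains the request payload. A minimal example, assuming the same shape as the `dapr publish` payload above (the exact schema depends on your application):

{"orderId": "123"}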
This helps verify that the application endpoints are accessible and behaving correctly. By checking the logs of the Subscriber application, you also verify that it can connect to the message broker and consume and process the message.
In this phase, we are performing quick and dirty manual integration and contract testing to get a fast confirmation of how the distributed application is interacting. Successfully completing this step will get you the "It Runs on My Machine" milestone. It demonstrates that component tests are passing, dependencies and configurations are correct, and the application is functional in your local environment. Proper integration and contract testing must be included and automated in the CI/CD pipeline, but for now, we will shift our focus to validation on Kubernetes.
Now read part 3 of this series, where we will dive into deploying applications to a Kubernetes cluster.