
Configuration Hell? How BigConfig Tames the Modern Dev Environment

rama-jdbc

Setting up a local development environment today is rarely a trivial matter. The days of simply git clone and npm install are long gone. Modern architectures, particularly those embracing microservices, polyglot persistence, and cloud-native practices, have turned the humble setup process into a multi-layered nightmare.

If you’ve ever spent an afternoon debugging why your local database port clashes with your integration environment, or wrestled with five different tools requiring three different credential formats, you know the pain.

Let’s dive into a concrete example — a complex but typical setup — and see how BigConfig transforms this chaos into an automated, zero-cost development experience.

The Configuration Challenge: A Deep Dive into Rama JDBC


In our case, we’re configuring the rama-jdbc development environment. Rama JDBC implements the robust Outbox Pattern using SQL triggers. This already introduces a host of configuration demands:

  1. Cloud-Native Credentials: Connecting to our staging AWS RDS instance to introspect and replicate the target schema requires fetching credentials securely from AWS Secrets Manager.
  2. Polyglot Tooling: Our system is a mix of languages and utilities:
    • Backend services are in Go, using sql-migrate for database migrations.
    • Automation and utility scripts are implemented using Babashka and the Just task runner.
  3. Local Database Mirroring: To ensure parity, we need a local SQL database. We use Process Compose to manage its lifecycle, which needs distinct configurations for dev and test.
  4. SQL UI Pain Points: For schema inspection, we use the DuckDB UI. Critically, DuckDB only supports connection via an initialization SQL file, not environment variables.
  5. Conflicting Formats: We have tools that require a simple host and port, while others demand a full JDBC URL.

The configuration complexity peaks when dealing with TCP ports. To manage multiple concurrent development contexts (e.g., a Dev feature branch and a Test branch), we use Git Worktrees.

This means ports cannot be the default ones; they must be dynamic but deterministic, dependent on:

  • The service name.
  • The path of the current working directory.

In total, this setup requires managing five dynamic ports:

  • SQL Server (Dev & Test instances)
  • Process Compose (Dev & Test instances)
  • DuckDB UI

Manually managing and passing these five dynamic ports and all the various configuration formats (JDBC URL, host/port, SQL init file) across Go, Babashka, Just, DuckDB, and the main Clojure application is a recipe for errors and lost development time.

The BigConfig Solution: Configuration as Code


This is where BigConfig steps in. By leveraging a single, centralized configuration system written in Clojure, we can define the logic for all these variables, ensuring consistency across every tool.

BigConfig allows us to define a master configuration where:

  1. Port Calculation Logic is Centralized: A single Clojure function can take the service name and the current working directory’s path as input and deterministically calculate the correct TCP port for the current worktree (see the sketch after this list).
  2. Format Transformation is Automated: The same central configuration can:
    • Calculate the host and port.
    • Automatically assemble the JDBC URL for Clojure tools.
    • Generate the specific SQL initialization file required by DuckDB.
  3. Secure Credentials Injection: Logic to securely fetch credentials from AWS Secrets Manager is handled once, and the resulting secrets are injected into the necessary configurations for both the Go migrations and the local server.
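
To make the port and format logic concrete, here is a minimal Clojure sketch. It is an illustration, not BigConfig’s actual implementation: the function names, the port range, and the DuckDB ATTACH string are assumptions.

(defn service-port
  "Derive a stable TCP port from the service name and the worktree path.
   Clojure's hash is deterministic for strings, so the same inputs always
   yield the same port, while another worktree yields a different one."
  [service-name path]
  (+ 20000 (mod (hash [service-name path]) 10000)))

(defn jdbc-url
  "Assemble the JDBC URL expected by the Clojure tools."
  [host port db]
  (format "jdbc:postgresql://%s:%d/%s" host port db))

;; DuckDB only accepts connection settings via an init SQL file, so we
;; generate one (ATTACH syntax from DuckDB's postgres extension):
(let [port (service-port "postgres-dev" (System/getProperty "user.dir"))]
  (spit "duckdb-init.sql"
        (format "ATTACH 'host=localhost port=%d dbname=dev' AS pg (TYPE postgres);"
                port)))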

The biggest win is the developer experience. The entire end-to-end setup, which was a “nightmare” of manual steps, is now reduced to a single, idempotent command:

bb build

The Babashka task runner uses BigConfig to coordinate everything: calculating ports, generating configuration files, and ensuring full parity between the local development container and the GitHub CI Runner. The “build” step becomes a zero-cost operation that simply ensures your environment is perfectly configured and ready to go.

Ultimately, development productivity is measured by the speed of the feedback loop. Even with a complex, port-heavy environment, the Clojure REPL workflow remains lightning-fast.

Most of the time, development is spent hot-reloading code:

  • Change code → Evaluate expression → Inspect result. This takes milliseconds.

Crucially, when a database change is necessary, BigConfig’s automation keeps the environment responsive:

  • Need a new migration → Rebuild the database. This, too, takes milliseconds, thanks to the deterministic and automated environment setup.

BigConfig doesn’t just manage complexity; it preserves the core advantage of Clojure — the immediate, fluid feedback loop of the REPL — even when faced with the configuration hurdles of a complex, modern, polyglot application. It allows you to focus on the code, not the configuration.

You can verify the developer experience described here. Just clone rama-jdbc inside big-container.

docker run -it --rm ghcr.io/amiorin/big-container
git clone https://github.com/amiorin/rama-jdbc workspaces/rama-jdbc
cd workspaces/rama-jdbc
# The first run takes minutes, and devenv progress is not shown in the terminal.
bb build
# Subsequent runs take milliseconds.
time bb build
# Start Postgres and run the migrations for the dev environment.
just pc-dev &
# Run the tests for the test environment.
clojure -M:shared:test

Would you like to have a follow-up on this topic? What are your thoughts? I’d love to hear your experiences.

A New Approach to Dotfiles Management with BigConfig

dotfiles

Managing dotfiles—the configuration files that personalize your user environment—is a crucial part of a developer’s workflow. The go-to tools for this have long been Chezmoi and Stow. While Stow is celebrated for its simplicity, Chezmoi offers powerful templating and secret management. However, what if you need the best of both worlds? This is where BigConfig comes in, offering a new way to manage your configurations by combining the simplicity of a declarative approach with the power of code.
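
For example, here is a templated .gitconfig: the shared settings stay plain, while a GitHub token is injected only when rendering the macos profile: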

[user]
email = 32617+amiorin@users.noreply.github.com
name = Alberto Miorin
[pull]
ff = only
rebase = true
[init]
defaultBranch = main
{%- if profile = "macos" %}
[url "https://{{ "GITHUB_TOKEN" | lookup-env }}:x-oauth-basic@github.com/"]
insteadOf = https://github.com/
{%- endif %}

The developer experience with BigConfig is centered around Babashka tasks, making it feel like a standard Clojure project. You get a clear set of commands:

bb install -p [macos|ubuntu]
bb diff -p [macos|ubuntu]
bb render -p [macos|ubuntu|all]

The diff command is particularly useful, allowing you to compare your current dotfiles against the rendered versions before installing them. This gives you a clear, human-readable way to see exactly what’s about to change.

By keeping the code and the dotfiles in the same repository, BigConfig provides a cohesive and powerful developer experience.

Like many developers, I use both macOS and Ubuntu, and the dotfiles for these environments are similar but not identical. Some configurations only exist on one platform, while others need slight tweaks. Stow, while easy for symlinking, lacks the flexibility for this kind of conditional logic. Chezmoi, on the other hand, embeds its logic within the filename, which can become unwieldy and less transparent as your needs grow.

This complexity led me to seek a solution that treats dotfile management as an automation problem—one that’s best solved with code.

BigConfig takes a different approach. Instead of embedding logic in filenames, it uses a data structure to define the rendering pipeline for your dotfiles. This means your configuration files are kept clean, and the logic for how they are applied is externalized in a dedicated file.

The core of this system is a two-stage rendering process:

  • Stage 1: Common dotfiles are merged with platform-specific ones (e.g., common and macos are merged into resources/stage-2/macos). At this stage, secrets and tokens are not yet resolved.
  • Stage 2: The merged files from Stage 1 are rendered, and all secrets and environment variables are resolved, creating the final configuration in a dist/ directory. This is the directory used for installation and comparison.

This structure allows you to maintain a clean separation of concerns: your source files (resources/stage-1) are never committed with secrets, and the final rendered output (dist/) is never committed at all. Your private information is stored securely in an .envrc.private file, which is kept out of your Git repository.

BigConfig’s rendering logic is defined in a Clojure data structure. Here’s a snippet that shows how this two-step process is defined:

[{:template   "stage-1"
  :target-dir (format "resources/stage-2/%s" profile)
  :overwrite  :delete
  :transform  [["common" :raw]
               ["{{ profile }}" :raw]]}
 {:template   "stage-2"
  :target-dir dir
  :overwrite  :delete
  :transform  [["{{ profile }}"]]}]

This data structure is a clear “recipe” for how your dotfiles should be built. It tells BigConfig:

  • :template: The source directory for the files.
  • :target-dir: The destination directory.
  • :overwrite :delete: Ensure the target is clean before rendering.
  • :transform: The core logic for copying and rendering files. For example, ["common" :raw] copies the contents of the common directory without treating them as templates, while ["{{ profile }}"] copies the contents of the macos or ubuntu directory and treats the files within as templates, resolving variables and secrets.

This declarative approach makes the process transparent and easy to debug.

If you want to adopt BigConfig for your dotfiles, just follow this tutorial or have a look at mine.

It might seem excessive to learn Clojure and a tool like BigConfig just to manage dotfiles, but the real value comes from applying that investment across a broader range of automation tasks. Clojure’s strengths—its functional nature, immutability, and powerful data manipulation capabilities—make it a strong choice for configuration-as-code. By using a single, cohesive language, you can unify your automation stack and create a more maintainable, expressive system.

Clojure’s core design principles make it an excellent fit for complex configuration tasks. Unlike static languages or rigid data formats like YAML or JSON, Clojure’s Lisp-based syntax treats code as data. This allows you to programmatically generate and manipulate configuration files with functions, macros, and conditional logic.

For example, instead of manually copying and pasting large YAML blocks across multiple environments, you could define a single function that takes environment-specific parameters (e.g., development, staging, production) and generates the correct configuration for each. This reduces redundancy and the risk of manual errors.
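
As a hedged sketch of that idea (the configuration shape is invented, and the cheshire library is assumed to be available), a single function can merge environment-specific overrides into a base map and emit JSON, which is also valid YAML:

(require '[cheshire.core :as json]) ;; assumes cheshire on the classpath

(def base-config
  {:replicas 1 :log-level "debug"})

(def overrides ;; hypothetical per-environment parameters
  {:development {}
   :staging     {:replicas 2 :log-level "info"}
   :production  {:replicas 6 :log-level "warn"}})

(defn render-config
  "Merge the environment's overrides into the base configuration."
  [env]
  (json/generate-string (merge base-config (overrides env)) {:pretty true}))

;; One function, three environments, no copy-pasted YAML blocks.
(spit "config.production.yml" (render-config :production))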

While managing dotfiles is a great starting point, the true return on investment for learning Clojure and a tool like BigConfig lies in its applicability to other areas of the software development lifecycle.

A unified configuration system can automate the setup of a new developer’s machine. Instead of relying on a multi-step, error-prone manual process, you can use Clojure to define a single script (sketched after the list below) that:

  • Installs necessary dependencies (e.g., Homebrew packages, language runtimes).
  • Clones required repositories.
  • Configures local databases, environment variables, and services.
  • Sets up the development environment, including IDE settings and editor configurations.
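
A hedged Babashka sketch of such a bootstrap script; the package names and repository paths are illustrative only:

(require '[babashka.process :refer [shell]]
         '[babashka.fs :as fs])

(defn brew-install [pkgs]
  ;; shell throws on a non-zero exit, so a failed install aborts the script
  (apply shell "brew" "install" pkgs))

(defn clone [repo dir]
  (when-not (fs/exists? dir)
    (shell "git" "clone" repo dir)))

(brew-install ["git" "just" "babashka" "direnv"])
(clone "https://github.com/amiorin/rama-jdbc" "workspaces/rama-jdbc")
(shell {:dir "workspaces/rama-jdbc"} "bb" "build")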

GitHub Actions workflows are defined in YAML, which can become unwieldy and difficult to manage as they grow in complexity. By using a tool that integrates Clojure, you can dynamically generate these workflow files. This allows you to:

  • Use a single source of truth for your build, test, and deploy steps.
  • Parameterize workflows to run across different platforms or branches.
  • Create reusable, composable functions to define common CI/CD patterns, making your pipelines more DRY (Don’t Repeat Yourself).
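
A sketch of that idea, assuming clj-commons/clj-yaml for serialization; the job names and steps are hypothetical:

(require '[clj-yaml.core :as yaml]) ;; assumes clj-commons/clj-yaml

(defn test-job
  "One reusable job definition, parameterized by platform."
  [os]
  {:runs-on os
   :steps [{:uses "actions/checkout@v4"}
           {:run "bb build"}
           {:run "clojure -M:shared:test"}]})

(def workflow
  {:name "CI"
   :on   {:push {:branches ["main"]}}
   :jobs (into {} (for [os ["ubuntu-latest" "macos-latest"]]
                    [(str "test-" os) (test-job os)]))})

;; A single source of truth, rendered to the YAML GitHub expects.
(spit ".github/workflows/ci.yml" (yaml/generate-string workflow))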

Infrastructure as Code (IaC) with Terraform and Ansible


The modern approach to managing cloud infrastructure is through code, but traditional IaC tools like Terraform and Ansible have their own configuration languages (HCL and YAML, respectively). While powerful, these languages can lack the full expressiveness of a general-purpose programming language. Using Clojure, you can:

  • Generate Terraform HCL files: Create complex Terraform configurations for large-scale cloud deployments, such as a Kubernetes cluster, by leveraging Clojure’s data manipulation capabilities.
  • Create dynamic Ansible playbooks: Instead of a static playbook, you can write Clojure code that generates an Ansible playbook based on dynamic inputs or the state of your infrastructure. This is particularly useful for provisioning environments that vary slightly from one another.
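
To make the Terraform bullet concrete: rather than emitting HCL text, a simpler route is Terraform’s native JSON syntax (*.tf.json). A hedged sketch, with hypothetical resource names and AMI id:

(require '[cheshire.core :as json])

(defn instance
  "One aws_instance resource block as plain Clojure data."
  [name size]
  {name {:ami           "ami-12345678" ;; hypothetical AMI id
         :instance_type size
         :tags          {:Name name}}})

(def config
  {:resource
   {:aws_instance
    (apply merge (map #(instance % "t3.micro") ["web-1" "web-2" "web-3"]))}})

;; Terraform reads *.tf.json files natively, alongside *.tf.
(spit "main.tf.json" (json/generate-string config {:pretty true}))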

Kubernetes configuration is notoriously verbose, with extensive YAML manifests for deployments, services, and ingresses. By using Clojure as a configuration generator, you can simplify this process by:

  • Templating YAML manifests: Define a base template in Clojure and generate multiple, consistent Kubernetes manifests from it.
  • Automating cluster deployments: Use a single script to deploy an entire application stack, from pods and services to persistent volumes and secrets.
  • Enforcing best practices: Embed validation and sanity checks within your Clojure code to ensure all generated manifests adhere to your organization’s standards before deployment.
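
A hedged sketch of the templating idea, with clj-yaml assumed and hypothetical image names:

(require '[clj-yaml.core :as yaml])

(defn deployment
  "A base Deployment template; every generated manifest stays consistent."
  [name image replicas]
  {:apiVersion "apps/v1"
   :kind "Deployment"
   :metadata {:name name}
   :spec {:replicas replicas
          :selector {:matchLabels {:app name}}
          :template {:metadata {:labels {:app name}}
                     :spec {:containers [{:name name :image image}]}}})

(doseq [[name image n] [["api"    "ghcr.io/acme/api:1.0"    3]
                        ["worker" "ghcr.io/acme/worker:1.0" 2]]]
  ;; validation or sanity checks could run here before writing
  (spit (str name "-deployment.yml")
        (yaml/generate-string (deployment name image n))))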

Stow and Chezmoi are great, but for those with complex, multi-platform needs, they can fall short. BigConfig doesn’t try to hide the underlying logic; it embraces it. It recognizes that managing dotfiles is an automation task that benefits from explicit, readable code. Just as Astro, a modern static site generator, has gained popularity over tools like Hugo by being more transparent and flexible, BigConfig offers a similar paradigm shift for dotfile management.

Ultimately, your dotfiles are part of your automation workflow. Shouldn’t your tool for managing them be as powerful and flexible as the rest of your toolchain? BigConfig says yes.

Would you like to have a follow-up on this topic? What are your thoughts? I’d love to hear your experiences.

Why Ansible Still Rules for Your Dev Environment

ansible

Back in the day, before Red Hat acquired Ansible, I was using it to provision Cloudera clusters in massive data centers. And let me tell you, its killer feature wasn’t some complex, enterprise-grade capability. It was pure simplicity.

You just needed SSH, and you were ready to go. The feedback loop was in seconds—a refreshing change from the slow, manual processes we were used to. It was a DevOps dream.

Then came Docker. For many use cases, containers were the new king. They offered a more lightweight, portable solution for shipping applications. And for a while, it seemed like Ansible might get relegated to the history books.

But not so fast. While Docker took over for application deployment, Ansible found its true calling: provisioning the remote development environment.

Remote development has gone mainstream. Whether you’re using a GUI or a terminal, working on a remote machine makes you more productive. It’s not just a trend; it’s a fundamental shift.

Think about it:

  • No more “it works on my computer!” Everyone’s environment is the same. No more chasing down dependency hell.
  • No more wasted time. Your environment is always up to date. The days of git pull followed by hours of fixing a broken dev setup are gone.
  • More resources, less cost. Remote machines can be shared, giving you access to powerful hardware for a fraction of the price.
  • Easy authentication. With SSH agent forwarding, pulling and pushing changes to your code repositories is seamless.

It’s a developer’s paradise. But there’s a catch…

Ansible is fantastic for this, but its configuration language—YAML—has a few pain points:

  • It’s another language to master. You have to learn the specific syntax and structure, which can feel like a steep climb on top of everything else you already know.
  • It’s not flexible enough. Manually curating dozens of YAML files—like packages.yml, repos.yml, and ssh-config.yml—can be tedious and error-prone. The more complex your environment, the messier it gets.

Wouldn’t it be great if you could just write code to generate your configurations?

Introducing BigConfig: The Code-First Approach


This is where a tool like BigConfig comes in. Imagine a world where you write a simple script to generate your Ansible inventory and playbook files. No more manual YAML curation. You can leverage the power of a real programming language to create dynamic configurations.

Here’s the secret: JSON is valid YAML.

This simple fact allows us to generate JSON files with a .yml extension. BigConfig can take your code and spit out a perfect, machine-readable Ansible configuration.
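
For example, here is a hedged sketch (hypothetical package list, cheshire assumed) that generates one of those hand-curated YAML files from code:

(require '[cheshire.core :as json])

(def packages ;; hypothetical package list
  {:common ["git" "tmux" "ripgrep"]
   :dev    ["babashka" "clojure" "just"]})

;; JSON is valid YAML, so Ansible happily reads this .yml file.
(spit "group_vars/all/packages.yml"
      (json/generate-string packages {:pretty true}))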

The next logical step? Making the provisioning of your remote development environment an API.

You could simply send a request to a service, and it would provision a new, perfect environment for a new team member in minutes. BigConfig can handle this too, turning a manual, file-based process into a programmable, scalable service.

While some tools come and go, others adapt and find their niche. For provisioning remote development environments, Ansible remains a powerhouse. It just needs a little help from the next generation of tools to unleash its full potential.

This project is specific to my setup, but it can be forked and adapted to your needs. I use an iMac and two mini PCs (soyo and firebat) to develop in the terminal using SSH and Tailscale, so that I can also code when I’m on the road with my MacBook Pro. The Ansible project provisions multiple users on both mini PCs. I use Nix and devenv.

https://github.com/amiorin/dotfiles-v3/tree/ansible

Any BigConfig module is also a Clojure artifact, and here you can see that I can use the same BigConfig module with a completely different configuration:

  • bb.edn requires the BigConfig module.
  • bb/config.clj provides a different configuration map to the BigConfig module.

https://github.com/amiorin/dotfiles-v3/tree/babashka

Imagine that you have to provision hundreds of users per machine. Done one at a time, it would take too long; but if every inventory host becomes the combination of user + host, then Ansible will provision these users in parallel because they look like different hosts (as sketched below).
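
A hedged sketch of that trick, with hypothetical users and cheshire assumed: each user@host pair becomes its own inventory host.

(require '[cheshire.core :as json])

(def hosts ["soyo" "firebat"])
(def users ["alice" "bob" "carol"]) ;; hypothetical users

(def inventory
  {:all {:hosts (into {}
                      (for [h hosts u users]
                        ;; one pseudo-host per user+host pair, so Ansible
                        ;; provisions the users in parallel
                        [(str u "@" h) {:ansible_host h
                                        :ansible_user u}]))}})

(spit "inventory.yml" (json/generate-string inventory {:pretty true}))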


Even in an age dominated by containers and cloud-native solutions, Ansible remains a crucial tool, not for what it once was, but for what it has become. Its core strength—its simplicity—makes it the ideal choice for a new, pervasive use case: provisioning remote development environments.

While Docker excels at application deployment, Ansible found its niche in ensuring developers have consistent, powerful, and reproducible workspaces. This solves the persistent problem of “it works on my machine” and significantly reduces time spent on setup and maintenance. It’s a fundamental shift in how we approach development, making it more efficient and collaborative.

However, Ansible’s YAML configuration language can be cumbersome. The need to manually manage multiple files becomes a bottleneck as environments grow in complexity. This is where a code-first approach, like the one offered by BigConfig, provides a powerful solution. By leveraging the fact that JSON is valid YAML, you can use a real programming language to dynamically generate configurations. This not only makes the process more flexible and less error-prone but also opens the door to treating environment provisioning as an API—a scalable, programmable service that can instantly onboard new team members.

In short, Ansible’s journey is a testament to its adaptability. It has evolved from a tool for provisioning data centers to the cornerstone of modern remote development. Paired with a tool like BigConfig, its simple, powerful core is unlocked, proving that some of the best tools aren’t those that are replaced, but those that find a new purpose.

Would you like to have a follow-up on this topic? What are your thoughts? I’d love to hear your experiences.

Reimplementing the AWS EKS API with Clojure using BigConfig, Rama, and Pedestal

K8s

The world of cloud infrastructure often involves interacting with complex APIs. While services like AWS EKS provide robust management for Kubernetes clusters, there might be scenarios where you need a more tailored or localized control plane. This article will guide you through reimplementing the AWS EKS API using a powerful Clojure stack: Pedestal for the API, BigConfig to wrap Terraform and Ansible in a workflow, and Rama for state and jobs.

Before we dive into the how, let’s consider the why. K8s, Spark, ClickHouse, Postgres, and so on are all good candidates for an in-house software-as-a-service solution. Reimplementing a cloud API might seem counterintuitive, but it can be beneficial for:

  • Avoiding vendor lock-in: This can be relevant for some companies.
  • Multi-cloud strategy: You need an EKS-like solution in multiple cloud providers and you need a generic API.
  • SaaS: You maintain open-source software, and the SaaS is your source of revenue.
  • Metal: You cannot use the cloud but you want to provide the same developer experience in your company.
  • Integration costs: Buying EKS and integrating it with the rest of your infrastructure is not feasible or very expensive. Building an EKS-like solution is cheaper.

Disclaimer: This is a simplified blueprint for educational and experimental purposes. It will not cover the full breadth and complexity of the actual AWS EKS API.

Here’s a quick overview of the tools we’ll be using:

  • BigConfig: A workflow and a template engine that enables us to have a zero-cost build step before running any DevOps tool like Terraform or Ansible.
  • Rama: A distributed stream processing and analytics engine that can also function as a durable, highly concurrent data store. We’ll use Rama to manage our cluster definitions and state.
  • Pedestal: A comprehensive web framework for Clojure that emphasizes data-driven development and offers excellent support for both synchronous and asynchronous request handling. It will serve as our API gateway.

Let’s imagine the core entities we want to manage: EKS Clusters. For simplicity, we’ll focus on creating and describing clusters.
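
Here is a hedged sketch of what the Pedestal surface for those two operations could look like. The handler names are assumptions, and an atom stands in for the Rama module:

(require '[io.pedestal.http :as http]
         '[io.pedestal.http.route :as route]
         '[io.pedestal.http.body-params :as body-params])

(defonce clusters (atom {})) ;; stand-in for the Rama module

(defn create-cluster [request]
  (let [name (get-in request [:json-params :name])]
    (swap! clusters assoc name {:name name :status "CREATING"})
    {:status 201 :body (str "creating " name)}))

(defn describe-cluster [request]
  (if-let [cluster (@clusters (get-in request [:path-params :name]))]
    {:status 200 :body (pr-str cluster)}
    {:status 404 :body "not found"}))

(def routes
  (route/expand-routes
   #{["/clusters" :post [(body-params/body-params) create-cluster]
      :route-name :create-cluster]
     ["/clusters/:name" :get describe-cluster
      :route-name :describe-cluster]}))

(def service
  (http/create-server {::http/routes routes
                       ::http/type   :jetty
                       ::http/port   8890}))

;; (http/start service)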

  • Reuse GitOps: Building a single K8s cluster with GitOps should be reusable. Replacing GitOps with an API should not require reimplementing everything from scratch; the solution should contain a more generalized version of the single-cluster GitOps setup.
  • Declarative when possible: Terraform should be used to create resources instead of the AWS APIs whenever possible.
[Diagram: Pedestal API → Rama Module → BigConfig Module]
  • Pedestal API: to create and describe clusters.
  • Rama Module: to store the desired state, and invoke the BigConfig module with the desired state.
  • BigConfig Module: this is where the heavy lifting happens:
    • Workflow: achieving the desired state will require multiple steps.
    • Lock: to be sure that changes are ACID.
    • Build: to generate the configuration files for Terraform based on the desired state.
    • Apply: to run terraform apply programmatically.
  • Modularity: Every deliverable can be developed in parallel by adopting contracts.
  • Uniformity: BigConfig, Rama, and Pedestal deliverables are all written in Clojure.
  • Declarative: Creating an EC2 instance programmatically can be done faster with Terraform, and we don’t need to worry about lifecycle management.
  • Reusability: The GitOps code can be reused. The code to provision one K8s cluster with GitOps or multiple K8s clusters with an API doesn’t require changing from Terraform to the AWS SDK; the API is just a virtual admin. The GitOps version developed interactively by an admin can be packaged as a Clojure dependency and reused inside Rama. This is a killer feature of BigConfig.

I’m working on the code right now. Stay tuned, I will update the blog post as soon as I have the first version.

This is a basic example, but you can extend it significantly:

  • More EKS Features: Implement more aspects of the EKS API, such as node groups, Fargate profiles, or update operations.
  • Authentication and Authorization: Integrate with a robust authentication system to secure your API.
  • Error Handling: Implement more sophisticated error handling and meaningful error messages by adopting OpenTelemetry.

By combining BigConfig, Rama, and Pedestal, we’ve built a foundation for an in-house EKS-like API in Clojure. This approach provides a high degree of control, flexibility, and the ability to tailor your infrastructure management precisely to your needs. This project serves as an excellent starting point for exploring the potential of building custom cloud-native services with Clojure.

Would you like to have a follow-up on this topic? What are your thoughts? I’d love to hear your experiences.

The killer feature of BigConfig

Killer Feature

For anyone working with Infrastructure as Code (IaC), managing configurations and deployments efficiently is key. Engineers are constantly seeking ways to enhance their workflows. Today, we’re diving into a powerful combination: OpenTofu and BigConfig, highlighting a killer feature that makes your build step practically invisible!

IaC tools like OpenTofu (an open-source alternative to Terraform) empower teams to define, provision, and manage infrastructure through code. However, as projects scale, especially in complex environments, the build and deployment process can become a multi-step chore. This often involves:

  • Git checks: Ensuring your working directory is clean and up-to-date.
  • Lock acquisition: Making sure that changes are applied in order and incrementally.
  • Execution: Iterating on the infrastructure code until it works.
  • Git pushes: Committing changes back to your repository if the change is successful.
  • Environment-specific deployments: Handling different configurations for different environments like staging and production.

This manual orchestration can be time-consuming and prone to errors.

Enter BigConfig: Simplifying Complex Workflows


BigConfig is a fantastic tool designed to encapsulate and automate these complex command sequences. It allows you to define a series of steps and execute them with a single command. Think of it as a smart wrapper for your common IaC operations. By centralizing these tasks, BigConfig significantly reduces cognitive load and improves consistency.

The Killer Feature: An Invisible Build Step with a Shell Alias


Here’s where the magic truly happens! By combining OpenTofu, BigConfig, and a simple shell alias, we can create an invisible build step. Imagine replacing a series of manual operations with just one, familiar invocation.

Consider this powerful shell alias:

alias tofu="bb render git-check lock exec git-push unlock-any -- alpha prod tofu"

Let’s break down what this alias does:

  1. alias tofu="...": This redefines your tofu command for the session. Now, whenever you type tofu, it executes BigConfig instead of tofu. Each step of the workflow runs only if the previous step succeeded.
  2. render: This is the BigConfig step that initiates the build process and keeps the configuration DRY, as Atmos does. If it fails, there is no reason to proceed; that’s why this step is always present and always first.
  3. git-check: render must not leave the Git working directory dirty; git-check ensures your working directory is clean and up to date.
  4. lock: It then acquires a lock for the module alpha and the profile prod, preventing concurrent changes from other developers.
  5. exec: This is the core execution step, where BigConfig runs your OpenTofu commands. If exec fails, the workflow stops and the remaining steps are not executed; in particular, the lock is not released and the changes are not pushed, so the developer can fix or revert the change (see the sketch after this list).
  6. git-push: This automatically pushes the just-applied change to your Git repository. You should always be one commit ahead of origin when you make changes.
  7. unlock-any: This ensures that any locks are released; any means the owner is ignored. This step can be used alone if another developer forgets to release the lock.
  8. -- alpha prod tofu: -- is the separator between the workflow definition and the module, profile, and shell command, in this case tofu.
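
To make the failure semantics concrete, here is a hypothetical Babashka sketch of the sequencing; it is not BigConfig’s actual implementation:

(require '[babashka.process :refer [shell]])

;; shell throws on a non-zero exit, so the doseq stops at the first
;; failing step: the lock stays held and nothing is pushed.
(def workflow
  [["render"     #(shell "bb" "render")]
   ["git-check"  #(shell "git" "diff" "--quiet")]
   ["lock"       #(shell "bb" "lock" "alpha" "prod")]
   ["exec"       #(shell "tofu" "apply")]
   ["git-push"   #(shell "git" "push")]
   ["unlock-any" #(shell "bb" "unlock-any" "alpha" "prod")]])

(doseq [[step-name step-fn] workflow]
  (println ">" step-name)
  (step-fn))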

A simple alias now extends the capabilities of OpenTofu: it gains the capabilities of Atlantis and Atmos. And BigConfig is not specific to OpenTofu; it can also be used with Ansible, K8s, and your dotfiles.

Before: OpenTofu was not enough; other tools like Atlantis and Atmos were required.

After: Any DevOps tool can be augmented to have the capabilities of Atlantis and Atmos.

The sequential workflow, with all its checks, locks, and pushes, becomes completely invisible to the user. You interact with OpenTofu as you normally would, but all the surrounding boilerplate is handled automatically by BigConfig.

  • Increased Productivity: Engineers can focus on writing IaC, not on the deployment mechanics.
  • Reduced Errors: Automated checks and consistent execution minimize human error.
  • Standardized Deployments: Ensures that every deployment follows the same robust process.
  • Faster Onboarding: New team members can quickly get up to speed without memorizing complex sequences.

If you’re using OpenTofu and looking to streamline your IaC workflows, exploring BigConfig and implementing a similar shell alias is highly recommended. It’s a small change that yields massive benefits, transforming your change process from a visible chore into an invisible, seamless part of your development process. Happy infrastructure building! 🚀

Are you still using Atlantis or Atmos? What are your thoughts? I’d love to hear your experiences.