
Configuration Hell? How BigConfig Tames the Modern Dev Environment


Setting up a local development environment today is rarely trivial. The days of a simple git clone and npm install are long gone. Modern architectures, particularly those embracing microservices, polyglot persistence, and cloud-native practices, have turned the humble setup process into a multi-layered nightmare.

If you’ve ever spent an afternoon debugging why your local database port clashes with your integration environment, or wrestled with five different tools requiring three different credential formats, you know the pain.

Let’s dive into a concrete example — a complex but typical setup — and see how BigConfig transforms this chaos into an automated, zero-cost development experience.

The Configuration Challenge: A Deep Dive into Rama JDBC


In our case, we’re configuring the rama-jdbc development environment. Rama JDBC implements the robust Outbox Pattern using SQL triggers. This already introduces a host of configuration demands:

  1. Cloud-Native Credentials: Connecting to our staging AWS RDS instance to introspect and replicate the target schema requires fetching credentials securely from AWS Secrets Manager (a minimal sketch of this step follows the list).
  2. Polyglot Tooling: Our system is a mix of languages and utilities:
    • Backend services are in Go, using sql-migrate for database migrations.
    • Automation and utility scripts are implemented using Babashka and the Just task runner.
  3. Local Database Mirroring: To ensure parity, we need a local SQL database. We use Process Compose to manage its lifecycle, which needs distinct configurations for dev and test.
  4. SQL UI Pain Points: For schema inspection, we use the DuckDB UI. Critically, DuckDB only supports connection via an initialization SQL file, not environment variables.
  5. Conflicting Formats: Some tools require a simple host and port, while others demand a full JDBC URL.
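
To give a flavour of the first point, here is a minimal Babashka sketch that fetches a JSON secret with the aws CLI. The secret id and the shape of the parsed result are placeholders, not BigConfig's actual implementation.

(require '[babashka.process :refer [shell]]
         '[cheshire.core :as json])

(defn fetch-rds-credentials
  "Reads a JSON secret from AWS Secrets Manager via the aws CLI."
  [secret-id]
  (-> (shell {:out :string}
             "aws" "secretsmanager" "get-secret-value"
             "--secret-id" secret-id
             "--query" "SecretString"
             "--output" "text")
      :out
      (json/parse-string true)))

;; (fetch-rds-credentials "staging/rama-jdbc/rds")  ; secret name is made up
;; => {:username "..." :password "..." :host "..." :port 5432}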

The configuration complexity peaks when dealing with TCP ports. To manage multiple concurrent development contexts (e.g., a Dev feature branch and a Test branch), we use Git Worktrees.

This means ports cannot be the default ones; they must be dynamic but deterministic, dependent on:

  • The service name.
  • The path of the current working directory.

In total, this setup requires managing five dynamic ports:

  • SQL database server (Dev & Test instances)
  • Process Compose (Dev & Test instances)
  • DuckDB UI

Manually managing and passing these five dynamic ports and all the various configuration formats (JDBC URL, host/port, SQL init file) across Go, Babashka, Just, DuckDB, and the main Clojure application is a recipe for errors and lost development time.
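
To make "dynamic but deterministic" concrete, here is one way such a mapping could be sketched in Clojure. The hashing scheme and port range are assumptions for illustration, not BigConfig's actual algorithm.

(defn deterministic-port
  "Maps a service name plus the worktree path to a stable TCP port.
  The same inputs always yield the same port; different worktrees
  land on different ports with high probability."
  [service-name worktree-path]
  (let [base-port  20000    ; start of the reserved range (assumed)
        range-size 10000]   ; size of the reserved range (assumed)
    (+ base-port (mod (hash [service-name worktree-path]) range-size))))

;; Example: the dev database port for the current worktree.
;; (deterministic-port "postgres-dev" (System/getProperty "user.dir"))
;; returns the same port on every run for a given worktree.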

The BigConfig Solution: Configuration as Code


This is where BigConfig steps in. By leveraging a single, centralized configuration system written in Clojure, we can define the logic for all these variables, ensuring consistency across every tool.

BigConfig allows us to define a master configuration where:

  1. Port Calculation Logic is Centralized: A single Clojure function takes the service name and the worktree path as input and deterministically calculates the correct TCP port for the current worktree (as in the earlier sketch).
  2. Format Transformation is Automated: The same central configuration can (see the sketch after this list):
    • Calculate the host and port.
    • Automatically assemble the JDBC URL for Clojure tools.
    • Generate the specific SQL initialization file required by DuckDB.
  3. Secure Credential Injection: The logic to securely fetch credentials from AWS Secrets Manager is handled once, and the resulting secrets are injected into the configurations that need them, for both the Go migrations and the local server.
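
Items 1 and 2 could look roughly like the following, reusing the deterministic-port function from the earlier sketch. The database name, file name, and ATTACH statement are illustrative assumptions, not rama-jdbc's actual configuration.

(defn db-config
  "Derives every format the tools need from a single host/port pair."
  [worktree-path]
  (let [host "localhost"
        port (deterministic-port "postgres-dev" worktree-path)]
    {:host        host
     :port        port
     ;; tools that want a full JDBC URL (database name is assumed):
     :jdbc-url    (format "jdbc:postgresql://%s:%d/rama" host port)
     ;; DuckDB only reads an init SQL file, so generate the statement here
     ;; (the exact ATTACH syntax depends on the DuckDB extension in use):
     :duckdb-init (format "ATTACH 'host=%s port=%d dbname=rama' AS pg (TYPE postgres);"
                          host port)}))

(defn write-duckdb-init!
  "Writes the generated statement to the init file DuckDB is started with."
  [{:keys [duckdb-init]}]
  (spit "duckdb-init.sql" duckdb-init))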

The biggest win is the developer experience. The entire end-to-end setup, which was a “nightmare” of manual steps, is now reduced to a single, idempotent command:

Terminal window
bb build

The Babashka task runner uses BigConfig to coordinate everything: calculating ports, generating configuration files, and ensuring full parity between the local development container and the GitHub CI Runner. The “build” step becomes a zero-cost operation that simply ensures your environment is perfectly configured and ready to go.
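
To give a feel for the wiring, the bb.edn below shows how a build task could chain these steps. The task names and bodies are hypothetical placeholders, not the real rama-jdbc task file.

{:tasks
 {build         {:doc  "Idempotently prepare the local dev environment"
                 :task (do
                         (run 'ports)          ; derive the worktree-specific ports
                         (run 'render-config)  ; JDBC URLs, DuckDB init SQL, Process Compose files
                         (run 'migrate))}      ; fetch credentials, run sql-migrate
  ports         {:task (println "calculating deterministic ports...")}
  render-config {:task (println "generating configuration files...")}
  migrate       {:task (println "running the Go migrations...")}}}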

Ultimately, development productivity is measured by the speed of the feedback loop. Even with a complex, port-heavy environment, the Clojure REPL workflow remains lightning-fast.

Most of the time, development is spent hot-reloading code:

  • Change code → Evaluate expression → Inspect result. This takes milliseconds.

Crucially, when a database change is necessary, BigConfig’s automation keeps the environment responsive:

  • Need a new migration → Rebuild the database. This, too, takes milliseconds, thanks to the deterministic and automated environment setup.
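
For a sense of what that loop looks like in an editor, here is a hypothetical Rich-comment block; the function names and data are made up for illustration and are not part of the rama-jdbc API.

(comment
  ;; Change code, then evaluate the expression under the cursor
  ;; (names and data here are purely illustrative):
  (handle-outbox-event {:id 42 :type :order-created})

  ;; Need a schema change? Rebuild the local database from the REPL
  ;; (again, the function name is hypothetical):
  (rebuild-db!)
  )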

BigConfig doesn’t just manage complexity; it preserves the core advantage of Clojure — the immediate, fluid feedback loop of the REPL — even when faced with the configuration hurdles of a complex, modern, polyglot application. It allows you to focus on the code, not the configuration.

You can verify the developer experience described here: just clone rama-jdbc inside big-container.

Terminal window
docker run -it --rm ghcr.io/amiorin/big-container
git clone https://github.com/amiorin/rama-jdbc workspaces/rama-jdbc
cd workspaces/rama-jdbc
# The first run takes minutes, and devenv progress is not shown in the terminal.
bb build
# The second run takes milliseconds
time bb build
# Start postgres and run the migrations for the dev environment
just pc-dev &
# Run the tests for the test environment
clojure -M:shared:test

Would you like a follow-up on this topic? What are your thoughts? I'd love to hear about your experiences.