All software systems we build rely on some sort of configuration file. These configuration files change depending on the environment in which the service is deployed. They allow us to build a single deployable unit that can run in multiple environments without any code change: we change the configuration file per environment, point the service to the external location where that file lives, and the service uses it to bootstrap itself.
Configuration files become unwieldy if not managed well, and incorrect configuration values are one of the major causes of system downtime. Most teams don't write tests for their configuration, so bugs are often discovered only in higher environments.
I was seeing the same problem in one of my projects. There was a lack of clarity on which configuration properties change between environments and which remain the same. Also, in local and lower environments I don't mind database credentials living in the configuration files, but in higher environments I don't want them present in the code.
Our services have to run in at least 5 environments.
- Local: This is the developer machine. There is no Kubernetes involved here.
- Dev: A Kubernetes dev cluster for development and integration
- QA: A Kubernetes cluster for quality assurance
- UAT: A Kubernetes cluster for business user testing
- Production: A Kubernetes cluster for production
For local we also have two profiles – local and local-vpn. The local-vpn profile makes it easy for developers to use services deployed on the dev Kubernetes cluster for local development.
In each microservice we have 4 configuration files.
- application.yml
- application-local.properties
- application-local-vpn.properties
- common.properties
The `application.yml` defines which configuration file will be used for each profile.
```yaml
spring:
  config:
    name: my-service
    import: common.properties
  profiles:
    active: local-vpn
---
spring:
  config:
    activate:
      on-profile: local
    import: application-local.properties
---
spring:
  config:
    activate:
      on-profile: local-vpn
    import: application-local-vpn.properties
---
spring:
  config:
    activate:
      on-profile: dev
    import: /etc/config/my-service-dev.properties
---
spring:
  config:
    activate:
      on-profile: uat
    import: /etc/config/my-service-uat.properties
---
spring:
  config:
    activate:
      on-profile: prod
    import: /etc/config/my-service-prod.properties
```
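Note that `local-vpn` is the default profile here, which suits local development. For the Kubernetes environments the profile has to be switched at deployment time. One common way to do that (a sketch, not necessarily how our charts are wired) is to set the `SPRING_PROFILES_ACTIVE` environment variable on the container, which Spring Boot's relaxed binding maps to `spring.profiles.active`:

```yaml
# Hypothetical pod spec fragment: SPRING_PROFILES_ACTIVE overrides the
# local-vpn default from application.yml via Spring Boot's relaxed binding.
containers:
  - name: my-service
    image: registry.example.com/my-service:1.0.0  # hypothetical image
    env:
      - name: SPRING_PROFILES_ACTIVE
        value: dev
```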
The `common.properties` file has the base properties that don't change between environments. An example of such a file is shown below.
```properties
spring.application.name=my-service
server.port=19999
server.shutdown=graceful
spring.lifecycle.timeout-per-shutdown-phase=30s
management.endpoints.web.exposure.include=diskSpace,ping,health,auditevents,beans,info,metrics,env,prometheus
server.compression.enabled=true
server.compression.mime-types=application/json,application/xml,text/html,text/xml,text/plain
spring.jackson.property-naming-strategy=SNAKE_CASE
## Postgres Database
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.PostgreSQLDialect
spring.jpa.hibernate.ddl-auto=none
spring.jpa.show-sql=true
## Redis
spring.redis.timeout=1000
## swagger-ui
springdoc.swagger-ui.path=/api-docs/swagger-ui
springdoc.api-docs.path=/api-docs
server.use-forward-headers=true
server.forward-headers-strategy=framework
## Service specific configuration
myservice.config.message=hello
```
Next, our `application-local.properties` adds local development configuration on top of the base properties. It can also override base properties if required.
```properties
## Postgres Datasource
spring.datasource.url=jdbc:postgresql://localhost:5432/myservicedb
spring.datasource.username=postgres
spring.datasource.password=postgres
## Redis configuration
spring.redis.host=localhost
spring.redis.port=6379
spring.redis.ssl=false
```
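For the local profile to work, Postgres and Redis need to be running on the developer machine. A minimal Docker Compose sketch that lines up with the values above could look like this (the file and image versions are just an illustration, not part of our setup):

```yaml
# docker-compose.yml (hypothetical): local Postgres and Redis matching the
# host, port, and credential values in application-local.properties.
services:
  postgres:
    image: postgres:14
    environment:
      POSTGRES_DB: myservicedb
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
    ports:
      - "5432:5432"
  redis:
    image: redis:6
    ports:
      - "6379:6379"
```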
Similarly, our `application-local-vpn.properties` adds local development configuration, but instead of relying on locally running services it relies on the ones made available over the VPN.
```properties
## Postgres Datasource
spring.datasource.url=jdbc:postgresql://dev-xyz.aps1.rds.amazonaws.com:5432/myservicedb
spring.datasource.username=postgres
spring.datasource.password=&YGJBJUIKJHNBBBBNNKKNK
## Redis configuration
spring.redis.host=dev-xyz.aps1.cache.amazonaws.com
spring.redis.port=6379
spring.redis.ssl=true
spring.redis.timeout=5000
```
As you can see, it uses AWS services available over the VPN instead of services running locally. We also override `spring.redis.timeout` to 5000 ms for this profile, since the VPN can be slow at times.
Each service also creates one more file, called `configmap.example`, to help the platform team know which properties they have to configure for each environment.
```properties
spring.profiles.active=
## Postgres Config
spring.datasource.url=
spring.datasource.username=
spring.datasource.password=
## Redis Config
spring.redis.host=
spring.redis.port=
spring.redis.ssl=
```
For the dev, uat, and prod environments our platform team creates a ConfigMap and exposes it as a volume at the configured path. The ConfigMap is defined in the environment-specific Helm values.yaml file.
```yaml
volumeMounts:
  - mountPath: /etc/config
    name: config-app-properties
    readOnly: true
config:
  my-service-dev.properties: |
    spring.profiles.active=dev
    spring.datasource.url=jdbc:postgresql://xyz.ap-south-1.rds.amazonaws.com:5432/myservicedb
    spring.redis.host=dev-xyz.aps1.cache.amazonaws.com
    spring.redis.port=6379
    spring.redis.ssl=true
```
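Under the hood, the chart turns the `config` block into a ConfigMap and mounts it into the pod. A sketch of what the rendered manifests might look like (the ConfigMap name is my assumption, chosen to line up with the `volumeMounts` entry above):

```yaml
# Hypothetical rendered output: the ConfigMap holds the properties file,
# and a volume in the Deployment's pod spec makes it available at /etc/config.
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-app-properties   # assumed name; must match the volume below
data:
  my-service-dev.properties: |
    spring.profiles.active=dev
    # ...remaining properties from values.yaml...
---
# Fragment of the Deployment's pod spec, matching the volumeMounts above.
volumes:
  - name: config-app-properties
    configMap:
      name: config-app-properties
```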
The database username and password are picked from a secret store and set as environment variables, so they are not shown here.
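Because Spring Boot's relaxed binding maps environment variables such as `SPRING_DATASOURCE_USERNAME` onto `spring.datasource.username`, credentials can be injected from a Kubernetes Secret without ever appearing in the properties files. A sketch, assuming a Secret named my-service-db (the Secret name and keys are hypothetical):

```yaml
# Hypothetical pod spec fragment: credentials come from a Secret and are
# surfaced as environment variables that Spring Boot binds to
# spring.datasource.username / spring.datasource.password.
env:
  - name: SPRING_DATASOURCE_USERNAME
    valueFrom:
      secretKeyRef:
        name: my-service-db
        key: username
  - name: SPRING_DATASOURCE_PASSWORD
    valueFrom:
      secretKeyRef:
        name: my-service-db
        key: password
```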
Why not Spring Cloud Config Server?
I decided not to go with Spring Cloud Config Server for the following reasons:
- Having a centralized configuration store adds friction and can become a bottleneck
- Spring Cloud Config Server does not give you dynamic configuration reload out of the box, and too much effort is required to do it right: you need Spring Cloud Bus, refresh scopes, a message broker like Kafka, and webhook handling. We are fine restarting our services to reload configuration for the time being.
- We get traceability in our setup as well, since all configuration is maintained in Helm values.yaml files that are stored in a Git repository.
- Our current setup is simple. I like simple stuff.