Automating TeamCity installation in Kubernetes on AWS

TeamCity is a great product, but while the normal method of configuring it through the UI is relatively easy, it's also not the best option in these days of automation and "everything-as-code". In this post I'll explore just how automated a TeamCity installation can be made.

What you'll need

  • A Kubernetes cluster - if you don't have one already then try kops
  • An empty database schema and the credentials to access it
  • Helm - the Kubernetes package manager
  • A GitHub repository to store your TeamCity Kotlin DSL in, along with a private deploy key that has write access to this repo.
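The deploy key itself can be generated with ssh-keygen. A sketch (the filename and comment are just suggestions; remember to add the public half as a deploy key with write access in the GitHub repo settings):

```shell
# Generate an RSA key pair in PEM format for use as a GitHub deploy key
ssh-keygen -t rsa -b 4096 -m PEM -N "" -f github-key.pem -C "teamcity-deploy-key"

# The public half goes into GitHub -> repo Settings -> Deploy keys
cat github-key.pem.pub
```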

Automation areas

There are basically four areas that need to be looked at:

  • Instance/container provisioning
  • The TeamCity server and agent software installation
  • The TeamCity server configuration settings (plugins, users, groups)
  • The project and build configuration

The first two are quite easy. We'll use Helm to spin up both server and agents, but the same is easily achievable with something like CloudFormation and Chef cookbooks or Ansible roles if that is more to your liking.

The third and fourth areas are where it kind of falls apart. The server configuration settings are stored in a combination of XML files and the database, and while TeamCity has an option to pull project configuration in from git, for that to work you have to configure a VCS root (typically through the UI), including the secrets needed to access it, and switch configuration synchronization on.

Assuming you have the prerequisites, let's begin.

First let's create a new namespace in Kubernetes for teamcity:
kubectl create namespace teamcity

Then add two Kubernetes secrets, one containing the private key (here github-key.pem) and one containing our database properties file.
kubectl -n teamcity create secret generic github-key-secret --from-file=github-key.pem

Create a database.properties file from this template:

connectionProperties.user=<YOUR_DB_USER>
connectionProperties.password=<YOUR_DB_PASSWORD>
connectionUrl=jdbc:mysql://<YOUR_DB_HOST>:3306/<SCHEMA>
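If you prefer to script this step, the file can be rendered from environment variables instead of being edited by hand. A sketch (the variable names and fallback values are placeholders):

```shell
# Render database.properties from environment variables; the :- fallbacks
# are placeholders and should never reach a real cluster
cat > database.properties <<EOF
connectionProperties.user=${DB_USER:-tcuser}
connectionProperties.password=${DB_PASSWORD:-changeme}
connectionUrl=jdbc:mysql://${DB_HOST:-db.example.com}:3306/${DB_SCHEMA:-teamcity}
EOF
```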

Then store it as a secret:
kubectl -n teamcity create secret generic tc-db-properties --from-file=./database.properties

Remember to delete the file afterwards, and see https://kubernetes.io/docs/concepts/configuration/secret/ for the risks associated with storing sensitive information as Kubernetes secrets.

The Helm Chart

If you're unfamiliar with Helm charts: a chart is a way to define, install and upgrade Kubernetes-based applications. This one is made up of several YAML files that describe the various components using templating.

.
└── teamcity
    ├── Chart.yaml
    ├── templates
    │   ├── agent_deployment.yaml
    │   ├── _helpers.tpl
    │   ├── server_configmap.yaml
    │   ├── server_deployment.yaml
    │   ├── server_pvc.yaml
    │   └── server_service.yaml
    └── values.yaml

We'll be creating two build agents (agent_deployment.yaml), a TeamCity server (server_deployment.yaml), a service to expose it to the world (server_service.yaml) and a persistent volume claim (server_pvc.yaml).

Configuration is stored in values.yaml:

# Default values for teamcity.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
server:
  name: tc-server
  replicaCount: 1
  plugins:
    teamcity-oauth-1.1.6.zip: https://github.com/pwielgolaski/teamcity-oauth/releases/download/teamcity-oauth-1.1.6/teamcity-oauth-1.1.6.zip
    teamcity-kubernetes-plugin.zip: https://teamcity.jetbrains.com/guestAuth/app/rest/builds/buildType:TeamCityPluginsByJetBrains_TeamCityKubernetesPlugin_Build20172x,tags:release/artifacts/content/teamcity-kubernetes-plugin.zip
    slackintegration.zip: https://github.com/alexkvak/teamcity-slack/releases/download/v1.1.8/slackIntegration.zip
  image:
    repository: jetbrains/teamcity-server
    tag: latest
    pullPolicy: IfNotPresent
  service:
    name: teamcity-server
    type: LoadBalancer
    servicePort: 8111
  persistentVolume:
    enabled: true
    accessModes:
    - ReadWriteOnce
    annotations: {}
    # If defined, PVC must be created manually before volume will be bound
    existingClaim: ""
    mountPath: /data/teamcity_server/datadir
    size: 5Gi
    storageClass: ""
    subPath: ""
  resources: {}
    #limits:
    #  cpu: 100m
    #  memory: 128Mi
    #requests:
    #  cpu: 100m
    #  memory: 128Mi
agent:
  name: tc-agent
  replicaCount: 2
  image:
    repository: jetbrains/teamcity-agent
    tag: latest
    pullPolicy: IfNotPresent

We expose the resulting server through a service of type: LoadBalancer. Once we're done, we'll be able to access the TeamCity service through an AWS ELB.
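server_service.yaml itself isn't reproduced in this post; given the values above it might look something like this sketch:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.server.service.name }}
  labels:
    app: {{ template "name" . }}
    component: {{ .Values.server.name }}
    release: {{ .Release.Name }}
spec:
  type: {{ .Values.server.service.type }}
  ports:
    - port: {{ .Values.server.service.servicePort }}
      targetPort: 8111
  selector:
    app: {{ template "name" . }}
    component: {{ .Values.server.name }}
    release: {{ .Release.Name }}
```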

In addition to this, we have a server_configmap.yaml defining the script that is used to download the plugins specified above.
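The configmap can be as simple as a Helm loop over the server.plugins map. A sketch (the real template may differ):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ template "fullname" . }}
data:
  download_plugins.sh: |-
    #!/bin/sh
    # Fetch every plugin declared under server.plugins in values.yaml
    {{- range $name, $url := .Values.server.plugins }}
    wget -O "/plugins/{{ $name }}" "{{ $url }}"
    {{- end }}
```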

We'll use these in server_deployment.yaml:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{ template "fullname" . }}
  labels:
    app: {{ template "name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    component: {{ .Values.server.name }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  replicas: {{ .Values.server.replicaCount }}
  template:
    metadata:
      labels:
        app: {{ template "name" . }}
        component: {{ .Values.server.name }}
        release: {{ .Release.Name }}
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/server_configmap.yaml") . | sha256sum }}
    spec:
      initContainers:
        - name: download-plugins
          image: busybox
          command:
            - "sh"
            - "/download_plugins.sh"
          volumeMounts:
            - name: plugins
              mountPath: /plugins
            - name: config
              mountPath: /download_plugins.sh
              subPath: download_plugins.sh
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.server.image.repository }}:{{ .Values.server.image.tag }}"
          imagePullPolicy: {{ .Values.server.image.pullPolicy }}
          env:
            - name: TEAMCITY_SERVER_OPTS
              value: ""
          ports:
            - containerPort: 8111
          resources:
{{ toYaml .Values.server.resources | indent 12 }}
          volumeMounts:
          - name: data
            mountPath: /data/teamcity_server/datadir
            readOnly: false
          - name: config
            mountPath: /data/teamcity_server/datadir/config/database.properties
            subPath: database.properties
            readOnly: true
          - name: plugins
            mountPath: /data/teamcity_server/datadir/plugins
            readOnly: false
          - name: secret-volume
            readOnly: false
            mountPath: /data/teamcity_server/secrets
      volumes:
      - name: secret-volume
        secret:
          secretName: github-key-secret
      - name: plugins
        emptyDir: {}
      - name: config
        secret:
          secretName: tc-db-properties
      - name: data
      {{- if .Values.server.persistentVolume.enabled }}
        persistentVolumeClaim:
          claimName: {{ if .Values.server.persistentVolume.existingClaim }}{{ .Values.server.persistentVolume.existingClaim }}{{- else }}{{ template "fullname" . }}{{- end }}
      {{- else }}
        emptyDir: {}
      {{- end -}}

What we are doing here is first running a busybox init container to download the plugins specified in values.yaml, then starting the actual TeamCity container with the various volumes mounted into it.

There are a few special volumeMounts specified here:

Our database.properties secret is exposed to the server:

          - name: config
            mountPath: /data/teamcity_server/datadir/config/database.properties
            subPath: database.properties
            readOnly: true

The downloaded plugins are mounted and will be available to the TeamCity server:

          - name: plugins
            mountPath: /data/teamcity_server/datadir/plugins
            readOnly: false

Finally, the private part of the deploy key is mounted:

          - name: secret-volume
            readOnly: false
            mountPath: /data/teamcity_server/secrets

Spinning it all up with Helm
helm install teamcity/ --namespace teamcity

You should see output similar to:

NAME:   idolized-squirrel
LAST DEPLOYED: Wed Aug  1 09:49:35 2018
NAMESPACE: teamcity
STATUS: DEPLOYED

And quite a few resources in status <pending>.

Wait a little bit for AWS to create your EBS volume and the load balancer, then run
kubectl -n teamcity get services -o wide

You should now see the external address of the load balancer:

NAME       TYPE           CLUSTER-IP      EXTERNAL-IP                                                              PORT(S)          AGE       SELECTOR
teamcity   LoadBalancer   100.65.213.41   a6f5d40bf955f11e8b5d20af1d1cd185-419810296.eu-west-1.elb.amazonaws.com   8111:31735/TCP   42m       app=teamcity,component=tc-server,release=idolized-squirrel

Configure TeamCity to use project settings from VCS

Now, unfortunately, we'll have to do a few steps by hand.

Paste the load balancer address into a browser (remember the port) and you should be presented with a screen asking you to accept the license and input an Administrator username and password.

Having logged in as our new Administrator user, we'll configure TeamCity to use our predefined VCS root and store its project configuration there.

Click Administration in the top right corner

Select the <Root Project> and click VCS Roots in the left menu.
Set up a new VCS root using a custom key and specify /data/teamcity_server/secrets/github-key.pem as the private key path.

Having done that, select Show more and then Versioned settings in the left menu.

You should see a screen like the one below, but with synchronization disabled. Change that to enabled and configure it as shown.

[Screenshot: the Versioned Settings configuration screen]

Click the Apply button and TeamCity will store the current settings in your git repo. I've used Kotlin in the example, but you can also store the settings as XML.

To add subprojects, clone this repo and edit the settings in your favorite IDE. For an introduction to using Kotlin to set up projects in TeamCity, see https://confluence.jetbrains.com/display/TCD18/Kotlin+DSL. The documentation is severely lacking in examples, however, and in practice you'll want to make a change in the UI and then view the generated DSL (look for the "View DSL" button) to get familiar with the syntax.
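For a first impression of what the DSL looks like, here is a minimal .teamcity/settings.kts with a single build configuration (a sketch against the 2018.1 DSL; the project and step names are made up):

```kotlin
import jetbrains.buildServer.configs.kotlin.v2018_1.*
import jetbrains.buildServer.configs.kotlin.v2018_1.buildSteps.script

version = "2018.1"

project {
    buildType(HelloWorld)
}

// A build configuration with a single command-line step
object HelloWorld : BuildType({
    name = "Hello World"

    steps {
        script {
            scriptContent = "echo Hello from TeamCity"
        }
    }
})
```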

As a minimum you'll probably also want to add some users, and go to Agents in the top menu to authorize the two build agents we've installed.

Conclusion

Unfortunately, that is about as far as TeamCity automation goes when it comes to installation. While you can edit the XML that defines the server configuration, it needs to stay in sync with the data stored in the database. It's worth pointing out that the settings XML is so tied to the DB that you can't even install a new TeamCity server and reuse an existing database without also restoring a backup of the XML. Backups can be scheduled under Administration -> Backups.

Does that mean you shouldn't automate it? Not at all. While the initial installation has a few manual steps, running TeamCity in a container and having as much as possible defined in code is still a great step up from a manually provisioned server, and the real wins come when you start scripting your projects with Kotlin and keeping the code in a git repo instead of using the UI. More on that in a later article.