Commit 9b10a961 authored by Daniel Stone

Update README.md

It's been a while, and this repo has somewhat outgrown its initial
purpose. Update our README.md to explain more about the different parts
of the repo.

Fixes: #1

Signed-off-by: Daniel Stone <daniels@collabora.com>
parent 8ebce615
# freedesktop.org Helm deployment
This repository holds the configuration for the deployment of our GitLab service using Helm, as well as auxiliary services such as Grafana and the GitLab runners.
## Deploying GitLab itself
First, you will need access to the `fdo-gitlab` Kubernetes project, which you'll have if you're an admin.
Make sure you have this repository checked out, as well as the `helm-gitlab-omnibus` and `helm-gitlab-secrets` repositories. This repository holds the public master configuration; the Omnibus repository holds our fork of the GitLab Omnibus Helm chart which has been updated to support newer versions, as well as adapted to our deployment. This chart is no longer maintained upstream, but works well enough for us whilst we gradually move to GitLab's [cloud-native chart](https://docs.gitlab.com/charts/). The secrets repository contains passwords and API keys which cannot be made public.
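As a sketch, assuming the three repositories are checked out side by side (the commands later in this README use relative `../` paths); the URLs here are illustrative, so substitute the real remotes:
```
# URLs are illustrative -- substitute the real remotes; the secrets repo is private.
$ git clone https://gitlab.freedesktop.org/freedesktop/helm-gitlab-config.git
$ git clone https://gitlab.freedesktop.org/freedesktop/helm-gitlab-omnibus.git
$ git clone <helm-gitlab-secrets remote>
```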
Most configuration changes are made to `config.yaml` in this repository.
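Purely as a hypothetical illustration of the shape of that file (the real keys are whatever `helm-gitlab-omnibus`'s `values.yaml` defines, so check there before editing):
```
# Hypothetical keys -- the actual schema comes from the Omnibus chart's values.yaml.
baseDomain: freedesktop.org
gitlab:
  externalUrl: https://gitlab.freedesktop.org/
```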
Check that you can see the running services:
```
$ kubectl get deployments
NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
gitlab-prod-cainjector                      1/1     1            1           4d4h
gitlab-prod-cert-manager                    1/1     1            1           4d4h
gitlab-prod-gitlab                          1/1     1            1           524d
gitlab-prod-gitlab-postgresql               1/1     1            1           524d
gitlab-prod-gitlab-redis                    1/1     1            1           524d
gitlab-prod-grafana                         1/1     1            1           4d
gitlab-prod-nginx-ingress-controller        3/3     3            3           4d4h
gitlab-prod-nginx-ingress-default-backend   2/2     2            2           4d4h
gitlab-prod-prometheus-server               1/1     1            1           4d1h
```
Change into the Omnibus chart directory, make any changes, and check the changes with a dry run:
```
$ cd helm-gitlab-omnibus
$ helm dependency update
$ helm upgrade --dry-run -f ../helm-gitlab-config/config.yaml -f ../helm-gitlab-secrets/secrets.yaml gitlab-prod .
```
This will spew out the generated Kubernetes chart, which you can double-check. If you're happy with it, you can deploy for real by removing `--dry-run`.
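For reference, the live deployment is the same invocation without the flag:
```
$ helm upgrade -f ../helm-gitlab-config/config.yaml -f ../helm-gitlab-secrets/secrets.yaml gitlab-prod .
```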
Once it's up, wait for the instance to come back:
```
$ kubectl get deployment -w gitlab-prod-gitlab
```
You'll see AVAILABLE as 0 whilst GitLab restarts, then come back up as 1 when it's started.
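Illustratively, the watch output will look something like this (ages and counts will vary):
```
NAME                 READY   UP-TO-DATE   AVAILABLE   AGE
gitlab-prod-gitlab   0/1     1            0           524d
gitlab-prod-gitlab   1/1     1            1           524d
```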
You can run `kubectl get pods` to discover the name of the K8s pod that GitLab is running in. Once you have the name of the pod, you can run various commands in it, such as looking at the logs: `kubectl exec gitlab-prod-gitlab-307067958-z7s47 -c gitlab -it gitlab-ctl tail`
or opening a Ruby console and destroying as much data as possible: `kubectl exec gitlab-prod-gitlab-307067958-z7s47 -c gitlab -it gitlab-rails console production`
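To discover the pod name in the first place, the listing looks roughly like this (the name suffix, status, and age shown are illustrative):
```
$ kubectl get pods
NAME                                 READY   STATUS    RESTARTS   AGE
gitlab-prod-gitlab-307067958-z7s47   1/1     Running   0          5h
```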
## Deploying GitLab runners
We currently have two fleets of GitLab runners - one hosted on [Packet](https://www.packet.com) and administered by us, and another hosted on [Hetzner](https://www.hetzner.com) and administered by the GStreamer Foundation.
The configuration and deployment of the Packet runners is automated and maintained in this repository.
Runners are created with `gitlab-runner-provision/gitlab-runner-packet.sh`, which requires environment variables to be set for the Packet project ID and API key, as well as the gitlab.fd.o shared-runner registration token. This script will generate a cloud-init YAML file to provision the runner, create a new server instance with Packet, and deploy it.
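A hedged invocation sketch; the environment variable names here are guesses, so check the script for the exact names it reads:
```
# Variable names are illustrative -- see gitlab-runner-packet.sh for the real ones.
$ export PACKET_PROJECT_ID=<project id>
$ export PACKET_API_TOKEN=<api key>
$ export RUNNER_REGISTRATION_TOKEN=<gitlab.fd.o shared-runner token>
$ ./gitlab-runner-provision/gitlab-runner-packet.sh
```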
The runners should generally not be reconfigured on the fly over SSH; instead, delete and reprovision them.
## Creating a new GitLab deployment
Something bad has happened! Time to rebuild from scratch. :(
First, and this is **important**, make sure you manually zap the liveness/readiness checks inside the Helm chart. If you don't do this, when you get to the 'stop GitLab' point, K8s will zap the pod (because the service has failed) and restart it. This is not what you want.
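As a sketch, the stanzas to zap in the chart's GitLab deployment template look roughly like this (the endpoint and port are illustrative; check the template for the real ones):
```
# Comment out (or delete) stanzas of this shape -- values are illustrative.
#        livenessProbe:
#          httpGet:
#            path: /help
#            port: 8005
#        readinessProbe:
#          httpGet:
#            path: /help
#            port: 8005
```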
Anyway.
Install the Helm chart from scratch:
```
$ cd helm-gitlab-omnibus
$ helm repo add jetstack https://charts.jetstack.io/
$ helm dependency update
$ helm install -f ../helm-gitlab-config/config.yaml -f ../helm-gitlab-secrets/secrets.yaml --name gitlab-prod .
```
Then find the GitLab pod and open a shell in it:

```
$ kubectl get pods
$ kubectl exec gitlab-prod-gitlab-307067958-z7s47 -c gitlab -it /bin/bash
```
Pull `/etc/gitlab/gitlab-secrets.json` from the `helm-gitlab-secrets` repo manually, and pull the latest backup from Google storage.
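A sketch of fetching that backup, assuming `gsutil` access; the bucket name is illustrative:
```
# Bucket name is illustrative -- use the real backup bucket.
$ gsutil cp gs://fdo-gitlab-backups/1520868391_2018_03_12_10.5.4_gitlab_backup.tar ~/tmp/
```
When you have these locally, push them into the pod: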
```
$ kubectl cp gitlab-secrets.json gitlab-prod-gitlab-65c89cc6b7-bmgmx:/tmp/ -c gitlab
$ kubectl cp ~/tmp/1520868391_2018_03_12_10.5.4_gitlab_backup.tar gitlab-prod-gitlab-65c89cc6b7-bmgmx:/tmp/ -c gitlab
```

Once the backup has been restored, restart GitLab and check the installation:

```
$ gitlab-ctl restart
$ gitlab-rake gitlab:check SANITIZE=true
```
If you're fortunate, this has in fact worked. At this point, you can re-enable the readiness/liveness checks, push this with `helm upgrade` as per above, and cross your fingers that you've saved the day.
This process has in fact worked once before, so it's probably fine. For backup caveats, see https://gitlab.com/charts/charts.gitlab.io/issues/96.
Good luck!