This project uses the standard workflow: run the `wmcs.toolforge.k8s.component.build` cookbook to build the container image, then update the image version in `deployment/values`. Commit those changes to the repository and get them merged in Gerrit. Finally, run the `wmcs.toolforge.k8s.component.deploy` cookbook to deploy the updated image to the cluster.

For local development, follow these steps:
```
# if using lima-kilo (kind)
$ docker build -t maintain-kubeusers . && kind load docker-image maintain-kubeusers:latest -n toolforge
$ ./deploy.sh local
```
Tests are built on pytest and are normally run with tox. To run them, install tox by your favorite method and run `tox` at the top level of this repository.
Tests work anywhere because they use recorded mocks of the network interactions with a Kubernetes API server (usually an instance of minikube). These are recorded using vcrpy, integrated via pytest-vcrpy, which adapts vcrpy to pytest (exposing the cassettes as fixtures, etc.).
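To make the cassette idea concrete, here is a small, self-contained sketch of record-once, replay-later. This is NOT vcrpy's actual implementation (vcrpy stores YAML cassettes and matches on more request attributes); the class name, JSON storage, and `live_call` parameter are all illustrative assumptions:

```python
import json

class Cassette:
    """Toy model of a VCR cassette: record a live response once,
    then replay it on every later run with no network needed."""

    def __init__(self, path):
        self.path = path
        try:
            with open(path) as f:
                self.interactions = json.load(f)
        except FileNotFoundError:
            self.interactions = {}  # no recording yet: first run will record

    def request(self, method, uri, live_call=None):
        key = f"{method} {uri}"
        if key in self.interactions:
            return self.interactions[key]  # replay mode: no API server needed
        if live_call is None:
            raise RuntimeError(f"no recording for {key}; re-record cassettes")
        response = live_call()  # record mode: hit the real server once
        self.interactions[key] = response
        with open(self.path, "w") as f:
            json.dump(self.interactions, f)
        return response
```

This is why the cassettes must be re-recorded whenever the application's API interactions change: a request with no matching recording cannot be replayed.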
You will have to update the cassettes for tests to pass any time you change this application's interactions with the Kubernetes API. Unfortunately, this is not as convenient as a single command: it requires an LDAP setup (with a now-obsolete RFC enabled, because that is how WMCS LDAP is configured) and a properly spun-up lima-kilo test environment.
The steps are below:
```
$ docker build -f Dockerfile.test -t mk-test:testcase . && kind load docker-image mk-test:testcase -n toolforge
$ ./deploy.sh vcr-recording
```
Find the pod with `kubectl get pods -n maintain-kubeusers`, then get a shell on it with `kubectl -n maintain-kubeusers exec -it <pod name> -- /bin/ash`. Inside the pod, run `source venv/bin/activate`, then `rm tests/cassettes/*` just to make sure you have a clean slate, and run `pytest --in-k8s`.
Copy the freshly recorded cassettes to `/data/project` inside the pod, like `cp -r tests/cassettes /data/project/`. Then log out of your pod terminal (since that should all be done if all your tests passed), delete the cassettes in your active repo (`rm tests/cassettes/*`), and replace them with `cp ~/.toolforge-lima-kilo/chroot/data/project/cassettes/* tests/cassettes/`.
Finally, run `tox` on the changed repo to make sure the tests do, in fact, pass now.

A local LDAP setup should not be needed in most cases, but if you require one, MediaWiki Vagrant is your friend. You will need Vagrant installed.
```
$ vagrant roles enable striker
$ vagrant provision
$ vagrant forward-port 1389 389
```

The last command exposes the Vagrant VM's LDAP to the host. Then reverse-tunnel it into the Kubernetes node:

```
$ ssh -i $(minikube ssh-key) docker@$(minikube ip) -R 2389:localhost:1389
```

That shell must remain open to keep proxying your LDAP into the Kubernetes node. If you have set up minikube the same as for updating the VCR cassettes, you will now have a working "WMCS LDAP".
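With that many forwarded hops (Vagrant VM → host → Kubernetes node), it is easy for one link to be down. A quick stdlib sketch for sanity-checking that a forwarded port is reachable; this helper is illustrative and not part of this repository, and the port numbers come from the steps above:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# After `vagrant forward-port 1389 389`, the VM's LDAP should answer on
# the host:       port_open("localhost", 1389)
# After the ssh -R tunnel, it should answer inside the minikube node on
# port 2389 (check from a shell on the node, not from the host).
```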