Application Demo: Takahē
This tutorial explains how to run Takahē on APPUiO Cloud.
If you aren’t familiar with issuing commands in a terminal session, we recommend learning the basics before continuing with this tutorial.
Requirements
To follow this guide, please make sure that you have the following tools installed:
oc
You can download the OpenShift command-line tool directly from APPUiO Cloud by opening the help menu (marked with a question mark) and selecting the "Command line tools" entry.
About the Application
Takahē is a new ActivityPub server designed for efficient use on small- to medium-size installations; it allows you to host multiple domains on the same infrastructure.
Step 1: Create a Project
All of the following steps currently work only on the Exoscale APPUiO Zone, because the Application Catalog service is only available there.
Follow these steps to log in to APPUiO Cloud from your terminal:
- Log in to the APPUiO Cloud console:
oc login --server=https://api.${zone}.appuio.cloud:6443
You can find the exact URL of your chosen zone in the APPUiO Cloud Portal.
This command displays a URL on your terminal:
You must obtain an API token by visiting https://oauth-openshift.apps.${zone}.appuio.cloud/oauth/token/request
- Open the link shown above in your browser.
- Click "Display token" and copy the login command shown under "Log in with this token".
- Paste the oc login command on the terminal:
oc login --token=sha256~_xxxxxx_xxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxx-X \
  --server=https://api.${zone}.appuio.cloud:6443
- Create a new project called "[YOUR_USERNAME]-application-demo":
oc new-project "[YOUR_USERNAME]-application-demo"
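If you want to double-check which project is currently active before deploying anything, the standard oc project command prints it:
oc project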
Step 2: Deploy the application
To deploy the application we will use standard Kubernetes objects.
Save the example YAML Kubernetes resources into a file and apply it with oc apply -f <myfile.yaml>.
- First, we need a database and an object storage bucket:
→ Replace [YOUR_USERNAME] before applying to the cluster, as bucket names need to be unique.
Service ordering from the VSHN Application Catalog:
apiVersion: exoscale.appcat.vshn.io/v1
kind: ExoscalePostgreSQL
metadata:
  name: example-app
spec:
  writeConnectionSecretToRef:
    name: postgresql-creds
---
apiVersion: appcat.vshn.io/v1
kind: ObjectBucket
metadata:
  name: example-app
spec:
  parameters:
    bucketName: [YOUR_USERNAME]-my-example-app-bucket
    region: ch-gva-2
  writeConnectionSecretToRef:
    name: objectbucket-creds
This will create a PostgreSQL DBaaS instance with default settings, and an object storage bucket. See the AppCat docs for PostgreSQL and the AppCat docs for ObjectBucket for more information.
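Provisioning the database can take a few minutes. One way to check progress (not part of the ordering itself, just standard oc usage) is to wait until the connection secrets referenced in the manifests above appear:
oc get secret postgresql-creds objectbucket-creds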
- And we need some other supporting services for Takahē:
Supporting services:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inbucket
  labels:
    app.kubernetes.io/name: inbucket
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: inbucket
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app.kubernetes.io/name: inbucket
    spec:
      containers:
        - name: inbucket
          image: docker.io/inbucket/inbucket:latest
          imagePullPolicy: Always
          env:
            - name: INBUCKET_MAILBOXNAMING
              value: full
          ports:
            - containerPort: 9000
              name: http
              protocol: TCP
            - containerPort: 2500
              name: smtp
              protocol: TCP
          resources:
            limits: {}
            requests: {}
---
apiVersion: v1
kind: Service
metadata:
  name: inbucket-web
  labels:
    app.kubernetes.io/name: inbucket
spec:
  type: ClusterIP
  sessionAffinity: None
  ports:
    - name: web
      port: 8080
      protocol: TCP
      targetPort: http
  selector:
    app.kubernetes.io/name: inbucket
---
apiVersion: v1
kind: Service
metadata:
  name: inbucket-smtp
  labels:
    app.kubernetes.io/name: inbucket
spec:
  type: ClusterIP
  sessionAffinity: None
  ports:
    - name: smtp
      port: 2500
      protocol: TCP
      targetPort: 2500
  selector:
    app.kubernetes.io/name: inbucket
This will create an Inbucket mail catcher to capture the emails sent by Takahē.
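To make sure the mail catcher is ready before continuing, you can watch the Deployment roll out with a standard oc command:
oc rollout status deployment/inbucket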
- Before we can deploy the application, a few parameters need to be replaced:
→ The secret value in the takahe-secrets secret (replace CHANGEMESECRET with a random string, for example generated with pwgen 32).
→ Replace CHANGEMEBUCKETNAME with the name of the bucket ordered in step 1.
Application deployment with all other needed resources:
apiVersion: v1
kind: Secret
metadata:
  name: takahe-secrets
stringData:
  TAKAHE_SECRET_KEY: CHANGEMESECRET
  TAKAHE_EMAIL_SERVER: smtp://user:pass@inbucket-smtp:2500/
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: takahe-config
data:
  TAKAHE_MEDIA_BACKEND: "s3://sos-ch-gva-2.exo.io/CHANGEMEBUCKETNAME"
  TAKAHE_USE_PROXY_HEADERS: "true"
  TAKAHE_AUTO_ADMIN_EMAIL: myuser@example.com
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: demo-app
  replicas: 1
  template:
    metadata:
      labels:
        app.kubernetes.io/name: demo-app
    spec:
      containers:
        - name: webserver
          image: jointakahe/takahe:0.6
          args:
            - "gunicorn"
            - "takahe.wsgi:application"
            - "-w"
            - "6"
            - "-b"
            - "0.0.0.0:8000"
          envFrom:
            - configMapRef:
                name: takahe-config
            - secretRef:
                name: takahe-secrets
            - secretRef:
                name: objectbucket-creds
          env:
            - name: TAKAHE_DEBUG
              value: "false"
            - name: TAKAHE_DATABASE_SERVER
              valueFrom:
                secretKeyRef:
                  name: postgresql-creds
                  key: POSTGRESQL_URL
          ports:
            - containerPort: 8000
              name: web
          resources:
            requests:
              cpu: 10m
            limits:
              memory: "1024Mi"
              cpu: 1
          livenessProbe:
            httpGet:
              path: /
              port: 8000
            periodSeconds: 5
          readinessProbe:
            httpGet:
              path: /
              port: 8000
            initialDelaySeconds: 5
            periodSeconds: 5
          startupProbe:
            httpGet:
              path: /
              port: 8000
            initialDelaySeconds: 2
            failureThreshold: 30
            periodSeconds: 2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stator
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: stator
  replicas: 1
  template:
    metadata:
      labels:
        app.kubernetes.io/name: stator
    spec:
      containers:
        - name: stator
          image: jointakahe/takahe:0.6
          args:
            - python3
            - manage.py
            - runstator
          envFrom:
            - configMapRef:
                name: takahe-config
            - secretRef:
                name: takahe-secrets
            - secretRef:
                name: objectbucket-creds
          env:
            - name: TAKAHE_DATABASE_SERVER
              valueFrom:
                secretKeyRef:
                  name: postgresql-creds
                  key: POSTGRESQL_URL
          resources:
            requests:
              cpu: 10m
            limits:
              memory: "1024Mi"
              cpu: 1
---
apiVersion: batch/v1
kind: Job
metadata:
  name: migrate
spec:
  ttlSecondsAfterFinished: 120
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: webserver
          image: jointakahe/takahe:0.6
          args: ["python3", "manage.py", "migrate"]
          ports:
            - containerPort: 8000
          envFrom:
            - configMapRef:
                name: takahe-config
            - secretRef:
                name: takahe-secrets
            - secretRef:
                name: objectbucket-creds
          env:
            - name: TAKAHE_DATABASE_SERVER
              valueFrom:
                secretKeyRef:
                  name: postgresql-creds
                  key: POSTGRESQL_URL
          resources:
            requests:
              cpu: 10m
            limits:
              memory: "1024Mi"
              cpu: 1
---
apiVersion: v1
kind: Service
metadata:
  name: example-app
  labels:
    app.kubernetes.io/name: demo-app
spec:
  type: ClusterIP
  sessionAffinity: None
  ports:
    - name: web
      port: 443
      protocol: TCP
      targetPort: web
  selector:
    app.kubernetes.io/name: demo-app
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: example-app
spec:
  port:
    targetPort: web
  to:
    kind: Service
    name: example-app
    weight: 100
  wildcardPolicy: None
  tls:
    termination: edge
    insecureEdgeTerminationPolicy: Redirect
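If you prefer to do the substitutions on the command line, the following sketch shows one possible way. It assumes the manifests above were saved to a file called takahe.yaml (the filename is just an example), uses openssl instead of pwgen to generate the random string, and relies on GNU sed syntax:
SECRET=$(openssl rand -hex 32)                    # random value for TAKAHE_SECRET_KEY
BUCKET="[YOUR_USERNAME]-my-example-app-bucket"    # the bucket name ordered in step 1
sed -i \
  -e "s/CHANGEMESECRET/${SECRET}/" \
  -e "s|CHANGEMEBUCKETNAME|${BUCKET}|" \
  takahe.yaml
oc apply -f takahe.yaml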
- Now wait until your pods appear with the status "Running":
oc get pods --watch
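The migrate Job defined above should also finish successfully; you can check its output with oc logs. Note that the Job is removed two minutes after completion because of ttlSecondsAfterFinished: 120, so check it soon after it runs:
oc logs job/migrate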
- Last but not least, retrieve the URL of your Takahē instance:
oc get route example-app -o jsonpath='{.spec.host}'
You can now register on your Takahē instance at /auth/signup/ with the user myuser@example.com (this user will automatically become the admin user, as per the TAKAHE_AUTO_ADMIN_EMAIL configuration).
The registration link can be retrieved in the Inbucket instance:
oc port-forward svc/inbucket-web 8080:8080
And then open a browser at localhost:8080.
The email will contain a wrong URL; copy just the /auth/reset/… part and append it to the URL retrieved in step 5.
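Put together on the command line, this looks roughly as follows; the /auth/reset/… path is a placeholder for whatever your confirmation email actually contains:
HOST=$(oc get route example-app -o jsonpath='{.spec.host}')
echo "https://${HOST}/auth/signup/"                      # registration page
echo "https://${HOST}/auth/reset/<part-from-the-email>"  # confirmation link, replace the placeholder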
This example configuration isn’t meant for a production-ready service.
What’s next?
For a production-ready service, we recommend implementing and configuring the following:
- While the database is already being backed up because it’s a managed service, you should still back up your persistent volume with K8up.
- Add some monitoring for your application.
- Use your own URL with Let’s Encrypt.
- Choose appropriate sizing:
  - Persistent storage volume size
  - Maybe you want to use an RWX storage class, which allows you to scale your application by running multiple Pods.
- Configure proper requests and limits for your Pod (see the sketch after this list).
- Use a pinned image version and set the imagePullPolicy to IfNotPresent.
- Keep your app up to date! Install patches and upgrades as they become available. One way to achieve that is to use a GitOps-style deployment, either push or pull, and leverage the mighty Renovate Bot to keep your image references clean.
- Using a Helm Chart for production deployments or a Kustomize setup for different stages can be an advantage.
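As an illustration for the requests-and-limits point above, the web server Deployment from this tutorial could be adjusted in place with a standard oc command; the values shown here are placeholders, not a sizing recommendation:
oc set resources deployment/webserver --requests=cpu=100m,memory=256Mi --limits=cpu=1,memory=1Gi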
Especially for this example application:
- Review the documentation of the image used to learn more about its configuration and possibilities.
- Takahē needs a proper mail-sending configuration in production; we recommend using a managed mail-sending service for that, for example Mailgun.
Once you’re done evaluating this example application, clean up again so you don’t incur any unwanted costs:
oc delete project [YOUR_USERNAME]-application-demo
We’re happy to help you run your application. Contact us and let us know how we can help.