Migration Tutorial: From APPUiO Public to APPUiO Cloud
This document provides an overview of all the tasks required to move your applications from APPUiO Public to APPUiO Cloud.
Main Differences
Before starting the migration of your applications from APPUiO Public to APPUiO Cloud, please check the document explaining all the major differences between both platforms.
Removed API versions
Several Kubernetes API versions have been removed since Kubernetes 1.11, the version on which APPUiO Public was based. This means that manifests from APPUiO Public can be exported, but need to be updated before being applied on APPUiO Cloud. Often only the API version or group of an object has to be changed. For some new API versions, however, a different structure or other default values apply; keep this in mind when changing the API version of a manifest object. A selected, non-exhaustive list of API changes relevant to the deployment migration:
Name | Removed | Available |
---|---|---|
ReplicaSet | extensions/v1beta1, apps/v1beta1, apps/v1beta2 | apps/v1 |
StatefulSet | apps/v1beta1, apps/v1beta2 | apps/v1 |
Deployment | extensions/v1beta1, apps/v1beta1, apps/v1beta2 | apps/v1 |
DaemonSet | extensions/v1beta1, apps/v1beta2 | apps/v1 |
NetworkPolicy | extensions/v1beta1 | networking.k8s.io/v1 |
PriorityClass | scheduling.k8s.io/v1alpha1, scheduling.k8s.io/v1beta1 | scheduling.k8s.io/v1 |
RBAC resources | rbac.authorization.k8s.io/v1alpha1, rbac.authorization.k8s.io/v1beta1 | rbac.authorization.k8s.io/v1 |
Ingress | extensions/v1beta1, networking.k8s.io/v1beta1 | networking.k8s.io/v1 |
CertificateSigningRequest | certificates.k8s.io/v1beta1 | certificates.k8s.io/v1 |
CustomResourceDefinition | apiextensions.k8s.io/v1beta1 | apiextensions.k8s.io/v1 |
See the Kubernetes Deprecated API Migration Guide for more detailed information.
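For example, a Deployment exported from APPUiO Public may still use the removed extensions/v1beta1 API group. Besides changing apiVersion, note that spec.selector is mandatory in apps/v1. A minimal sketch of the change, using a hypothetical myapp deployment:

# Exported from APPUiO Public (removed API version):
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myapp
spec:
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - image: myapp:latest
        name: myapp
---
# Updated for APPUiO Cloud:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  selector: # required in apps/v1
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - image: myapp:latest
        name: myapp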
DNS
Customers transferring their domains from APPUiO Public to APPUiO Cloud should be aware of the following:
- Lowering your TTLs (Time To Live) 24 hours ahead of time reduces the time your change takes to propagate. You can limit downtime by lowering that value, so that caching servers check back more frequently; 60 seconds (1 minute) is typically a good value. Remember to set your TTL back to the previous value after the migration.
- There are different CNAME resource records to point to depending on which APPUiO Zone you migrate the application to; please refer to the APPUiO Zones reference document, where the corresponding CNAME entry is specified.
- If you want to redirect the top level of a domain, consider creating ALIAS or ANAME records if your DNS provider supports either of those record types. If that isn't an option either, you can create an A record pointing to the currently active IP address. Be aware, however, that the ingress IP address isn't guaranteed to be stable.
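To check whether a TTL change has propagated, you can query the record directly; a quick sketch using dig, with www.example.com standing in for your own hostname (the second field of each answer line is the remaining TTL in seconds):

$ dig +noall +answer www.example.com CNAME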
Let’s Encrypt
In APPUiO Cloud, only Ingress and certificates.cert-manager.io/v1 objects can request certificates; it isn't possible to request Let's Encrypt certificates in Route objects.
The manifest below specifies an Ingress object with such a requirement:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
cert-manager.io/cluster-issuer: letsencrypt-production (1)
name: ${DEPLOYMENT_NAME}
namespace: ${NAMESPACE}
spec:
rules:
- host: ${ROUTE_HOST}
http:
paths:
- backend:
service:
name: nginx
port:
number: 80
path: /
pathType: ImplementationSpecific
tls:
- hosts:
- ${ROUTE_HOST}
secretName: ${ROUTE_HOST}
1 | Annotation providing automatic provisioning of certificates on APPUiO Cloud. |
You can find more information in the Getting a Certificate through Let’s Encrypt page.
For reference, it was possible to configure TLS certificates in Route objects in APPUiO Public, using a manifest similar to the following. This is no longer valid in APPUiO Cloud.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
name: ${DEPLOYMENT_NAME}
labels:
app: ${CI_ENVIRONMENT_SLUG}
annotations:
kubernetes.io/tls-acme: "${ROUTE_TLS_ACME}" (1)
spec:
host: ${ROUTE_HOST}
port:
targetPort: http
tls:
insecureEdgeTerminationPolicy: Redirect
termination: edge
to:
kind: Service
name: ${DEPLOYMENT_NAME}
weight: 100
wildcardPolicy: None
1 | Annotation providing automatic provisioning of certificates on APPUiO Public. |
Storage
APPUiO Cloud has different requirements regarding storage. This section provides more information.
Accessing RWO volumes from different pods
When migrating your volumes to APPUiO Cloud, please follow these guidelines:
- The deployment strategy must change from RollingUpdate to Recreate, because mounting the volume on different nodes simultaneously isn't possible, or at least not with commonly used file systems such as ext4. This page in the Kubernetes documentation provides more information about this.
- Add an affinity rule to run on the same node as the worker pod. This allows access to the same RWO/block volume, because the file system is mounted at the node level, and ensures read-after-write consistent access to the same file system, because the same kernel is used.
This can be useful for jobs, but should be avoided for independent application parts; see the Kubernetes documentation on Inter-pod affinity and anti-affinity for the reasons. To create an affinity rule for all the pods at once, you can use the app.kubernetes.io/component label, as shown below:
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app.kubernetes.io/component
            operator: In
            values:
            - backend
        topologyKey: kubernetes.io/hostname
- Perform the migration itself. There are three major mechanisms for this:
  - Migration with rsync
  - Migration through jobs
  - Using continuous sync

The following sections provide information about each strategy.
Migration with oc rsync
The first variant to migrate your application storage from APPUiO Public to APPUiO Cloud uses oc rsync. Rsync is a widely used and well-tested tool for synchronizing files between systems.
The manifests below create the required objects on the destination side. If you use your own manifests instead, at least one pod with the RWO volume mounted must be running for oc rsync to connect to.
---
apiVersion: v1
kind: Namespace
metadata:
name: rsync-test
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: rsync-destination
namespace: rsync-test
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
namespace: rsync-test
name: rsync-destination
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: edit
subjects:
- kind: ServiceAccount
name: rsync-destination
namespace: rsync-test
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: rsync-destination
namespace: rsync-test
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
volumeMode: Filesystem
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app.kubernetes.io/name: rsync-destination
name: rsync-destination
namespace: rsync-test
spec:
selector:
matchLabels:
app.kubernetes.io/name: rsync-destination
strategy:
type: Recreate
template:
metadata:
labels:
app.kubernetes.io/name: rsync-destination
spec:
containers:
- image: registry.access.redhat.com/rhel7/rhel-tools
imagePullPolicy: IfNotPresent
name: rhel-tools
command:
- tail
- -f
- /dev/null
volumeMounts:
- mountPath: /rsync-destination
name: rsync-destination
volumes:
- name: rsync-destination
persistentVolumeClaim:
claimName: rsync-destination
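With the destination objects in place, the copy itself can be started from wherever the source data is accessible. A minimal sketch of the invocation, assuming you're logged in to the destination cluster and the source data is available locally under /rsync-source/:

$ DEST_POD=$(oc -n rsync-test get pod -l app.kubernetes.io/name=rsync-destination -o jsonpath={.items[0].metadata.name})
$ oc -n rsync-test rsync --delete=true /rsync-source/ $DEST_POD:/rsync-destination/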
Job-Based Migration
The second variant for migrating your storage from APPUiO Public to APPUiO Cloud uses jobs. The manifest below defines the objects required on the source side.
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: rsync-source
namespace: rsync-test
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
volumeMode: Filesystem
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app.kubernetes.io/name: rsync-source (1)
name: rsync-source
namespace: rsync-test
spec:
selector:
matchLabels:
app.kubernetes.io/name: rsync-source
strategy:
type: Recreate
template:
metadata:
labels:
app.kubernetes.io/name: rsync-source
spec:
affinity:
podAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app.kubernetes.io/name
operator: In
values:
- rsync-source
topologyKey: kubernetes.io/hostname
containers:
- image: registry.access.redhat.com/rhel7/rhel-tools
imagePullPolicy: IfNotPresent
name: rhel-tools
command:
- tail
- -f
- /dev/null
volumeMounts:
- mountPath: /rsync-source
name: rsync-source
volumes:
- name: rsync-source
persistentVolumeClaim:
claimName: rsync-source
---
apiVersion: batch/v1beta1 (2)
kind: CronJob
metadata:
labels:
app: rsync-copy
name: rsync-copy
namespace: rsync-test
spec:
concurrencyPolicy: Forbid
failedJobsHistoryLimit: 3
jobTemplate:
spec:
activeDeadlineSeconds: 7200
backoffLimit: 2
completions: 1
template:
metadata:
labels:
app.kubernetes.io/name: rsync-source
spec:
affinity:
podAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app.kubernetes.io/name
operator: In
values:
- rsync-source
topologyKey: kubernetes.io/hostname
containers:
- image: quay.io/openshift/origin-cli:4.8
imagePullPolicy: IfNotPresent
name: oc-rsync
command:
- /bin/bash
- -c
- |
#!/bin/bash
oc \
--server=$K8S_API \
--token=$K8S_TOKEN \
--namespace=$K8S_NAMESPACE \
rsync \
--delete=true \
/rsync-source/ \
"$(oc --server=$K8S_API --token=$K8S_TOKEN --namespace=$K8S_NAMESPACE get pod -l app.kubernetes.io/name=rsync-destination -o jsonpath={.items[0].metadata.name}):/rsync-destination/"
env:
- name: K8S_API
value: https://<kubernetes-api>:6443
- name: K8S_TOKEN
valueFrom:
secretKeyRef:
name: rsync-destination-oc-token
key: token
- name: K8S_NAMESPACE
value: rsync-test
volumeMounts:
- mountPath: /rsync-source
name: rsync-source
restartPolicy: Never
volumes:
- name: rsync-source
persistentVolumeClaim:
claimName: rsync-source
schedule: '@yearly'
startingDeadlineSeconds: 86400
successfulJobsHistoryLimit: 1
1 | Please refer to the Kubernetes documentation on common labels. |
2 | Use batch/v1 for OpenShift 4 instead. |
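Note that the CronJob reads the destination token from a Secret named rsync-destination-oc-token, which isn't part of the manifests above. A minimal sketch of creating it, assuming the rsync-destination service account from the previous section:

# On the destination cluster: read the service account token
$ DEST_TOKEN=$(oc -n rsync-test sa get-token rsync-destination)
# On the source cluster: store it in the Secret referenced by the CronJob
$ oc -n rsync-test create secret generic rsync-destination-oc-token --from-literal=token=$DEST_TOKEN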
Use the commands below to create a new job based on the definition above:
$ JOB_NAME="manual-$(date +%F-%H-%M)"
$ oc -n rsync-test create job --from=cronjob/rsync-copy $JOB_NAME
$ oc -n rsync-test get po
NAME READY STATUS RESTARTS AGE
manual1-8975l 0/1 Completed 0 2m9s
rsync-destination-6fd76657d8-6fjss 1/1 Running 0 41m
rsync-source-957bf555c-68jmn 1/1 Running 0 5m5s
$ oc -n rsync-test delete job $JOB_NAME
Check the job status with the following command:
$ oc -n <namespace> get job <myjob> -o jsonpath={.status.succeeded}
Continuous Sync
The third option to migrate your storage to APPUiO Cloud is to use a continuous sync strategy. The main benefit of this approach is that files are replicated immediately after they're created. If the destination pod dies, the sync pod also crashes, but is restarted automatically.
Use the manifests below to create the required objects.
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: rsync-source
namespace: rsync-test
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
volumeMode: Filesystem
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app.kubernetes.io/name: rsync-source (1)
name: rsync-source
namespace: rsync-test
spec:
selector:
matchLabels:
app.kubernetes.io/name: rsync-source
strategy:
type: Recreate
template:
metadata:
labels:
app.kubernetes.io/name: rsync-source
spec:
affinity:
podAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app.kubernetes.io/name
operator: In
values:
- rsync-continuous-sync
topologyKey: kubernetes.io/hostname
containers:
- image: registry.access.redhat.com/rhel7/rhel-tools
imagePullPolicy: IfNotPresent
name: rhel-tools
command:
- tail
- -f
- /dev/null
volumeMounts:
- mountPath: /rsync-source
name: rsync-source
volumes:
- name: rsync-source
persistentVolumeClaim:
claimName: rsync-source
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app.kubernetes.io/name: rsync-continuous-sync (1)
name: rsync-continuous-sync
namespace: rsync-test
spec:
selector:
matchLabels:
app.kubernetes.io/name: rsync-continuous-sync
strategy:
type: Recreate
template:
metadata:
labels:
app.kubernetes.io/name: rsync-continuous-sync
spec:
affinity:
podAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app.kubernetes.io/component
operator: In
values:
- backend
topologyKey: kubernetes.io/hostname
containers:
- image: quay.io/openshift/origin-cli:4.8
imagePullPolicy: IfNotPresent
name: oc-rsync
command:
- /bin/bash
- -c
- |
#!/bin/bash
oc \
--server=$K8S_API \
--token=$K8S_TOKEN \
--namespace=$K8S_NAMESPACE \
rsync \
--delete=true \
--watch=true \
/rsync-source/ \
"$(oc --server=$K8S_API --token=$K8S_TOKEN --namespace=$K8S_NAMESPACE get pod -l app.kubernetes.io/name=rsync-destination -o jsonpath={.items[0].metadata.name}):/rsync-destination/"
env:
- name: K8S_API
value: https://<kubernetes-api>:6443
- name: K8S_TOKEN
valueFrom:
secretKeyRef:
name: rsync-destination-oc-token
key: token
- name: K8S_NAMESPACE
value: rsync-test
volumeMounts:
- mountPath: /rsync-source
name: rsync-source
volumes:
- name: rsync-source
persistentVolumeClaim:
claimName: rsync-source
1 | Please refer to the Kubernetes documentation on common labels. |
Be aware that oc rsync has different options than rsync itself.
Options:
--compress=false: compress file data during the transfer
-c, --container='': Container within the pod
--delete=false: If true, delete files not present in source
--exclude=[]: When specified, exclude files matching pattern
--include=[]: When specified, include files matching pattern
--no-perms=false: If true, do not transfer permissions
--progress=false: If true, show progress during transfer
-q, --quiet=false: Suppress non-error messages
--strategy='': Specify which strategy to use for copy: rsync, rsync-daemon, or tar
-w, --watch=false: Watch directory for changes and resync automatically
File Integrity Check
After the migration, you should check the integrity of your data. Run the first command below inside the source volume to record checksums for each directory, then run the second inside the destination volume to verify them:
$ find . -type d -exec sh -c "cd '{}' && find . -maxdepth 1 -type f ! -name COPYSHA1SUMS -printf '%P\0' | xargs -r0 sha1sum -- > COPYSHA1SUMS" \;
$ cd <path> && find . -type d -exec sh -c "cd '{}' && echo '{}' && sha1sum -c COPYSHA1SUMS" \; > sha1sums-verify-log-$(date +%F-%H-%M).log 2>&1
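Once the verification succeeds, the checksum files can be removed again, for example with:

$ find . -type f -name COPYSHA1SUMS -delete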
Container Images
Since APPUiO Cloud is based on OpenShift 4, there are new requirements for your container images. This section contains all the required steps for adapting your images to the new environment.
On OpenShift 3
This section uses the skopeo tool for managing images and repositories.
- Get a user token at this URL: <origin-cluster-console>/oauth/token/request.
- Use the generated user token to authenticate to the registry on the command line. As the user token has enough privileges to read the image, a service account token isn't required.
$ skopeo login -u openshift -p <token> <origin-url>
Skopeo uses the docker auth config, so this should look like:
$ cat ~/.docker/config.json
{
  "auths": {
    "<origin-url>": {
      "auth": "...="
    }
  }
}
Check if the access is working:
$ skopeo inspect docker://<origin-url>/<namespace>/<image>:<image-tag>
This is a user token, and therefore it expires when you log out.
On OpenShift 4
On OpenShift 4 it's also possible to get a token from oauth-openshift.apps.<cluster>/oauth/token/display for read access, but this token doesn't grant enough privileges to write images.
Therefore, it's recommended to create a service account, grant it the system:image-builder role, and get the token from this service account.
$ oc -n <namespace> create sa image-upload
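Grant the service account push access; a sketch using oc policy, which should add it to the system:image-builders RoleBinding inspected below:

$ oc -n <namespace> policy add-role-to-user system:image-builder -z image-upload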
Get the token:
$ oc sa get-token -n <namespace> image-upload
Inspect the RoleBinding:
$ oc -n <namespace> get rolebinding system:image-builders -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
# ...
name: system:image-builders
namespace: <namespace>
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:image-builder
subjects:
# ...
- kind: ServiceAccount
name: image-upload
namespace: <namespace>
Login with the token:
$ skopeo login -u openshift -p $(oc -n <namespace> sa get-token image-upload) <destination-url>
Copy the image:
$ skopeo copy docker://<origin-url>/<namespace>/<image>:<image-tag> docker://<destination-url>/<namespace>/<image>:<image-tag>
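If you need to migrate all tags of an image rather than a single one, the copy can be scripted; a sketch assuming jq is installed:

$ for tag in $(skopeo list-tags docker://<origin-url>/<namespace>/<image> | jq -r '.Tags[]'); do
    skopeo copy docker://<origin-url>/<namespace>/<image>:$tag docker://<destination-url>/<namespace>/<image>:$tag
  done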
Remember to remove the service account after the migration.
OpenShift internal docker registry path
The OpenShift 4 internal docker registry is deployed in the namespace openshift-image-registry. This differs from OpenShift 3, where the internal docker registry was running in the default namespace. Due to that, the internal service name changed from docker-registry.default.svc:5000 to image-registry.openshift-image-registry.svc:5000.
How it looked on APPUiO Public:
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
name: ${CONTAINERNAME}
namespace: ${NAMESPACE}
spec:
template:
…
spec:
containers:
…
- image: docker-registry.default.svc:5000/${NAMESPACE}/${IMAGE}@sha256:${IMAGE_SHA}
…
triggers:
- imageChangeParams:
automatic: true
containerNames:
- ${CONTAINERNAME}
from:
kind: ImageStreamTag
name: ${IMAGE}:${IMAGE_TAG}
namespace: ${NAMESPACE}
lastTriggeredImage: docker-registry.default.svc:5000/${NAMESPACE}/${IMAGE}@sha256:${IMAGE_SHA}
type: ImageChange
And an example of how it should look on APPUiO Cloud:
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: ${CONTAINERNAME}
namespace: ${NAMESPACE}
spec:
template:
…
spec:
containers:
…
- image: image-registry.openshift-image-registry.svc:5000/${NAMESPACE}/${IMAGE}@sha256:${IMAGE_SHA} (1)
…
triggers:
- imageChangeParams:
automatic: true
containerNames:
- ${CONTAINERNAME}
from:
kind: ImageStreamTag
name: ${IMAGE}:${IMAGE_TAG}
namespace: ${NAMESPACE}
lastTriggeredImage: image-registry.openshift-image-registry.svc:5000/${NAMESPACE}/${IMAGE}@sha256:${IMAGE_SHA} (2)
type: ImageChange
1 | Replace in spec.template.spec.containers[].image the string docker-registry.default.svc:5000 with image-registry.openshift-image-registry.svc:5000 . |
2 | Replace in spec.triggers[].imageChangeParams.lastTriggeredImage the string docker-registry.default.svc:5000 with image-registry.openshift-image-registry.svc:5000 . |
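When updating many exported manifests, these replacements can be scripted; a minimal sketch using sed on a hypothetical exported file:

$ sed -i 's|docker-registry.default.svc:5000|image-registry.openshift-image-registry.svc:5000|g' exported-manifests.yaml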