Investigate and resolve fair use ratio violations for a project

As of 2022-05-01, APPUiO Cloud is billing CPU requests which violate the fair use ratio. See the APPUiO Cloud pricing documentation for details.

The OpenShift Web Console provides a dashboard showing all relevant data on requests, limits and effective usage. You find it at Observe → Dashboards → Select Kubernetes / Compute Resources / Namespaces (Pods). The direct link is console.<zone>.appuio.cloud/dev-monitoring?dashboard=grafana-dashboard-k8s-resources-namespace.

Prerequisites

For this guide, it’s assumed that:

  • You are logged in to your APPUiO Cloud project using the oc login command.

  • You are familiar with Kubernetes' Quantity format.

  • You are familiar with the basic usage of oc (or kubectl).
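
If you aren't logged in yet, the web console offers a ready-made login command under your user menu → Copy login command. A typical invocation looks like the following sketch; the API server URL pattern shown here is an assumption, so prefer the exact command copied from the console.

oc login --token=<your-token> --server=https://api.<zone>.appuio.cloud:6443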

Investigating CPU requests

Generally, for projects which violate the fair use ratio, APPUiO Cloud will show a warning whenever you create a new resource on the command line (for example with oc create).

The warnings show the cumulative memory-per-core ratio of your project.

Example warning when creating a deployment
W0504 16:36:06.622956  318587 warnings.go:70] Current memory to CPU ratio of 4000Mi/core in this namespace is below the fair use ratio of 4Gi/core

In this example, the project has a cumulative memory-per-core ratio of 4000Mi/core which is lower than the zone’s fair use ratio of 4Gi/core (4096Mi/core).
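
To see this first-hand, you could create a hypothetical deployment (the name ratio-demo and the nginx image are placeholders) which requests 1 CPU core but only 1Gi of memory. That's a ratio of 1024Mi/core, which should trigger the warning on a zone with a 4Gi/core fair use ratio:

oc create deployment ratio-demo --image=nginx
oc set resources deployment/ratio-demo --requests=cpu=1,memory=1Gi

Don't forget to clean up afterwards with oc delete deployment ratio-demo.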

APPUiO Cloud also emits Kubernetes Events for pods in projects which violate the fair use ratio. You can check the resource ratio events emitted by APPUiO Cloud in your project with

oc get events --field-selector=source=resource-ratio-controller

APPUiO Cloud will emit a new event for each new Pod in a project which violates the fair use ratio.

Example event
118m        Warning   TooMuchCPURequest   pod/PODNAME   Current memory to CPU ratio of 4000Mi/core in this namespace is below the fair use ratio of 4Gi/core

Events disappear after some time.
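
If you're actively redeploying, you can watch the event stream for new violations, or filter for the warning reason shown in the example above; combining both field selectors with a comma is standard oc behavior:

oc get events --field-selector=source=resource-ratio-controller --watch
oc get events --field-selector=source=resource-ratio-controller,reason=TooMuchCPURequest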

Manually calculate memory-per-core ratio

You can follow the steps in this section to manually calculate the memory-per-core ratio of your project or individual workloads in your project.

The memory-per-core ratio of a project is computed only from pods in state Running.

The following command will list the CPU and memory requests for all containers of all pods in your project:

oc get pods --field-selector=status.phase=Running \
-ocustom-columns="NAME:.metadata.name,\
CPU REQUESTS:.spec.containers[*].resources.requests.cpu,\
MEMORY REQUESTS:.spec.containers[*].resources.requests.memory"

For pods with multiple containers, the CPU and memory requests of all containers are shown separated by commas.

You can use the usual methods to select a subset of pods in your project. For example, you can list only pods with a specific label by appending --selector=<labelkey>=<labelvalue> to the command.
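
For example, assuming the acme-dns pods from the example below carry a hypothetical app=acme-dns label, you could restrict the listing to just those pods:

oc get pods --selector=app=acme-dns --field-selector=status.phase=Running \
-ocustom-columns="NAME:.metadata.name,\
CPU REQUESTS:.spec.containers[*].resources.requests.cpu,\
MEMORY REQUESTS:.spec.containers[*].resources.requests.memory"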

You can calculate the cumulative memory-per-core ratio for your project using the following steps:

  1. To simplify the calculation, normalize all the shown values to full CPU cores (for example, multiply by 0.001 for CPU requests postfixed with m) and MiB (for example, multiply by 1024 for memory requests in Gi). See the Kubernetes documentation for details on the format used for resource requests.

  2. Sum up the values for CPU REQUESTS and MEMORY REQUESTS across all pods. The result of sum(MEMORY REQUESTS) / sum(CPU REQUESTS) is the project's cumulative memory-per-core ratio.

Alternatively, you can sum up the values for CPU REQUESTS and MEMORY REQUESTS for each pod individually. In that case, you can calculate each pod's ratio by dividing the pod's summed memory requests by its summed CPU requests.
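
If you have jq installed, the following sketch automates these steps for the whole project. It assumes every container sets both a CPU and a memory request, and it only understands the quantity formats used in this guide (plain cores or m for CPU; Mi or Gi for memory); anything else makes it abort rather than miscalculate.

oc get pods --field-selector=status.phase=Running -o json | jq -r '
  # Convert a CPU quantity ("25m", "1", "0.5") to cores.
  def cores: if test("m$") then (.[:-1] | tonumber) / 1000 else tonumber end;
  # Convert a memory quantity ("100Mi", "1Gi") to MiB; abort on other units.
  def mib:
    if test("Gi$") then (.[:-2] | tonumber) * 1024
    elif test("Mi$") then .[:-2] | tonumber
    else error("unhandled memory unit: " + .) end;
  # Collect the requests of every container of every running pod.
  [ .items[].spec.containers[].resources.requests ]
  | (map(.cpu | cores) | add) as $cpu
  | (map(.memory | mib) | add) as $mem
  | "CPU: \($cpu) cores, memory: \($mem)Mi, ratio: \($mem / $cpu)Mi/core"'

For the example listing below, this prints a ratio of 4000Mi/core.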

Example calculation

Let’s assume you get the following as the output of the oc get pods command above.

Listing of pods with their requests from oc get pods
NAME                                    CPU REQUESTS   MEMORY REQUESTS
acme-dns-655d74cb85-ztxr5               25m,25m        100Mi,100Mi (1)
acme-dns-655d74cb85-vl39b               25m,25m        100Mi,100Mi (1)
(1) Requests for pods with multiple containers are shown as a comma-separated list in the output

First, you should normalize all values to full CPU cores and MiB (Mi). In the example, only the CPU requests have to be normalized, since all memory requests are already in MiB.

After normalization you end up with something like the following listing.

Normalized requests
NAME                                    CPU REQUESTS   MEMORY REQUESTS
acme-dns-655d74cb85-ztxr5               0.025,0.025    100,100
acme-dns-655d74cb85-vl39b               0.025,0.025    100,100

To calculate the memory-per-core ratio for a single pod, you can directly divide the sum of the pod’s memory requests by the sum of the pod’s CPU requests.

memory-per-core ratio for pod acme-dns-655d74cb85-ztxr5
CPU REQUESTS:     2*0.025 = 0.05
MEMORY REQUESTS:  2*100   = 200

RATIO:  200 / 0.05 = 4000

To calculate the project’s cumulative memory-per-core ratio, you can sum up the CPU requests and memory requests:

Summed requests for all pods in the project
CPU REQUESTS:    4*0.025 = 0.1
MEMORY REQUESTS: 4*100   = 400

Finally, you can calculate the memory-per-core ratio for the project by dividing the memory requests by the CPU requests:

memory-per-core ratio for the whole project
400 / 0.1 = 4000

Resolution

  • Make sure your deployments (and other resources which create workloads) configure CPU and memory requests which result in a memory-per-core ratio at or above the fair use ratio displayed for your zone in the portal.

To adjust the ratio of your workload to be at or above the displayed ratio, you can:

  • Lower the CPU requests while leaving the memory requests unchanged

  • Raise the memory requests while leaving the CPU requests unchanged

  • Lower the CPU requests and raise the memory requests
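
For example, to raise the memory requests while leaving the CPU requests unchanged: assuming the example pods above belong to a deployment named acme-dns, setting 128Mi per container at 25m CPU yields 128Mi / 0.025 cores = 5120Mi/core, which is above a 4Gi/core fair use ratio:

oc set resources deployment/acme-dns --requests=cpu=25m,memory=128Mi

Note that oc set resources applies the requests to every container in the pod template unless you select specific ones with --containers.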

See our documentation on workload requests and limits. Despite its location, this documentation also applies to workloads on APPUiO Cloud.

Additionally, you can also consult the Kubernetes documentation on managing resources for containers for details on configuring your workload’s requests.